A star schema really lies at the intersection of the relational model of data and the dimensional model of data. It's a way of starting with a dimensional model and mapping it into SQL tables that somewhat resemble the SQL tables you would get if you started from a relational model.
I say somewhat resemble because many relational design methodologies result in a normalized, or at least nearly normalized, design. A star schema departs significantly from full normalization, as the sketch below illustrates.
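For concreteness, here is a minimal sketch of what that mapping might produce. The table and column names (`fact_sales`, `dim_product`, and so on) are hypothetical, and the syntax is generic SQL:

```sql
-- Hypothetical star schema: one fact table surrounded by dimension tables.
-- Note that dim_product is deliberately denormalized: category_name is
-- repeated on every product row instead of living in its own table.

CREATE TABLE dim_date (
    date_key     INTEGER PRIMARY KEY,   -- surrogate key, e.g. 20240131
    full_date    DATE NOT NULL,
    month_name   VARCHAR(9) NOT NULL,
    year_number  SMALLINT NOT NULL
);

CREATE TABLE dim_product (
    product_key    INTEGER PRIMARY KEY, -- surrogate key
    product_name   VARCHAR(100) NOT NULL,
    category_name  VARCHAR(50) NOT NULL -- denormalized: repeats per product
);

CREATE TABLE fact_sales (
    date_key     INTEGER NOT NULL REFERENCES dim_date (date_key),
    product_key  INTEGER NOT NULL REFERENCES dim_product (product_key),
    quantity     INTEGER NOT NULL,
    sales_amount DECIMAL(12,2) NOT NULL
);
```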
Every departure from full normalization carries with it a consequent data update anomaly. (I'm including anomalies on insert, update, and delete operations under one umbrella.) Those anomalies don't have anything to do with which data model you started from.
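To make that concrete with the hypothetical `dim_product` table above: because `category_name` is stored redundantly, renaming a category means touching many rows, and a partial update leaves the dimension inconsistent:

```sql
-- Update anomaly in the denormalized dimension: renaming a category
-- must touch every product row that carries the old value.
UPDATE dim_product
SET    category_name = 'Outdoor Gear'
WHERE  category_name = 'Camping';

-- If any row is missed (a partial load, a concurrent insert with the old
-- value), the same category now exists under two names: an inconsistency
-- a fully normalized design would have made impossible.
```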
The comment on OLTP versus OLAP is relevant here: update anomalies have different impacts on performance and on programming difficulty in those two situations.
In addition to a star schema in an SQL database, there are dimensional database products that store data in a physical form unique to the product. With those products, you don't see a star schema so much as a direct implementation of the dimensional model, with an interface that may be peculiar to the product. Some of those interfaces allow OLAP operations to be completely point-and-click.
As a digression from your question: I once built a star schema as an intermediate step between an OLTP database that supported a transaction-based application and a data cube inside Cognos PowerPlay. Using standard ETL techniques, the combined transfer from the OLTP database to the star schema and then from the star schema to the data cube actually outperformed the direct transfer from the OLTP database to the data cube. This was an unexpected result.
Hope this helps.