Conventions (and the criticism against them) all have a reason behind them, so let’s run down some of the reasons behind these conventions:
- Interfaces are prefixed with I to differentiate interface types from their implementations – e.g., as mentioned above, there needs to be an easy way to distinguish between `Thing` and its interface `IThing`, and the convention serves that end (see the first sketch after this list).
- Interfaces are prefixed with I to differentiate them from abstract classes – there is ambiguity when you see the following code: `public class Apple : Fruit`. Without the convention one wouldn’t know whether `Apple` is inheriting from another class named `Fruit` or implementing an interface named `Fruit`, whereas `IFruit` makes this obvious: `public class Apple : IFruit`. The principle of least surprise applies.
- Not all uses of Hungarian notation are censured – early Hungarian notation used a prefix indicating the type of the object, followed by the variable name (sometimes with an underscore in between). This was useful in certain programming environments (think Visual Basic 4–6), but as true object-oriented programming grew in popularity it became impractical and redundant to encode the type in the name, especially once IntelliSense could show the type directly.
  Today Hungarian notation is acceptable for distinguishing UI elements from the actual data they present, and for grouping related UI elements, e.g., `txtObject` for a textbox and `lblObject` for the label associated with that textbox, while the data for the textbox is simply `Object` (see the second sketch after this list).

  I also have to point out that the original use of Hungarian notation wasn’t for encoding data types (that is Systems Hungarian notation) but for encoding the semantic use of a variable (Apps Hungarian notation). Read more in the Wikipedia entry on Hungarian Notation.
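To make the first two points concrete, here is a minimal C# sketch; `IFruit`, `Fruit`, `Apple`, and `Banana` are hypothetical names used only for illustration. Reading each class declaration on its own, the prefix immediately tells you whether the base name refers to an interface or a class.

```csharp
// The I prefix marks this as a contract rather than a base class.
public interface IFruit
{
    string Name { get; }
}

// An abstract base class carries no prefix, so it reads as a class.
public abstract class Fruit
{
    public abstract string Name { get; }
}

// The declaration alone makes it clear that Apple implements an interface...
public class Apple : IFruit
{
    public string Name => "Apple";
}

// ...whereas Banana clearly inherits from a base class.
public class Banana : Fruit
{
    public override string Name => "Banana";
}
```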
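And here is a minimal WinForms-style sketch of the UI-prefix usage described above, assuming a hypothetical `CustomerName` value purely for illustration: the `txt` and `lbl` prefixes mark the controls, while the plain, unprefixed name is reserved for the data they display.

```csharp
using System.Windows.Forms;

public class CustomerForm : Form
{
    // UI elements carry a prefix that identifies the kind of control...
    private readonly Label lblCustomerName = new Label { Text = "Customer name:" };
    private readonly TextBox txtCustomerName = new TextBox();

    public CustomerForm()
    {
        Controls.Add(lblCustomerName);
        Controls.Add(txtCustomerName);
    }

    // ...while the actual data keeps the plain name.
    public string CustomerName
    {
        get { return txtCustomerName.Text; }
        set { txtCustomerName.Text = value; }
    }
}
```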