Why does the C# compiler translate this != comparison as if it were a > comparison?

Short answer:

There is no “compare-not-equal” instruction in IL, so the C# != operator has no direct counterpart and cannot be translated literally.

There is, however, a “compare-equal” instruction (ceq, a direct counterpart to the == operator), so in the general case, x != y gets translated as its slightly longer equivalent (x == y) == false.

There is also a “compare-greater-than” instruction in IL (cgt) which allows the compiler to take certain shortcuts (i.e. generate shorter IL code), one being that inequality comparisons of objects against null, obj != null, get translated as if they were “obj > null”.

Let’s go into some more detail.

If there is no “compare-not-equal” instruction in IL, then how will the following method get translated by the compiler?

static bool IsNotEqual(int x, int y)
{
    return x != y;
}

As noted above, the compiler turns x != y into (x == y) == false:

.method private hidebysig static bool IsNotEqual(int32 x, int32 y) cil managed 
{
    ldarg.0   // x
    ldarg.1   // y
    ceq
    ldc.i4.0  // false
    ceq       // (note: two comparisons in total)
    ret
}
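
In C# terms, the IL above corresponds to this hand-written equivalent (the method name is mine, purely for illustration):

static bool IsNotEqualRewritten(int x, int y)
{
    // First ceq: compare x and y for equality.
    // Second ceq: compare that boolean result against false (0).
    return (x == y) == false;
}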

It turns out that the compiler does not always produce this fairly long-winded pattern. Let’s see what happens when we replace y with the constant 0:

static bool IsNotZero(int x)
{
    return x != 0;
}

The IL produced is somewhat shorter than in the general case:

.method private hidebysig static bool IsNotZero(int32 x) cil managed 
{
    ldarg.0    // x
    ldc.i4.0   // 0
    cgt.un     // (note: just one comparison)
    ret
}

The compiler can take advantage of the fact that signed integers are stored in two's complement (where, if the resulting bit patterns are interpreted as unsigned integers, which is what the .un suffix means, 0 has the smallest possible value), so it translates x != 0 as if it were unchecked((uint)x) > 0.
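
Here is a minimal sketch of that equivalence (the method name and test values are mine, for illustration):

static bool IsNotZeroRewritten(int x)
{
    // Reinterpreting the two's-complement bits as unsigned maps only
    // x == 0 to unsigned 0, so ">" agrees with "!=" for every input.
    return unchecked((uint)x) > 0;
}

static void Main()
{
    foreach (int x in new[] { 0, 1, -1, int.MinValue, int.MaxValue })
        System.Console.WriteLine($"{x,11}: {x != 0} / {IsNotZeroRewritten(x)}");
}

Both columns agree for every value, including the negative ones.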

It turns out the compiler can apply the same trick to inequality checks against null:

static bool IsNotNull(object obj)
{
    return obj != null;
}

The compiler produces almost the same IL as for IsNotZero:

.method private hidebysig static bool IsNotNull(object obj) cil managed 
{
    ldarg.0
    ldnull   // (note: this is the only difference)
    cgt.un
    ret
}

Apparently, the compiler is allowed to assume that the bit pattern of the null reference is the smallest bit pattern possible for any object reference.
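
On mainstream runtimes that bit pattern is in fact all zeros, which you can peek at yourself. The following is a sketch, assuming the System.Runtime.CompilerServices.Unsafe API is available on your target framework:

static void PrintNullBits()
{
    object obj = null;
    // Reinterpret the reference's bits as a native-sized integer
    // (assumption: Unsafe.As is available in your environment).
    System.IntPtr bits =
        System.Runtime.CompilerServices.Unsafe.As<object, System.IntPtr>(ref obj);
    System.Console.WriteLine(bits == System.IntPtr.Zero);  // prints True here
}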

This shortcut is explicitly mentioned in the Common Language Infrastructure Annotated Standard (1st edition, Oct 2003), on page 491, in a footnote to Table 6-4, “Binary Comparisons or Branch Operations”:

“cgt.un is allowed and verifiable on ObjectRefs (O). This is commonly used when comparing an ObjectRef with null (there is no “compare-not-equal” instruction, which would otherwise be a more obvious solution).”
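
For comparison, here is a sketch of the longer, ceq-based encoding the compiler would otherwise need for IsNotNull, following the same pattern as IsNotEqual above (this is not what the compiler actually emits):

.method private hidebysig static bool IsNotNull(object obj) cil managed 
{
    ldarg.0
    ldnull
    ceq       // obj == null
    ldc.i4.0  // false
    ceq       // (obj == null) == false
    ret
}

The cgt.un version saves two instructions, which is exactly the shortcut the footnote describes.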
