Strange definition of FALSE and TRUE, why? [duplicate]

The intent appears to be portability.

#define FALSE (1 != 1) // why not just define it as "false" or "0"?
#define TRUE (!FALSE)  // why not just define it as "true" or "1"?

These have boolean type in languages that support it (C++), while still yielding useful numeric values in those that don't (C — and even C99 and C11, which gained an explicit _Bool type, are often targeted by code that cannot assume <stdbool.h>).

Having booleans where possible is good for function overloading.

#define myUInt32 unsigned int // why not just use uint32_t from stdint?

That’s fine if <stdint.h> is available. You may take such things for granted, but it’s a big wide world out there! This code recognises that.

Disclaimer: Personally, I would stick to the standards and simply state that compilers released after 1990 are a prerequisite. But we don’t know what the underlying requirements are for the project in question.

TRWTF is that the author of the code in question did not explain this in comments alongside.
