What is the correct way to convert 2 bytes to a signed 16-bit integer?

If int is 16-bit then your version relies on implementation-defined behaviour if the value of the expression in the return statement is out of range for int16_t.
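To make that rule concrete, here is a minimal self-contained demo of the conversion in question (this is not your code, just the rule from C11 6.3.1.3p3):

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    unsigned int u = 0xFFFFu;   /* a value that is out of range for int16_t */
    int16_t s = (int16_t)u;     /* implementation-defined result, or an
                                   implementation-defined signal is raised */
    printf("%d\n", s);          /* typically -1 on two's-complement targets,
                                   but the standard does not guarantee it */
    return 0;
}
```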

However, the first version also has a similar problem: for example, if int32_t is a typedef for int and the input bytes are both 0xFF, then the result of the subtraction in the return statement is UINT_MAX, which causes implementation-defined behaviour when converted to int16_t.
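Here is a hypothetical illustration of that wrap-around; this is not the linked answer's exact code, just the same arithmetic with int32_t assumed to be a typedef for int:

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t hi = 0xFF, lo = 0xFF;
    uint32_t word = ((uint32_t)hi << 8) | lo;  /* 0xFFFF */
    uint32_t diff = word - 0x10000;            /* unsigned arithmetic: the
                                                  "negative" result wraps to
                                                  0xFFFFFFFF, i.e. UINT_MAX
                                                  when uint32_t is unsigned int */
    printf("%u\n", (unsigned)diff);            /* 4294967295 */
    /* converting diff to int16_t is then implementation-defined, as above */
    return 0;
}
```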

IMHO the answer you link to has several major issues.
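For completeness, here is a minimal sketch of one way to assemble the value with no implementation-defined conversion at any point (assuming the first byte is the most significant one; the function name is mine, not from the question or the linked answer):

```c
#include <stdint.h>

int16_t be16_to_int16(uint8_t msb, uint8_t lsb)
{
    uint16_t u = (uint16_t)(((unsigned)msb << 8) | lsb);

    if (u <= INT16_MAX)
        return (int16_t)u;  /* 0x0000..0x7FFF fits directly */

    /* 0x8000..0xFFFF: compute the two's-complement value without ever
       converting an out-of-range value to int16_t */
    return (int16_t)(u - INT16_MAX - 1) + INT16_MIN;
}
```

This works whether int is 16 or 32 bits wide, because every intermediate value stays within the range of the type it is converted to.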
