In this answer, zwol made this claim:
The correct way to convert two bytes of data from an external source into a 16-bit signed integer is with helper functions like this:
#include <stdint.h>

int16_t be16_to_cpu_signed(const uint8_t data[static 2]) {
    uint32_t val = (((uint32_t)data[0]) << 8) |
                   (((uint32_t)data[1]) << 0);
    return ((int32_t) val) - 0x10000u;
}

int16_t le16_to_cpu_signed(const uint8_t data[static 2]) {
    uint32_t val = (((uint32_t)data[0]) << 0) |
                   (((uint32_t)data[1]) << 8);
    return ((int32_t) val) - 0x10000u;
}
Which of the above functions is appropriate depends on whether the array contains a little endian or a big endian representation. Endianness is not the issue in question here; I am wondering why zwol subtracts 0x10000u from the uint32_t value after converting it to int32_t.
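For concreteness, here is the arithmetic I take the subtraction to be aiming at, traced for one byte pair of my own choosing (the byte values and the trace are my illustration, not part of the linked answer):

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* Example big-endian bytes, chosen so the sign bit of the 16-bit value is set. */
        const uint8_t data[2] = {0xFF, 0xFE};

        /* Same assembly step as in be16_to_cpu_signed: val ends up as 0xFFFE == 65534. */
        uint32_t val = ((uint32_t)data[0] << 8) | (uint32_t)data[1];

        /* Read as a two's-complement 16-bit pattern, 0xFFFE means -2, and
           65534 - 65536 is also -2. The subtraction below is done in long on
           purpose, to sidestep the unsigned-arithmetic question this post asks about. */
        printf("val = %lu, val - 0x10000 = %ld\n",
               (unsigned long)val, (long)val - 0x10000L);
        return 0;
    }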
Why is this the correct way?
How does it avoid the implementation-defined behavior when converting to the return type?
Since you can assume a 2's complement representation, how would this simpler cast fail: return (uint16_t)val;? (A sketch of this cast follows the naive snippet below.)
What is wrong with this naive solution:
    int16_t le16_to_cpu_signed(const uint8_t data[static 2]) {
        return (uint16_t)data[0] | ((uint16_t)data[1] << 8);
    }
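As I understand both of these variants, they end by handing a possibly out-of-range value to the int16_t return type; the sketch below walks through that for one concrete byte pair. The function names, the byte values, and the test harness are mine, chosen only for illustration:

    #include <stdint.h>
    #include <stdio.h>

    /* The "simpler cast" variant from the question, little-endian input assumed. */
    static int16_t naive_cast(const uint8_t data[static 2]) {
        uint32_t val = (uint32_t)data[0] | ((uint32_t)data[1] << 8);
        /* For data = {0xFE, 0xFF}, val is 0xFFFE (65534). (uint16_t)val is still
           65534, which exceeds INT16_MAX, so the implicit conversion to the
           int16_t return type is implementation-defined (C17 6.3.1.3). */
        return (uint16_t)val;
    }

    /* The naive solution quoted above, copied verbatim apart from the name. */
    static int16_t naive_or(const uint8_t data[static 2]) {
        /* On common platforms the uint16_t operands promote to int, the | is
           computed as int, and the same out-of-range conversion to int16_t
           happens on return whenever data[1] has its top bit set. */
        return (uint16_t)data[0] | ((uint16_t)data[1] << 8);
    }

    int main(void) {
        /* 0xFFFE little-endian, i.e. the 16-bit two's-complement pattern for -2. */
        const uint8_t bytes[2] = {0xFE, 0xFF};
        /* Prints whatever this particular implementation chooses; the usual
           two's-complement compilers show -2 for both, but the standard does
           not guarantee it. */
        printf("naive_cast: %d\n", (int)naive_cast(bytes));
        printf("naive_or:   %d\n", (int)naive_or(bytes));
        return 0;
    }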
Comments:

Converting an out-of-range value to int16_t is implementation-defined, so the naive approach isn't portable. – Storfer

In the first approach, 0xFFFF0001u can't be represented as int16_t, and in the second approach, 0xFFFFu can't be represented as int16_t. – Supporter

int16_t and int32_t are mandated to use 2's complement representation without padding bits. We are in known territory :) – Hydroxide

… intxx_t and uintxx_t would be a welcome improvement. I cannot imagine a downside for this. – Hydroxide

int16_t table[256][256]; – Strickle
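To make the 0xFFFF0001u remark above concrete, this is the unsigned wraparound as I understand it, assuming a 32-bit int so that 0x10000u has type unsigned int (the input value is my own example):

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* A positive input, e.g. bytes {0x00, 0x01} in big-endian order. */
        uint32_t val = 0x0001;

        /* In (int32_t)val - 0x10000u the int32_t operand is converted to
           unsigned int by the usual arithmetic conversions (with 32-bit int),
           so the subtraction wraps around instead of going negative. */
        uint32_t wrapped = (int32_t)val - 0x10000u;

        printf("wrapped = 0x%lX\n", (unsigned long)wrapped);  /* prints 0xFFFF0001 */

        /* Converting 0xFFFF0001 to the int16_t return type would then be the
           out-of-range, implementation-defined conversion the comment points at. */
        return 0;
    }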