I have code that runs on different platforms and seems to produce different results. I am looking for a proper explanation.
I expected casting to unsigned to work the same for float or double as for int.¹
Windows:
double dbl = -123.45;
int d_cast = (unsigned int)dbl;
// d_cast == -123
WinCE (ARM):
double dbl = -123.45;
int d_cast = (unsigned int)dbl;
// d_cast == 0
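(Editor's note: the direct cast is the source of the divergence. C17 §6.3.1.4/1 says a real floating value is truncated toward zero when converted to an integer type, and if the truncated value cannot be represented by the destination type the behavior is undefined. For unsigned int that means anything at or below -1.0 is undefined, so both results above are permitted. On x86 the conversion instruction typically produces the signed bit pattern; ARM's float-to-unsigned conversion typically saturates negative inputs to 0. These are observations about common implementations, not guarantees. A minimal sketch of where the defined/undefined boundary sits; the variable names are mine:)

double a = 123.45;
unsigned u1 = (unsigned)a;     // defined: truncates to 123, in range
double b = -0.5;
unsigned u2 = (unsigned)b;     // defined: truncates to 0, still in range
double c = -1.5;
// unsigned u3 = (unsigned)c;  // UB: truncated value -1 is out of range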
EDIT:
Thanks for pointing me in the right direction. Workaround:
double dbl = -123.45;
int d_cast = (unsigned)(int)dbl;
// d_cast == -123
// works on both.
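(Editor's note: spelled out step by step, the workaround stays on ground the standard covers at every conversion. The values in the comments assume a two's complement machine with 32-bit int, which is an assumption, not a guarantee:)

double dbl = -123.45;
int i = (int)dbl;          // defined: truncates toward zero, i == -123
unsigned u = (unsigned)i;  // defined: wraps modulo 2^32, u == 4294967173
int d_cast = (int)u;       // implementation-defined: -123 on two's complement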
Footnote 1: Editor's note: converting an out-of-range unsigned value to a signed type like int is implementation-defined, not undefined (C17 §6.3.1.3/3). So the assignment to d_cast is also not nailed down by the standard for cases where (unsigned)dbl ends up being a huge positive value on some particular implementation. (That path of execution contains UB anyway, so ISO C is already out the window in theory.) In practice, compilers on normal two's complement machines do what we expect and leave the bit pattern unchanged.
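If you want a conversion that never leaves what ISO C defines, range-check before casting. A minimal sketch; the helper name dbl_to_uint and the return-0 policy for out-of-range input (including NaN) are my choices, not anything the standard mandates:

#include <limits.h>

unsigned int dbl_to_uint(double d)
{
    /* In-range values (after truncation) convert with fully defined behavior.
       (double)UINT_MAX + 1.0 is exactly representable, so the bound is exact. */
    if (d >= 0.0 && d < (double)UINT_MAX + 1.0)
        return (unsigned int)d;
    return 0; /* out of range or NaN: substitute your own policy */
}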