A point of confusion is thinking that the - is part of the numeric constant.
In the code below, 0x80000000 is the numeric constant; its type is determined from that alone. The - is applied afterward and does not change the type.
#define INT32_MIN (-0x80000000)
long long bal = 0;
if (bal < INT32_MIN)
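A minimal sketch of the surprise, assuming a platform where int is 32 bits and long long is 64 bits. The macro here is the flawed definition from the question, not the standard INT32_MIN from <stdint.h>:

#include <stdio.h>

#define INT32_MIN (-0x80000000)   /* flawed definition under discussion */

int main(void) {
    long long bal = 0;
    /* With 32-bit int, -0x80000000 is an unsigned 0x80000000,
       so the comparison is against +2147483648, not -2147483648. */
    if (bal < INT32_MIN)
        puts("0 is \"less than INT32_MIN\"");   /* printed: the surprise */
    else
        puts("0 is not less than INT32_MIN");
    return 0;
}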
Raw unadorned numeric constants are positive.
If it is decimal, then the type assigned is the first of these that will hold it: int, long, long long.
If the constant is octal or hexadecimal, it gets the first type that holds it: int, unsigned, long, unsigned long, long long, unsigned long long. (A sketch of inspecting the chosen type follows below.)
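A small sketch, assuming C11 and a platform with 32-bit int and 64-bit long, that reports which type each constant received. TYPE_NAME is just a local helper for this illustration:

#include <stdio.h>

/* Helper for illustration: maps an expression's type to a string via C11 _Generic. */
#define TYPE_NAME(x) _Generic((x), \
    int: "int", \
    unsigned int: "unsigned int", \
    long: "long", \
    unsigned long: "unsigned long", \
    long long: "long long", \
    unsigned long long: "unsigned long long", \
    default: "other")

int main(void) {
    /* On a 32-bit-int, 64-bit-long platform: */
    printf("2147483648 -> %s\n", TYPE_NAME(2147483648));   /* long (decimal skips unsigned) */
    printf("0x80000000 -> %s\n", TYPE_NAME(0x80000000));   /* unsigned int */
    return 0;
}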
0x80000000, on OP's system, gets the type unsigned or unsigned long. Either way, it is some unsigned type.
-0x80000000 is also a non-zero value and, being of an unsigned type, it is greater than 0. When code compares that to a long long, the values on the two sides of the comparison are not changed (the unsigned value converts to long long without losing its value), so 0 < INT32_MIN is true.
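To make the wraparound concrete, a short sketch, again assuming 32-bit unsigned: negating the unsigned constant yields 2^32 - 0x80000000, which is 0x80000000, i.e. +2147483648, and that value survives the conversion to long long:

#include <stdio.h>

int main(void) {
    /* 0x80000000 is unsigned int here; unary minus wraps modulo 2^32. */
    printf("%u\n", -0x80000000);                 /* 2147483648 */
    printf("%lld\n", (long long)-0x80000000);    /* still 2147483648 after conversion */
    return 0;
}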
An alternate definition avoids this curious behavior:
#define INT32_MIN (-2147483647 - 1)
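A quick sketch of why this form behaves: 2147483647 fits in int, its negation is still int, and subtracting 1 stays within int range, so the whole expression is a genuine negative int. With it, the earlier comparison comes out false, as expected (again assuming 32-bit int):

#include <stdio.h>

#define INT32_MIN (-2147483647 - 1)   /* a negative int, value -2147483648 */

int main(void) {
    long long bal = 0;
    if (bal < INT32_MIN)
        puts("0 < INT32_MIN");        /* not printed */
    else
        puts("0 >= INT32_MIN");       /* printed, as expected */
    return 0;
}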
Let us walk in fantasy land for a while where int and unsigned are 48-bit. Then 0x80000000 fits in int and so has the type int. -0x80000000 is then a negative number, and the result of the printout is different.
[Back to the real world.]
Since 0x80000000 fits in some unsigned type before it fits in a signed type, as it is just larger than some_signed_MAX yet within some_unsigned_MAX, it is some unsigned type.
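A related sketch: on a platform where long is 64 bits, appending an L suffix plays the role of the wider int in the fantasy above, because 0x80000000L then fits in the signed long and its negation really is negative:

#include <stdio.h>

int main(void) {
    /* Assumes 64-bit long: 0x80000000L fits in long, so it stays signed. */
    printf("%ld\n", -0x80000000L);    /* -2147483648 */
    printf("%u\n",  -0x80000000);     /*  2147483648 with 32-bit int/unsigned */
    return 0;
}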
Comments:
True for -0x80000000, but false for -0x80000000L, -2147483648 and -2147483648L (gcc 4.1.2), so the question is: why is the int literal -0x80000000 different from the int literal -2147483648? – Concave
<limits.h> defines INT_MIN as (-2147483647 - 1), now you know why. – Cantor