The actual constants INT_MIN and INT_MAX may be a bit confusing. If we look in the C standard (C17 5.2.4.2.1), it says INT_MIN -32767 and INT_MAX +32767, meaning these are the bare minimum range that an implementation must support, for a 16 bit int. For simplicity, all examples below will assume a 16 bit int (32/64 bit two's complement systems of course use 2^31 - 1 and -2^31, so 2147483647/-2147483648).
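For reference, here is a minimal way to check what your own platform uses (assuming a hosted implementation with stdio):

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* On a typical 32 bit int platform this prints
           -2147483648 and 2147483647. */
        printf("INT_MIN = %d\n", INT_MIN);
        printf("INT_MAX = %d\n", INT_MAX);
        return 0;
    }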
These values were picked for historical reasons. In addition to the industry-standard two's complement, C also supports two exotic signedness formats: one's complement and signed magnitude. The latter two have a negative zero and/or a trap representation, giving a possible value range of -32767 to 32767.
But the vast majority of all computers use two's complement, where INT_MIN becomes -32768 on a 16 bit system. This is fine with the standard, which only requires INT_MIN to be -32767 or lower. And in two's complement, INT_MAX is still 32767.
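A quick way to see the asymmetry: on a two's complement system, the magnitude of INT_MIN is one larger than INT_MAX, so the two limits sum to -1. A minimal sketch:

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* Two's complement has one more negative value than
           positive values, so this prints -1. */
        printf("%d\n", INT_MIN + INT_MAX);
        return 0;
    }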
Therefore on a two's complement system, we cannot do int x = INT_MIN; x = -x;, since INT_MAX is 32767 and cannot hold the value 32768. We would create a signed integer overflow, which is undefined behavior - anything can happen, including strange and nonsensical code generation by the compiler.
In the upcoming C23 standard, support for the exotic signedness formats will finally be removed from C. And then INT_MIN will likely become -32768 in the standard as well.
Comments:

-INT_MIN is undefined behavior. See: #71081810 – Flange

"x = INT_MIN, -x is also -INT_MIN" is not strictly true because of potential UB as described by others. But what were you expecting other than 1 for the result of (x < y) == (-x > -y)? Maybe I need coffee.... – Ousel

Use -fwrapv to get the behavior you expected, or -fsanitize=undefined to get an explanation. – Anet

… -fwrapv). Semi-related: -x > -y on its own gets optimized differently by GCC for x86-64 vs. AArch64: The Output of the C Program '(-x > -y)' Differs on macOS and Linux - Why and How to Fix It – Letendre