If sizeof(int) == sizeof(long), then is INT_MIN == LONG_MIN && INT_MAX == LONG_MAX always true?

Are there any real existing cases demonstrating "not true"?

UPD. A similar question: Is there any hosted C implementations which have CHAR_BIT > 8?.

Dissertate answered 19/10, 2021 at 13:12 Comment(6)
One could imagine a computer and a mad compiler writer that used two's complement for int and sign-magnitude for long. But there are no real-world examples.Mccleary
Just curious, why do you want to know if you can rely on that?Potful
@Mccleary Apparently, ones-complement and sign-and-magnitude representations of signed integers are going to be abandoned in the next version of the C standard.Aikido
The New C Standard on page 594 says there were Cray implementations where short was a 32-bit type occupying 64 bits of space. In that case, it might have had sizeof(short) == sizeof(int) but SHRT_MAX < INT_MAX.Pythia
I think it's a safe assumption on any hosted implementation, but that's not a guarantee. C allows implementations to do weird things with type sizes and representations, and there's always some oddball, niche architecture that has to do things differently.Yelp
@NateEldredge wonder how many of those "cray implementations" are still in use...Disordered
It need not be true. C11 6.2.6.2p2:

  1. For signed integer types, the bits of the object representation shall be divided into three groups: value bits, padding bits, and the sign bit. There need not be any padding bits; signed char shall not have any padding bits. There shall be exactly one sign bit. Each bit that is a value bit shall have the same value as the same bit in the object representation of the corresponding unsigned type (if there are M value bits in the signed type and N in the unsigned type, then M <= N). If the sign bit is zero, it shall not affect the resulting value. If the sign bit is one, the value shall be modified in one of the following ways:

    • the corresponding value with sign bit 0 is negated (sign and magnitude);
    • the sign bit has the value -(2^M) (two's complement);
    • the sign bit has the value -(2^M - 1) (ones' complement).

    Which of these applies is implementation-defined, as is whether the value with sign bit 1 and all value bits zero (for the first two), or with sign bit and all value bits 1 (for ones' complement), is a trap representation or a normal value. In the case of sign and magnitude and ones' complement, if this representation is a normal value it is called a negative zero.


Now, the question becomes "is there any implementation that has a different number of padding bits" or, as mentioned in the comments, a different representation for different integer types - it is very hard to prove that no such implementation is currently in use. But I believe it is very unlikely that one would come across such a system in real life.

Coagulase answered 19/10, 2021 at 13:21 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.