By my reading of the C++ Standard, I have always understood that the sizes of the integral fundamental types in C++ were as follows:
sizeof(char) <= sizeof(short int) <= sizeof(int) <= sizeof(long int)
I deduced this from 3.9.1/2:
- There are four signed integer types: “signed char”, “short int”, “int”, and “long int.” In this list, each type provides at least as much storage as those preceding it in the list. Plain ints have the natural size suggested by the architecture of the execution environment
Further, the size of char is described by 3.9.1/1 as being:
- [...] large enough to store any member of the implementation’s basic character set.
1.7/1 defines this in more concrete terms:
- The fundamental storage unit in the C++ memory model is the byte. A byte is at least large enough to contain any member of the basic execution character set and is composed of a contiguous sequence of bits, the number of which is implementation-defined.
This leads me to the following conclusion:
1 == sizeof(char) <= sizeof(short int) <= sizeof(int) <= sizeof(long int)
where sizeof tells us how many bytes the type is. Furthermore, it is implementation-defined how many bits are in a byte. Most of us are probably used to dealing with 8-bit bytes, but the Standard only says there are n bits in a byte.
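To make that concrete, here is a minimal sketch (assuming only a C++11-or-later compiler for static_assert) that checks the ordering deduced above and prints whatever this particular implementation chose for CHAR_BIT and the type sizes; only the ordering is guaranteed, not the printed numbers:

    #include <climits>   // CHAR_BIT: implementation-defined bits per byte
    #include <iostream>

    // The ordering deduced from 3.9.1/2; sizeof(char) is 1 by definition.
    static_assert(sizeof(char) == 1, "a char is one byte by definition");
    static_assert(sizeof(char) <= sizeof(short int), "short provides at least as much storage as char");
    static_assert(sizeof(short int) <= sizeof(int), "int provides at least as much storage as short");
    static_assert(sizeof(int) <= sizeof(long int), "long provides at least as much storage as int");

    int main()
    {
        std::cout << "bits per byte (CHAR_BIT): " << CHAR_BIT << '\n'
                  << "sizeof(short int):        " << sizeof(short int) << '\n'
                  << "sizeof(int):              " << sizeof(int) << '\n'
                  << "sizeof(long int):         " << sizeof(long int) << '\n';
    }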
In this post, Alf P. Steinbach says:
long is guaranteed (at least) 32 bits.
This flies in the face of everything I understand the size of the fundamental types to be in C++ according to the Standard. Normally I would just discount this statement as a beginner being wrong, but since this was Alf I decided it was worth investigating further.
So, what say you? Is a long guaranteed by the standard to be at least 32 bits? If so, please be specific as to how this guarantee is made. I just don't see it.
- The C++ Standard specifically says that in order to know C++ you must know C (1.2/1). (1)
- The C++ Standard implicitly defines the minimum limit on the values a long can accommodate to be LONG_MIN through LONG_MAX. (2)

So no matter how big a long is, it has to be big enough to hold LONG_MIN to LONG_MAX.
But Alf and others are specific that a long must be at least 32 bits. This is what I'm trying to establish. The C++ Standard is explicit that the number of bits in a byte is not specified (it could be 4, 8, 16, 42). So how is the connection made from being able to accommodate the numbers LONG_MIN through LONG_MAX to being at least 32 bits?
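To pin down where the gap is, the question can be restated as two compile-time checks (again, just a sketch assuming a C++11 compiler for static_assert): the first follows directly from the <climits> limits quoted in footnote (2); the second is exactly the claim I cannot find a guarantee for.

    #include <climits>

    // Follows directly from the <climits> minimums quoted in footnote (2):
    static_assert(LONG_MIN <= -2147483647L && LONG_MAX >= +2147483647L,
                  "long must cover at least [-(2^31 - 1), +(2^31 - 1)]");

    // The claim under discussion: is this guaranteed on every conforming
    // implementation, whatever CHAR_BIT happens to be?
    static_assert(sizeof(long) * CHAR_BIT >= 32,
                  "at least 32 bits of storage for a long");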
(1) 1.2/1: The following referenced documents are indispensable for the application of this document. For dated references, only the edition cited applies. For undated references, the latest edition of the referenced document (including any amendments) applies.
- ISO/IEC 2382 (all parts), Information technology – Vocabulary
- ISO/IEC 9899:1999, Programming languages – C
- ISO/IEC 10646-1:2000, Information technology – Universal Multiple-Octet Coded Character Set (UCS) – Part 1: Architecture and Basic Multilingual Plane
(2) Defined in <climits> as:

    LONG_MIN  -2147483647  // -(2^31 - 1)
    LONG_MAX  +2147483647  // 2^31 - 1
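For reference, counting the values those two limits require a long to distinguish (this is the pigeonhole step the comments below lean on, not something the Standard states in these words):

    (2^31 - 1) - (-(2^31 - 1)) + 1  =  2 * (2^31 - 1) + 1  =  2^32 - 1

Since n bits can distinguish at most 2^n values, and 2^31 < 2^32 - 1, fewer than 32 bits cannot represent them all.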
- If you could represent (2^32)-1 distinct values in fewer than 32 bits, then there might not be 32 bits in a long. However, on any binary platform, as long as mathematics is valid, you will have 32 bits. – Henkel
- sizeof(long) * CHAR_BIT >= 32 – Profession
- I'm surprised at LONG_MAX having a value specified by the standard -- I thought the point of such a macro was to enable implementation definition. – Equiprobable
- In C, it is not strictly guaranteed that sizeof (int) <= sizeof (long int); int could have additional padding bits that could make it bigger than long int. I'm not sure whether this applies to C++. In any case, no sane implementation would do this. – Estuary
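As a footnote to the last two comments, a small sketch of the distinction they draw: sizeof measures storage bits (padding included), while std::numeric_limits<long>::digits counts value bits excluding the sign bit, which is what the LONG_MIN/LONG_MAX range guarantee actually constrains. The snippet only reports what the current implementation does; it asserts nothing.

    #include <climits>
    #include <iostream>
    #include <limits>

    int main()
    {
        // Storage bits: everything sizeof counts, padding bits included.
        std::cout << "storage bits in long: " << sizeof(long) * CHAR_BIT << '\n';

        // Value bits (sign bit excluded): what the range guarantee constrains.
        std::cout << "value bits in long:   " << std::numeric_limits<long>::digits << '\n';

        // On common implementations these differ only by the sign bit; an
        // implementation with padding bits in long would show a larger gap.
    }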