TL;DR - the exact size is up to the compiler.
The Standard requires that a type be able to represent a minimum range of values - for example, an `unsigned char` must be able to represent at least the range [0..255], an `int` must be able to represent at least the range [-32767..32767], etc. That minimum range defines a minimum number of bits - you need at least 16 bits to represent the range [-32767..32767] (some systems may use padding bits or parity bits that are part of the word, but are not used to represent the value).
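To make the distinction between object bits and value bits concrete, here is a minimal C sketch (the bit-counting loop and variable names are purely illustrative, not part of any API) that compares the size of `int` in bits with the number of bits actually used for its value:

```c
#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* Count the value bits of int by walking INT_MAX; any padding bits
       are the difference between sizeof(int)*CHAR_BIT and value bits + sign. */
    int value_bits = 0;
    for (unsigned int max = INT_MAX; max != 0; max >>= 1)
        ++value_bits;

    printf("CHAR_BIT               : %d\n", CHAR_BIT);
    printf("sizeof(int) * CHAR_BIT : %zu\n", sizeof(int) * (size_t)CHAR_BIT);
    printf("value bits in int      : %d (+1 sign bit)\n", value_bits);
    printf("INT_MIN..INT_MAX       : %d..%d\n", INT_MIN, INT_MAX);
    return 0;
}
```

On a typical implementation with no padding bits the first two counts match; on one with padding bits they would not.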
Other architectural considerations come into play - `int` is usually set to be the same size as the native word size. So on a 16-bit system, `int` would (usually) be 16 bits, while on a 32-bit system it would be 32 bits. So, ultimately, it comes down to the compiler.
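A quick way to see what your particular compiler and target chose is to print the sizes directly (the sizes mentioned in the comments below are typical for Linux x86 vs. x86_64, not guarantees):

```c
#include <stdio.h>

int main(void)
{
    /* These results depend on the compiler and target ABI, not the Standard:
       e.g. int is typically 4 bytes on both 32- and 64-bit Linux,
       while long is typically 4 bytes on x86 and 8 bytes on x86_64. */
    printf("char : %zu\n", sizeof(char));
    printf("short: %zu\n", sizeof(short));
    printf("int  : %zu\n", sizeof(int));
    printf("long : %zu\n", sizeof(long));
    return 0;
}
```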
However, it's possible to have one compiler on a 32-bit system use a 16-bit `int` while another uses a 32-bit `int`. That led to a wasted afternoon back in the mid-90s when I had written some code that assumed a 32-bit `int`; it worked fine under one compiler but broke the world under a different compiler on the same hardware.
So, lesson learned - never assume that a type can represent values outside of the minimum guaranteed by the Standard. Either check against the contents of `limits.h` and `float.h` to see if the type is big enough, or use one of the sized types from `stdint.h` (`int32_t`, `uint8_t`, etc.).
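As a sketch of both approaches (the 32-bit requirement, the `#error` message, and the variable names are just examples for illustration):

```c
#include <limits.h>
#include <stdint.h>
#include <stdio.h>

/* Option 1: refuse to build if a plain type is too small for what the code assumes. */
#if INT_MAX < 2147483647
#error "This code assumes int can hold 32-bit values"
#endif

int main(void)
{
    /* Option 2: ask <stdint.h> for an exact or minimum width instead of guessing. */
    int32_t       sample_count = 2000000000; /* exactly 32 bits, where it exists      */
    uint8_t       flags        = 0xFF;       /* exactly 8 bits, where it exists       */
    int_least32_t safe_count   = 2000000000; /* always exists, at least 32 bits       */

    printf("%ld %u %ld\n", (long)sample_count, (unsigned)flags, (long)safe_count);
    return 0;
}
```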
Comments:

…`long int`s when compiling Linux x86_64 binaries. On the other hand, they all use 4-byte `long int`s when compiling Linux x86 binaries. It's not about the form or host of the compiler, it's about the target. – Knickerbocker

…`long` as 32-bit (4 bytes). – Analogy

`sizeof(char) == 1`, `sizeof(short)*CHAR_BIT >= 16`, `sizeof(int)*CHAR_BIT >= 16`, `sizeof(long)*CHAR_BIT >= 32`. – Amorita
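The minimums quoted in that last comment can be turned into compile-time checks with C11 `_Static_assert`; a conforming implementation should accept all of these (purely illustrative):

```c
#include <limits.h>

/* Compile-time checks of the minimum guarantees the Standard makes. */
_Static_assert(sizeof(char) == 1,                "char is exactly one byte");
_Static_assert(CHAR_BIT >= 8,                    "a byte has at least 8 bits");
_Static_assert(sizeof(short) * CHAR_BIT >= 16,   "short has at least 16 bits");
_Static_assert(sizeof(int)   * CHAR_BIT >= 16,   "int has at least 16 bits");
_Static_assert(sizeof(long)  * CHAR_BIT >= 32,   "long has at least 32 bits");

int main(void) { return 0; }
```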