As others have mentioned, the C language standard defines the type of a character constant to be `int`. The historical reason for this is that C, and its predecessor B, were originally developed on DEC PDP minicomputers with various word sizes, which supported 8-bit ASCII but could perform arithmetic only on registers. Early versions of C defined `int` to be the native word size of the machine, and any value smaller than an `int` needed to be widened to `int` in order to be passed to or from a function, or used in a bitwise, logical or arithmetic expression, because that was how the underlying hardware worked.
That is also why the integer promotion rules still say that any data type smaller than an `int` is promoted to `int`. C implementations are also allowed to use one's-complement math instead of two's-complement for similar historical reasons. The reason character escapes default to octal, and octal constants need only a leading `0` while hex needs `\x` or `0x`, is that those early DEC minicomputers had word sizes divisible into three-bit chunks but not four-bit nibbles.
Automatic promotion to `int` causes nothing but trouble today. (How many programmers are aware that multiplying two `uint32_t` expressions together can be undefined behavior? Some implementations define `int` as 64 bits wide; the language requires that any type of lower rank than `int` be promoted to a signed `int`; the result of multiplying two `int` multiplicands has type `int`; the multiplication can overflow a signed 64-bit product; and that is undefined behavior.) But that's the reason C and C++ are stuck with it.