You can find this document online: Rationale for International Standard - Programming Languages - C (Revision 5.10, 2003). Chapter 6.3 (pp. 44-45) is about conversions:
Between the publication of K&R and the development of C89, a serious divergence had occurred among implementations in the evolution of integer promotion rules. Implementations fell into two major camps which may be characterized as unsigned preserving and value preserving.
The difference between these approaches centered on the treatment of `unsigned char` and `unsigned short` when widened by the integer promotions, but the decision had an impact on the typing of constants as well (see §6.4.4.1).
The unsigned preserving approach calls for promoting the two smaller unsigned types to `unsigned int`. This is a simple rule, and yields a type which is independent of execution environment.

The value preserving approach calls for promoting those types to `signed int` if that type can properly represent all the values of the original type, and otherwise for promoting those types to `unsigned int`.
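To see what the value preserving rule means in practice, here is a minimal sketch; it assumes a typical implementation with `CHAR_BIT == 8` and a 32-bit two's complement `int`, so every `unsigned char` value fits in an `int`:

```c
#include <stdio.h>

int main(void)
{
    unsigned char uc = 0xFF;

    /* Value preserving (C89 and later): uc promotes to int, because int can
       represent every unsigned char value, so ~uc is the int value -256.
       Unsigned preserving (old practice): uc would have promoted to
       unsigned int, and ~uc would have been the large value 0xFFFFFF00. */
    printf("%d\n", ~uc);       /* -256 with the value preserving rules */
    printf("%d\n", ~uc < 0);   /* 1: the promoted result is signed     */
    return 0;
}
```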
Thus, if the execution environment represents `short` as something smaller than `int`, `unsigned short` becomes `int`; otherwise it becomes `unsigned int`. Both schemes give the same answer in the vast majority of cases, and both give the same effective result in even more cases in implementations with two's complement arithmetic and quiet wraparound on signed overflow - that is, in most current implementations. In such implementations, differences between the two only appear when these two conditions are both true:
An expression involving an `unsigned char` or `unsigned short` produces an `int`-wide result in which the sign bit is set, that is, either a unary operation on such a type, or a binary operation in which the other operand is an `int` or “narrower” type.
The result of the preceding expression is used in a context in which its signedness is significant:
• `sizeof(int) < sizeof(long)` and it is in a context where it must be widened to a long type, or
• it is the left operand of the right-shift operator in an implementation where this shift is defined as arithmetic, or
• it is either operand of /, %, <, <=, >, or >=.
In such circumstances a genuine ambiguity of interpretation arises. The result must be dubbed questionably signed, since a case can be made for either the signed or unsigned interpretation. Exactly the same ambiguity arises whenever an `unsigned int` confronts a `signed int` across an operator, and the `signed int` has a negative value. Neither scheme does any better, or any worse, in resolving the ambiguity of this confrontation. Suddenly, the negative `signed int` becomes a very large `unsigned int`, which may be surprising, or it may be exactly what is desired by a knowledgeable programmer. Of course, all of these ambiguities can be avoided by a judicious use of casts.
One of the important outcomes of exploring this problem is the understanding that high-quality compilers might do well to look for such questionable code and offer (optional) diagnostics, and that conscientious instructors might do well to warn programmers of the problems of implicit type conversions.
The unsigned preserving rules greatly increase the number of situations where `unsigned int` confronts `signed int` to yield a questionably signed result, whereas the value preserving rules minimize such confrontations. Thus, the value preserving rules were considered to be safer for the novice, or unwary, programmer. After much discussion, the C89 Committee decided in favor of value preserving rules, despite the fact that the UNIX C compilers had evolved in the direction of unsigned preserving.
QUIET CHANGE IN C89
A program that depends upon unsigned preserving arithmetic conversions will behave differently, probably without complaint. This was considered the most serious semantic change made by the C89 Committee to a widespread current practice.
For reference, you can find more details about those conversions, updated to C11, in this answer by Lundin.
`char` is one byte, but one byte might be 32 bits (in a future system). – Ajax

`char` is precisely `CHAR_BIT` bits wide. I'll concede that on most modern architectures `CHAR_BIT == 8` is true, but you should not assume that to hold universally (now, in the past, or in the future). – Sherman