That's indeed an interesting corner case. It only occurs here because you use `uint16_t` for the unsigned type on an architecture where `int` is 32 bits.

Here is an extract from Clause 5, Expressions, from draft N4296 for C++14 (emphasis mine):
10 Many binary operators that expect operands of arithmetic or enumeration type cause conversions ...
This pattern is called the usual arithmetic conversions, which are defined as follows:
...
(10.5.3) — Otherwise, if the operand that has unsigned integer type has rank greater than or equal to the
rank of the type of the other operand, the operand with signed integer type shall be converted to
the type of the operand with unsigned integer type.
(10.5.4) — Otherwise, if the type of the operand with signed integer type can represent all of the values of
the type of the operand with unsigned integer type, the operand with unsigned integer type shall
be converted to the type of the operand with signed integer type.
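If you want to see the two cases side by side, here is a minimal sketch (the `static_assert`s assume a typical platform with 32-bit `int`):

```cpp
#include <cstdint>
#include <type_traits>

int main() {
    // (10.5.3): unsigned int has the same rank as int, so the int
    // operand is converted to unsigned int.
    static_assert(std::is_same<decltype(1 + 1u), unsigned int>::value,
                  "int + unsigned int -> unsigned int");

    // (10.5.4): int can represent every uint16_t value (assuming
    // 32-bit int), so the uint16_t operand is converted to int.
    std::uint16_t u = 0x8123U;
    static_assert(std::is_same<decltype(u & 0xFFFF), int>::value,
                  "uint16_t & int -> int");
    (void)u;
}
```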
You are in the (10.5.4) case: `uint16_t` is only 16 bits while `int` is 32, so `int` can represent all the values of `uint16_t`. The `uint16_t check = 0x8123U` operand is therefore converted to the signed `0x8123`, and the result of the bitwise `&` is still `0x8123`.
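A minimal sketch of that first step (the variable name is taken from your snippet, and it assumes the usual 32-bit `int`):

```cpp
#include <cstdint>
#include <cstdio>
#include <type_traits>

int main() {
    std::uint16_t check = 0x8123U;

    // check is promoted to int per (10.5.4), so the & is evaluated
    // on two ints and yields a plain int.
    auto masked = check & 0xFFFF;
    static_assert(std::is_same<decltype(masked), int>::value, "");

    std::printf("%x\n", static_cast<unsigned>(masked)); // prints 8123
}
```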
But the shift (bitwise, so it happens at the representation level) produces the intermediate unsigned `0x81230000`, which, converted to an `int`, gives a negative value (technically this conversion is implementation-defined, but it is what common implementations do):
5.8 Shift operators [expr.shift]
...
Otherwise, if E1 has a signed type and non-negative value, and E1 × 2^E2 is representable
in the corresponding unsigned type of the result type, then that value, converted to the result type, is the
resulting value;...
and
4.7 Integral conversions [conv.integral]
...
3 If the destination type is signed, the value is unchanged if it can be represented in the destination type;
otherwise, the value is implementation-defined.
(beware: this was true undefined behaviour in C++11...)
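The middle step in code (a sketch; the negative result relies on the implementation-defined conversion just quoted, which every common two's-complement implementation performs the same way):

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    std::uint16_t check = 0x8123U;

    // 0x8123 << 16 == 0x81230000 fits in unsigned int, so the shift
    // is not UB in C++14; converting that value to the result type
    // int is implementation-defined, and on two's-complement
    // platforms with 32-bit int it comes out negative.
    int shifted = (check & 0xFFFF) << 16;
    std::printf("%d\n", shifted < 0); // prints 1 on such platforms
}
```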
So you end up with a conversion of the signed int `0x81230000` to a `uint64_t`, which as expected gives `0xFFFFFFFF81230000`, because
4.7 Integral conversions [conv.integral]
...
2 If the destination type is unsigned, the resulting value is the least unsigned integer congruent to the source
integer (modulo 2^n where n is the number of bits used to represent the unsigned type).
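So the whole expression behaves like this (a sketch; the `0xFFFFFFFF81230000` output assumes the implementation-defined negative intermediate shown above):

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    std::uint16_t check = 0x8123U;

    // The negative 32-bit int is converted to uint64_t modulo 2^64,
    // which on two's-complement machines amounts to sign extension.
    std::uint64_t wide = static_cast<std::uint64_t>((check & 0xFFFF) << 16);

    std::printf("%llx\n", static_cast<unsigned long long>(wide));
    // prints ffffffff81230000 on common platforms
}
```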
TL/DR: There is no undefined behaviour here; what causes the result is the conversion of a signed 32-bit int to an unsigned 64-bit int. The only delicate part is the shift that overflows into the sign bit: all common implementations handle it the same way, and it is implementation-defined in the C++14 standard (it was undefined behaviour in C++11).
Of course, if you force the second operand to be unsigned, everything stays unsigned and you evidently get the expected `0x81230000` result.
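For instance (a sketch; putting the `U` suffix on the mask is one way to force the second operand to be unsigned):

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    std::uint16_t check = 0x8123U;

    // With 0xFFFFU the other operand is unsigned int, so (10.5.3)
    // applies: the promoted check is converted to unsigned int, the
    // shift stays unsigned, and the widening to uint64_t cannot
    // sign-extend.
    std::uint64_t wide = static_cast<std::uint64_t>((check & 0xFFFFU) << 16);

    std::printf("%llx\n", static_cast<unsigned long long>(wide));
    // prints 81230000
}
```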
[EDIT] As explained by MSalters, the result of the shift has only been implementation-defined since C++14; it was indeed undefined behaviour in C++11. The shift operator paragraph said:
...
Otherwise, if E1 has a signed type and non-negative value, and E1 × 2^E2 is representable
in the result type, then that is the resulting value; otherwise, the behavior is undefined.