A point not yet mentioned is that the standard explicitly allows for the possibility that integer representations may contain padding bits. Personally, I wish the standards committee would provide a nice easy way for a program to specify certain expected behaviors, and require that any compiler either honor such specifications or refuse compilation; code which started with an "integers must not have padding bits" specification would then be entitled to assume that to be the case.
As it is, it would be perfectly legitimate (albeit odd) for an implementation to store 35-bit `long` values as four 9-bit characters in big-endian format, but use the LSB of the first byte as a parity bit. Under such an implementation, storing a `1` into a `long` could cause the parity of the overall word to become odd, thus compelling the implementation to store a `1` into the parity bit.
To be sure, such behavior would be odd, but if architectures that use padding are sufficiently notable to justify explicit provisions in the standard, code which would break on such architectures can't really be considered truly "portable".
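For what it's worth, a program can at least assert the absence of padding bits in `long` at compile time. Here is a minimal sketch, assuming a C11 compiler (for `_Static_assert`) and relying on the fact that a signed type with no padding has exactly one sign bit and `CHAR_BIT * sizeof(long) - 1` value bits:

```c
#include <limits.h>

/* If long has no padding bits, LONG_MAX equals 2^(width-1) - 1, where
 * width is CHAR_BIT * sizeof(long); shifting LONG_MAX right by
 * (width - 2) then leaves exactly 1. Any padding bits make LONG_MAX
 * smaller, so the shift yields 0 instead and the assertion fails. */
_Static_assert((LONG_MAX >> (CHAR_BIT * sizeof(long) - 2)) == 1,
               "long appears to contain padding bits");
```

Under the hypothetical 36-bit implementation above, `LONG_MAX` would be 2^34 - 1 and the assertion would fail, which is exactly the refuse-to-compile behavior I wish the standard mandated.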
The code using `union` should work correctly on all architectures which can be simply described as "big-endian" or "little-endian" and do not use padding bits. It would be meaningless on some other architectures (and indeed the terms "big-endian" and "little-endian" could be meaningless too).
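For reference, the `union`-based check under discussion typically looks something like the following (a reconstruction of the usual idiom, not necessarily the exact code from the question):

```c
#include <stdio.h>

int main(void)
{
    /* Store 1 in the long member, then inspect the raw bytes through
     * the character-array member to see where that 1 landed. */
    union {
        long l;
        unsigned char bytes[sizeof(long)];
    } u = { 1 };

    if (u.bytes[0] == 1)
        puts("little-endian");
    else if (u.bytes[sizeof(long) - 1] == 1)
        puts("big-endian");
    else
        puts("neither label applies"); /* e.g. mixed-endian or padding */

    return 0;
}
```

On the hypothetical parity-bit implementation above, neither byte would be guaranteed to hold the expected value, which is precisely why such code can only be trusted on the "simple" architectures described.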