I understand that with C++20, sign-magnitude and one's complement are finally being phased out in favor of standardizing on two's complement (see http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p0907r3.html and http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p1236r1.html). I was wondering how much this lets us assume about the binary representation of integers in C++20. As I read it, a lot of thought has gone into the allowed ranges, but I don't see anything that really imposes requirements on the bit layout or on endianness. I would thus assume that endianness is still an issue, but what about bit layout?
According to the standard, is 0b00000001 == 1 always true for an int8_t? What about 0b11111111 == -1?
I understand that on nearly all practical systems the leftmost bit is the most significant, with significance decreasing toward the rightmost, least significant bit, and every system I've tested seems to use this representation. But does the standard say anything about this, and what guarantees do we get? Or, if we need to know the underlying representation, would it be safer to use a 256-element lookup table that explicitly maps each value a byte can hold to a specific bit pattern, rather than relying on this? I'd rather not take the performance hit of a lookup if I can use the bytes directly as they are, but I'd also like to make sure my code isn't making too many assumptions, since portability is important.
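To make that concrete, this is the kind of guarantee I'm hoping to rely on (a minimal compile-time sketch, assuming a conforming C++20 compiler; std::bit_cast lives in <bit>, and int8_t/uint8_t have no padding bits wherever they exist):

```cpp
#include <bit>
#include <cstdint>

// Value representation vs. object representation for an 8-bit integer,
// checked at compile time. std::bit_cast reinterprets the object
// representation without any arithmetic conversion.
static_assert(std::bit_cast<std::uint8_t>(std::int8_t{1})  == 0b00000001);
static_assert(std::bit_cast<std::uint8_t>(std::int8_t{-1}) == 0b11111111);
static_assert(std::bit_cast<std::int8_t>(std::uint8_t{0b11111111}) == -1);
```

If those assertions can never fire on a conforming implementation, then presumably the lookup table is unnecessary; if they can, I'd like to understand what the standard actually leaves open.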
0b11111111? Or, put another way, what other value could 0b11111111 represent, if not -1? That's the point of the standardization, as far as I can tell. – Arena

00000001 -> 11111110 -> 11111111. – Arena

I also want to be sure the standard puts 0 at 0b00000000 rather than at some offset. Those are my biggest concerns that I can think of. Really, I suppose I'm asking how the object representation of an int maps to the value representation, as far as the standard is concerned, in C++20. – Alard
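For reference, a small runnable sketch (assuming a conforming C++20 compiler; std::bit_cast and std::endian come from <bit>) illustrating both points: the flip-the-bits-and-add-one arithmetic from the comment above, and the fact that the object representation of a multi-byte int still depends on the platform's byte order:

```cpp
#include <bit>
#include <cstdint>
#include <cstdio>
#include <cstring>

int main() {
    // 00000001 -> 11111110 -> 11111111: flip the bits, add one, and the
    // resulting pattern reads back as -1 under two's complement.
    std::uint8_t flipped = static_cast<std::uint8_t>(~std::uint8_t{1}); // 0b11111110
    std::uint8_t pattern = static_cast<std::uint8_t>(flipped + 1);      // 0b11111111
    std::printf("%d\n", std::bit_cast<std::int8_t>(pattern));           // prints -1

    // Object representation of a multi-byte int: the value bits are two's
    // complement, but the order of the bytes in memory is platform-specific.
    int value = 1;
    unsigned char bytes[sizeof value];
    std::memcpy(bytes, &value, sizeof value);
    for (unsigned char b : bytes)
        std::printf("%02x ", static_cast<unsigned>(b)); // 01 00 00 00 on little-endian
    std::printf("(%s)\n", std::endian::native == std::endian::little
                              ? "little-endian" : "big-endian");
}
```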