In C and C++, the behavior of signed integer overflow or underflow is undefined.
In Java, and in C# in unchecked contexts, the behavior seems to be defined, at least to an extent.
From the Java specification, we have:
The integer operators do not indicate overflow or underflow in any way.
And:
The Java programming language uses two's-complement representation for integers [...]
From the C# specification, we have:
[...] In an unchecked context, overflows are ignored and any high-order bits that do not fit in the destination type are discarded.
Testing both, I got the expected wrap-around result. Judging from the wording of the specifications, I get the impression that in Java the result is portable (because the language requires a two's-complement representation), while in C# it may or may not be (since the spec doesn't seem to mandate a representation, only that the high-order bits are discarded).
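For reference, this is a minimal Java sketch of the kind of test I mean (the class name is arbitrary; the C# equivalent would do the same arithmetic inside an unchecked block):

```java
public class OverflowTest {
    public static void main(String[] args) {
        int max = Integer.MAX_VALUE;   // 2147483647
        int min = Integer.MIN_VALUE;   // -2147483648

        // Overflow: wraps around silently, with no exception and no flag.
        System.out.println(max + 1);   // prints -2147483648
        // Underflow: wraps the other way.
        System.out.println(min - 1);   // prints 2147483647
    }
}
```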
So, do both language specifications guarantee the same behavior on all platforms (just with different wording)? Or do the two simply happen to agree in my test case (on x86, under Sun's JRE and Microsoft's .NET), while they could theoretically differ on other architectures or implementations?