Looking at this C# code:
byte x = 1;
byte y = 2;
byte z = x + y; // ERROR: Cannot implicitly convert type 'int' to 'byte'
The result of any arithmetic performed on byte (or short) operands is an int, because the operands are implicitly promoted to int first. The solution is to explicitly cast the result back to a byte:
byte z = (byte)(x + y); // this works
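Incidentally, the compound assignment operators sidestep the error: C# defines x += y on a byte as x = (byte)(x + y), so this compiles with no explicit cast:

byte a = 1;
byte b = 2;
a += b; // fine: compiled as a = (byte)(a + b)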
What I am wondering is: why? Is it architectural? Philosophical?
We have:

int + int = int
long + long = long
float + float = float
double + double = double

So why not:

byte + byte = byte
short + short = short

?
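For what it's worth, byte + byte = byte would presumably just mean arithmetic modulo 256, which is exactly what the explicit cast already gives in the default unchecked context:

byte a = 200;
byte b = 100;
byte c = (byte)(a + b); // a + b is 300; the cast truncates to 300 % 256 = 44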
A bit of background: I am performing a long list of calculations on "small numbers" (i.e., < 8) and storing the intermediate results in a large array. Using a byte array (instead of an int array) is faster (because of cache hits). But the extensive byte casts spread through the code make it that much less readable.
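To make the readability cost concrete, here is a minimal sketch of the kind of loop I mean (the array size and the particular operation are made up for illustration):

byte[] results = new byte[1_000_000];
for (int i = 1; i < results.Length; i++)
{
    // every intermediate step needs its own cast back to byte
    results[i] = (byte)((results[i - 1] + 3) % 8);
}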
byte1 | byte2 is not at all treating them as numbers. This is treating them precisely as patterns of bits. I understand your point of view, but it just so happens that every single time I did any arithmetic on bytes in C#, I was actually treating them as bits, not numbers, and this behaviour is always in the way. – Ewan
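To illustrate the point: even a pure bit operation on two bytes runs into the same promotion, because byte | byte also yields int (the flag values here are only illustrative):

byte flags1 = 0b0001;
byte flags2 = 0b0100;
// byte combined = flags1 | flags2;      // same error: cannot convert int to byte
byte combined = (byte)(flags1 | flags2); // the cast is needed even for bit patterns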