I experimented today with how the compiler determines the types for numbers declared as var:
var a = 255; //Type = int. Value = byte.MaxValue. Why isn't this byte?
var b = 32767; //Type = int. Value = short.MaxValue. Why isn't this short?
var c = 2147483647; //Type = int. Value = int.MaxValue. int as expected.
var d = 2147483648; //Type = uint. Value = int.MaxValue + 1. uint is fine but could have been long?
var e = 4294967296; //Type = long. Value = uint.MaxValue + 1. Type is long as expected.
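
For reference, this is roughly how I checked the inferred types (a quick sketch using GetType(); the variable names match the lines above):

Console.WriteLine(a.GetType()); // System.Int32
Console.WriteLine(b.GetType()); // System.Int32
Console.WriteLine(d.GetType()); // System.UInt32
Console.WriteLine(e.GetType()); // System.Int64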
Why is int the default for any number between Int32.MinValue and Int32.MaxValue?
Wouldn't it be better to use the smallest data type that fits, to save memory? (I understand that memory is cheap these days, but saving it still wouldn't hurt, especially when it seems so easy to do.)
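
To put rough numbers on the memory argument, the built-in sizes are:

Console.WriteLine(sizeof(byte));  // 1 byte
Console.WriteLine(sizeof(short)); // 2 bytes
Console.WriteLine(sizeof(int));   // 4 bytes
Console.WriteLine(sizeof(long));  // 8 bytes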
If the compiler did use the smallest data type, and you had a variable holding 255 but knew that later on you would want to store a value like 300, you could simply declare it as short instead of using var.
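
In other words, something like this:

short n = 255; // declared short explicitly, since I know it will need to hold 300 later
n = 300;       // fits comfortably; short goes up to 32767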
Why is var d = 2147483648 implicitly uint and not long?
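
I know I can force the wider type with a suffix, so this is more about the default choice than a practical problem:

var d1 = 2147483648;  // inferred as uint
var d2 = 2147483648L; // the L suffix makes it long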
It seems as though the compiler will always try to use a 32-bit integer if it can: first signed, then unsigned, and only then long.
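
Extending the experiment one more step seems consistent with that ladder (I assume ulong is the final rung for an unsuffixed literal):

var f = 9223372036854775807; // long.MaxValue, still long
var g = 9223372036854775808; // long.MaxValue + 1, inferred as ulong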