These lines in C#
decimal a = 2m;
decimal b = 2.0m;
decimal c = 2.00000000m;
decimal d = 2.000000000000000000000000000m;
Console.WriteLine(a);
Console.WriteLine(b);
Console.WriteLine(c);
Console.WriteLine(d);
generate this output:
2
2.0
2.00000000
2.000000000000000000000000000
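For what it's worth, inspecting the values with decimal.GetBits suggests the difference is a stored scale factor (my understanding from the GetBits documentation is that the scale sits in bits 16-23 of the last element):
int[] bits = decimal.GetBits(b);     // four 32-bit words; the last holds the sign and scale
int scale = (bits[3] >> 16) & 0xFF;  // extract the scale (digits after the decimal point)
Console.WriteLine(scale);            // prints 1 for b; passing c instead prints 8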
So I can see that creating a decimal variable from a literal lets me control the precision: the trailing zeros in the literal are preserved in the value.
- Can I adjust the precision of a decimal variable without using a literal?
- How can I create b from a? How can I create b from c? (My experiments so far are below.)
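For reference, these are the two things I've stumbled on while experimenting (the variable names are just for illustration); both appear to print 2.0, but I don't know whether this behavior is guaranteed or idiomatic:
decimal bFromA = a * 1.0m;             // multiplying seems to add the scales (0 + 1 = 1)
decimal bFromC = decimal.Round(c, 1);  // rounding to 1 decimal place drops the extra zeros
Console.WriteLine(bFromA);             // 2.0
Console.WriteLine(bFromC);             // 2.0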