How is a .NET decimal represented in binary in memory?
We all know how floating-point numbers are stored, and thus the reasons for their inaccuracy, but I can't find any information about how decimal is stored, except the following:
- Apparently more accurate than floating-point numbers
- Takes 128 bits of memory
- A range of 2^96, plus a sign
- 28 (sometimes 29?) total significant digits in the number
Is there any way I can figure this out? The computer scientist in me demands the answer, and after an hour of attempted research I cannot find it. It seems like either there are a lot of wasted bits or I'm just picturing this wrong in my head. Can anyone shed some light on this, please?
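
For what it's worth, here's a minimal sketch of how I've been poking at the raw bits so far, using decimal.GetBits. My guess is that the four ints are the low/mid/high words of a 96-bit integer plus some flags word holding the sign and a scale, but that interpretation is exactly the part I can't confirm:

```csharp
using System;

class DecimalBits
{
    static void Main()
    {
        decimal d = 123.456m;

        // decimal.GetBits returns four 32-bit ints covering the full 128 bits.
        // My assumption: [0..2] are the 96-bit integer part (low to high),
        // [3] holds sign/scale flags -- but that's what I'm trying to verify.
        int[] parts = decimal.GetBits(d);

        for (int i = 0; i < parts.Length; i++)
            Console.WriteLine($"parts[{i}] = 0x{parts[i]:X8}");
    }
}
```

The first three words change with the digits as I'd expect, but I can't work out how the last word maps onto the "2^96 + sign" and 28/29-digit claims above.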