I've been searching around for hours and I can't find a simple way of accomplishing the following.
Value 1 = 0.00531
Value 2 = 0.051959
Value 3 = 0.0067123
I want to increment each value by one unit in its last decimal place. The tricky part is that each number must keep exactly the same number of decimal places it started with, and that count varies from value to value, hence my trouble.
Value 1 should be: 0.00532
Value 2 should be: 0.051960
Value 3 should be: 0.0067124
Does anyone know of a simple way of accomplishing the above in a function that can still handle any number of decimals?
Thanks.
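The question doesn't name a language, but here is one way this could be sketched in Python using the standard `decimal` module. The key point (echoed by the comments below) is that the input must arrive as a *string*: a binary `float` does not remember how many decimal places it was written with. `Decimal.as_tuple().exponent` gives the position of the last digit, so one unit in that place is `10**exponent`, and decimal arithmetic preserves trailing zeros. The function name `increment_last_place` is mine, not from the question.

```python
from decimal import Decimal

def increment_last_place(s: str) -> str:
    """Increment a decimal string by one unit in its last decimal place,
    preserving the original number of decimal digits."""
    d = Decimal(s)
    # The exponent of a Decimal is the position of its last digit,
    # e.g. Decimal("0.00531") has exponent -5, so the step is 1E-5.
    step = Decimal(1).scaleb(d.as_tuple().exponent)
    # Decimal addition keeps the finer scale, so trailing zeros survive:
    # 0.051959 + 0.000001 -> 0.051960 (not 0.05196).
    return str(d + step)

print(increment_last_place("0.00531"))    # 0.00532
print(increment_last_place("0.051959"))   # 0.051960
print(increment_last_place("0.0067123"))  # 0.0067124
```

If the values only exist as floats, the decimal count is already lost and any answer involves guessing, which is exactly the instability the commenters warn about.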
0.1 is, for instance, not a representable float. It is approximated with something 0.099999...ish, or 0.1000...ish. So it would be terribly unstable. – Argosy

0.00532: there might actually be more decimal digits than what you see, so you might get different results than you expect if you're not seeing the whole representation. – Matrix

.1 + .2 == 0.30000000000000004. What's the "true" last digit here? – Nucleo
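The pitfall the commenters describe is easy to reproduce: binary floats cannot represent most decimal fractions exactly, so the "last decimal place" of a `float` is not well defined. A minimal demonstration:

```python
# 0.1 and 0.2 are both stored as nearby binary fractions, and the
# accumulated rounding shows up in the sum's decimal expansion.
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False
```

This is why the increment should operate on the string representation (or a `Decimal`) rather than on a `float`.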