If x is a 15-decimal-digit integer, then converting x to a JavaScript `Number`, dividing by 100, and converting the result to a numeral with 15 significant decimal digits produces exactly x/100. A proof follows.
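For instance (the sample value is arbitrary; `toPrecision(15)` is one way to produce 15 significant decimal digits):

```js
const x = 123456789012345;           // a 15-decimal-digit integer
const q = x / 100;                   // a Number close to, but generally not exactly, x/100
console.log(q.toPrecision(15));      // "1234567890123.45" -- exactly x/100
```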
Notes:
- Converting the result of the division to a numeral with 15 significant decimal digits yields exactly x/100. The actual result of the division, while it is in the `Number` format, generally will not be exactly x/100. For example, 73/100 yields 0.729999999999999982236431605997495353221893310546875.
- Converting the result of the division to more than 15 significant decimal digits will also not generally yield x/100, as the extra digits may reveal the difference, as shown for 0.73 above. (And, of course, using fewer digits may be insufficient to represent x/100.) Thus, if it is desired to communicate exactly x/100 to another process, it must be done with exactly 15 significant decimal digits (or some other mitigation for error). Both effects are demonstrated in the snippet after these notes.
- The proof below applies to 15-digit integers x, not to other 15-significant-decimal-digit numbers (such as numerals with 15 decimal digits followed by one or more zeros or numerals starting with a decimal point followed by some zeros followed by 15 significant digits).
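Both effects can be seen with the 0.73 example from the first note:

```js
const q = 73 / 100;               // the nearest Number, not exactly 0.73
console.log(q.toPrecision(15));   // "0.730000000000000" -- exactly 73/100
console.log(q.toPrecision(17));   // "0.72999999999999998" -- extra digits expose the difference
```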
Preliminaries
JavaScript is an implementation of ECMAScript, specified in Ecma-262 and ISO/IEC 16262. In clause 6.1.6, Ecma-262 specifies that the IEEE-754 basic 64-bit binary floating-point format is used for ECMAScript’s `Number` type, except that only a single NaN is used. Clause 6.1.6 further describes the arithmetic used, which is essentially IEEE-754 arithmetic with rounding-to-nearest, ties-to-even.
The IEEE-754 basic 64-bit binary floating-point format uses a 53-bit significand.
The Unit of Least Precision (ULP) of a binary floating-point number is the value attributed to the position of the least significant bit in its significand. (Thus, the ULP scales with the exponent.) Measured in ULP, all normal 53-bit significands are in [2^52 ULP, 2^53 ULP).
For a 15-significant-digit decimal number, its ULP herein will be the value attributed to the 15th digit position, counting down from the leading significant digit.
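A few quick checks of these preliminaries in JavaScript (the expressions are standard; the sample values are only illustrations):

```js
// 53-bit significand: integers are exact up to 2^53; the tie at 2^53 + 1 rounds to the even neighbor.
console.log(2 ** 53 + 1 === 2 ** 53);        // true: 9007199254740993 is not representable
// The ULP scales with the exponent: it is 2^-52 (Number.EPSILON) at 1, and 1 at 2^52.
console.log(Number.EPSILON === 2 ** -52);    // true
console.log(2 ** 52 + 1 !== 2 ** 52);        // true: 1 is exactly one ULP at 2^52
console.log(2 ** 52 + 0.25 === 2 ** 52);     // true: a quarter of an ULP rounds away
```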
Lemma
First, we establish the well-known fact that converting a 15-significant-decimal-digit number to `Number` and back to 15 significant decimal digits yields the original number, provided the number is within the normal range of the `Number` format.
If x is a number of 15 significant decimal digits (not necessarily an integer) within the normal range of the floating-point format (2^−1022 ≤ |x| < 2^1024), then converting x to the nearest value representable in the floating-point format and then converting the result to 15 significant decimal digits produces exactly x, when both conversions are performed with rounding-to-nearest, ties-to-even. To see this, let y be the result of the first conversion. If y differs from x by less than ½ the ULP of x, then x is the 15-significant-digit number nearest y and hence must be the result of the second conversion.
In the first conversion, the result y is at most ½ ULP from x, due to the rounding rule. This is a relative accuracy of at most ½/2^52 (that is, the potential ½ ULP error divided by the least the significand can be, measured in ULP). Thus, y differs from x by at most one part in 2^53. In the worst case, the digits of x may be 999999999999999 = 10^15−1, so the error relative to the ULP of x would be (10^15−1)/2^53, which is about 0.111 times the ULP of x. Thus, y always differs from x by less than ½ of its ULP, so converting y back to 15 significant decimal digits yields x.
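As an illustration of the lemma (the value is an arbitrary 15-significant-digit sample):

```js
const s = "3.14159265358979";           // 15 significant decimal digits
const y = Number(s);                    // nearest Number; not exactly the decimal value
console.log(y.toPrecision(15) === s);   // true: 15 digits recover the original numeral
```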
Proof
If x is a 15-decimal-digit integer, it is exactly representable in the `Number` format, since the `Number` format has 53 bits in its significand and is therefore capable of exactly representing all integers up to 2^53, which is about 9.007e15 and thus more than 10^15.
Thus, converting x to `Number` yields exactly x with no error.
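In JavaScript terms, every 15-digit integer is a “safe” integer:

```js
console.log(Number.MAX_SAFE_INTEGER);                // 9007199254740991, i.e. 2^53 - 1
console.log(Number.isSafeInteger(999999999999999));  // true: the largest 15-digit integer is exact
```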
Then, by the rules for rounding arithmetic results, dividing x by 100 yields the representable number closest to x/100. Call this y. Now note that x/100 is a number representable with 15 significant decimal digits. (It could be written in scientific notation as x•10^−2 or in source code as the digits of x suffixed by `e-2`.) Note that converting x/100 to `Number` also yields y, since the conversion yields, just as the division does, the number exactly representable in the `Number` format that is closest to x/100. By the lemma, converting x/100 to `Number` and back to a 15-significant-decimal-digit numeral yields x/100, and so converting x to `Number`, then dividing by 100, then converting to 15 significant decimal digits also yields x/100.
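As an empirical spot check (not a substitute for the proof; the helper name is ad hoc), the floating-point round trip can be compared against the decimal digits of x/100 formed directly from the digits of x:

```js
// Compare the Number round trip against the exact decimal digits of x/100.
function roundTripMatches(x) {                        // x: a 15-decimal-digit integer
  const viaNumber = (x / 100).toPrecision(15);        // convert, divide, 15 significant digits
  const digits = String(x);                           // exact digits of x (x < 2^53)
  const exact = digits.slice(0, 13) + "." + digits.slice(13);  // x/100 written exactly
  return viaNumber === exact;
}

for (let i = 0; i < 100000; i++) {
  const x = 100000000000000 + Math.floor(Math.random() * 900000000000000);
  if (!roundTripMatches(x)) console.log("counterexample:", x);  // should never print, per the proof
}
```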
A comment from Ebullience observes the behavior this addresses:

> With `(27/100)*100 === 27` you get `true`, because the approximation is below epsilon. Doing `(1931/100)*100 === 1931` you get `false`. So I suppose that by dividing `1931` by 100 you are losing its initial meaning.
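Applied to the comment’s examples, multiplying back by 100 reintroduces rounding, while the 15-significant-digit conversion does not (a sketch; outputs as produced by a conforming engine):

```js
console.log((27 / 100) * 100 === 27);        // true: this particular rounding happens to land back on 27
console.log((1931 / 100) * 100 === 1931);    // false: the product rounds to a value just below 1931
console.log((1931 / 100).toPrecision(15));   // "19.3100000000000" -- the 15-digit conversion is exact
console.log(Number((1931 / 100).toPrecision(15)) === 19.31);  // true
```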