Handling money values: is it safe to divide a number by 100?

In our repository, in a module developed by another team, I discovered that a price is converted from cents to euros simply by dividing the number by 100.

The code is in JavaScript, so it uses the IEEE 754 standard (64-bit binary floating point).

I know that handling money values as floating-point numbers is not safe, but I was wondering whether this particular case is safe before sending the task back to the other team.

So far, I haven't found any case where dividing an integer by 100 gives an inaccurate result. Let's go further: 100 is just 2*2*5*5.

We know that dividing a number by 2 is safe, since it just decrements the binary exponent.

So we can say that if there exists a number that cannot be accurately divided by 5, then division by 100 is not accurate either.

I ran many tests and didn't find any such number, but I'm far from a theoretical proof of the thesis.
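
For example, a test along these lines is what I mean (a sketch, not the exact code I ran; the range and the toFixed-based check are arbitrary choices for illustration):

    // Check that n cents, divided by 100, still formats as the exact
    // euro amount with two decimals.
    for (let n = 0; n <= 1e7; n++) {
      const s = String(n).padStart(3, '0');
      const expected = s.slice(0, -2) + '.' + s.slice(-2); // e.g. 1931 -> "19.31"
      if ((n / 100).toFixed(2) !== expected) console.log('mismatch for', n);
    }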

So, is dividing a number by 100 safe in the IEEE 754 standard?

Ebullience asked 12/3, 2019 at 17:34 Comment(8)
No - IMHO this is not a question for Math Exchange; it's perfectly right here. – Fefeal
Does your application only ever deal in whole cents? Not fractional cents? (A lot of financial apps deal in fractional cents.) – Backward
@T.J.Crowder I know it from experience :) but I still hope for a comment from the downvoter, so I can maybe learn from my mistakes - if there are any. – Ebullience
At least related: #54904383 I could swear this is covered somewhere, but I'm not finding it (other than that question). – Backward
In what format are you going to send the number to the other team? As a JavaScript Number? As a decimal numeral with two digits after the decimal point? As a decimal numeral formatted with default formatting or some other formatting? How big can the numbers be (what is the maximum value you will support)? – Hysterectomize
Re “I didn't find any case where dividing an integer by 100 gets an inaccurate result”: Dividing by 100 necessarily gives a non-exact result in most cases, because no rational number whose denominator in simplest form includes a factor of 5 is representable in binary floating-point. For example, 1/100 produces 0.01000000000000000020816681711721685132943093776702880859375. Whether this difference matters depends on how the number will be handled. – Hysterectomize
@EricPostpischil You are right, but not all the approximations are the same. Take the number 27: (27/100)*100 === 27 gives true, because the approximation is below epsilon. But (1931/100)*100 === 1931 gives false. So I suppose that by dividing 1931 by 100 you are losing its initial meaning (see the console sketch after these comments). – Ebullience
Short answer: yes, it's safe. – Apiculture
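
The round-trip observation from the comments is easy to reproduce in a console (a quick sketch of the two cases cited above):

    // A second rounding happens in the multiplication, so multiplying the
    // quotient back by 100 does not always return the original integer.
    (27 / 100) * 100 === 27     // true  (per the comment above)
    (1931 / 100) * 100 === 1931 // false (per the comment above)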

A floating-point decimal number with 15 significant digits of precision converts to a 64-bit binary floating-point number (Number in JavaScript) and back to decimal without loss of precision. Although the binary number may not store the decimal number exactly, it has more bits of precision (a minimum of 17 significant decimal digits is required to represent a 53-bit mantissa) and converts with rounding back to the original decimal exactly. These extra binary digits of the mantissa are there precisely to keep those 15 significant decimal digits exact in all results of CPU arithmetic. See Number of Digits Required For Round-Trip Conversions for full details.

When you divide by 100, the binary result still has 53 bits of precision, with a possible error in the unit of least precision (the lowest bit of the mantissa), unless the result underflows to 0 (see What Every Computer Scientist Should Know About Floating-Point Arithmetic for full details). That binary number still converts, with rounding, to the correct exact decimal number within 15 significant decimal digits of precision.

In other words, if your decimal numbers have no more than 15 significant digits, then dividing them by 100 keeps that precision.

E.g. try 123456789012345 / 100 and 0.000123456789012345 / 100 in your browser console (both of these numbers have 15 significant decimal digits of precision) - these divisions return correct decimal numbers within 15 significant decimal digits:

123456789012345 / 100
1234567890123.45

0.000123456789012345 / 100
0.00000123456789012345
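
If you need the quotient as text at exactly that precision, Number.prototype.toPrecision makes the 15-digit round trip explicit (a quick sketch):

    // Render the quotients back to exactly 15 significant decimal digits.
    (123456789012345 / 100).toPrecision(15)
    // "1234567890123.45"

    (0.000123456789012345 / 100).toPrecision(15)
    // "0.00000123456789012345"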
Latarsha answered 12/3, 2019 at 21:56 Comment(0)

If x is a 15-decimal-digit integer, then converting x to a JavaScript Number, dividing by 100, and converting the result to a numeral with 15 significant decimal digits produces exactly x/100. A proof follows.

Notes:

  • Converting the result of the division to a number with 15 significant decimal digits yields exactly x/100. The actual result of the division, while it is in the Number format, generally will not be exactly x/100. For example, 73/100 yields 0.729999999999999982236431605997495353221893310546875.
  • Converting the result of the division to more than 15 significant decimal digits will also not generally yield x/100, as the extra digits may reveal the difference, as shown for .73 above and in the console sketch after these notes. (And, of course, using fewer digits may be insufficient to represent x/100.) Thus, if it is desired to communicate exactly x/100 to another process, it must be done with exactly 15 significant decimal digits (or some other mitigation for error).
  • The proof below applies to 15-digit integers x, not to other 15-significant-decimal-digit numbers (such as numerals with 15 decimal digits followed by one or more zeros or numerals starting with a decimal point followed by some zeros followed by 15 significant digits).
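
These notes can be checked directly in a console (a quick sketch; toFixed(51) happens to print the full stored value here because it has exactly 51 fractional decimal digits):

    // Full decimal expansion of the double nearest to 73/100:
    (73 / 100).toFixed(51)
    // "0.729999999999999982236431605997495353221893310546875"

    // Rounded to 15 significant digits, the difference disappears:
    (73 / 100).toPrecision(15)  // "0.730000000000000"

    // At 17 significant digits, the difference is visible again:
    (73 / 100).toPrecision(17)  // "0.72999999999999998"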

Preliminaries

JavaScript is an implementation of ECMAScript, specified in Ecma-262 and ISO/IEC 16262. In clause 6.1.6, Ecma-262 specifies that the IEEE-754 basic 64-bit binary floating-point format is used for ECMAScript’s Number type, except that only a single NaN is used. Clause 6.1.6 further describes the arithmetic used, which is essentially IEEE-754 arithmetic with rounding-to-nearest, ties-to-even.

The IEEE-754 basic 64-bit binary floating-point format uses a 53-bit significand.

The Unit of Least Precision (ULP) of a binary floating-point number is the value attributed to the position of the least significant bit in its significand. (Thus, the ULP scales with the exponent.) Measured in ULP, all normal 53-bit significands are in [2^52 ULP, 2^53 ULP).

For a 15-significant-digit decimal number, its ULP herein will be the value attributed to the 15th digit position, counting down from the leading significant digit.

Lemma

First, we establish the well-known fact that converting a 15-significant-decimal-digit number to Number and back to 15 significant decimal digits yields the original number, provided the number is within the normal range of the Number format.

If x is a number of 15 significant decimal digits (not necessarily an integer) within the normal range of the floating-point format (2^−1022 ≤ |x| < 2^1024), then converting x to the nearest value representable in the floating-point format and then converting the result to 15 significant decimal digits produces exactly x, when both conversions are performed with rounding-to-nearest, ties-to-even. To see this, let y be the result of the first conversion. If y differs from x by less than ½ of the ULP of x, then x is the 15-significant-digit number nearest y and hence must be the result of the second conversion.

In the first conversion, the result y is at most ½ ULP from x, due to the rounding rule. This is a relative accuracy of at most ½/2^52 (that is, the potential ½ ULP error divided by the least the significand can be, measured in ULP). Thus, y differs from x by at most one part in 2^53. In the worst case, the digits of x may be 999999999999999 = 10^15−1, so the error relative to the ULP of x would be (10^15−1)/2^53, which is about .111 times the ULP of x. Thus, y always differs from x by less than ½ of its ULP, so converting y back to 15 significant decimal digits yields x.
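
The worst-case figure is easy to sanity-check in a console (a quick sketch):

    // Worst-case conversion error, measured in ULPs of a 15-digit
    // decimal number: (10^15 - 1) / 2^53, which is well below 1/2.
    (1e15 - 1) / 2 ** 53  // about 0.111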

Proof

If x is a 15-decimal-digit integer, it is exactly representable in the Number format, since the Number format has 53 bits in its significand and is therefore capable of exactly representing every integer up to 2^53, which is about 9.007e15 and thus more than 10^15.

Thus, converting x to Number yields exactly x with no error.
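
JavaScript exposes this bound directly (a quick console sketch):

    Number.isSafeInteger(999999999999999) // true - every 15-digit integer is exact
    Number.MAX_SAFE_INTEGER               // 9007199254740991, i.e. 2^53 - 1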

Then, by the rules for rounding arithmetic results, dividing x by 100 yields the representable number closest to x/100. Call this y. Now note that x/100 is a number representable with 15 significant decimal digits. (It could be written in scientific notation as x·10^−2 or in source code as the digits of x suffixed by e-2.) Note that converting x/100 to Number also yields y, since the conversion yields, just as the division does, the number exactly representable in the Number format that is closest to x/100. By the lemma, the result of converting x/100 to Number and back to a 15-significant-decimal-digit number yields x/100, and so the result of converting x to Number, then dividing by 100, then converting to 15 significant decimal digits also yields x/100.
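
Though redundant after the proof, a randomized spot check is easy to run (a sketch; the sample count is arbitrary):

    // For random 15-digit integers x, dividing by 100 and formatting to
    // 15 significant decimal digits recovers x/100 exactly.
    for (let i = 0; i < 1e6; i++) {
      const x = Math.floor(1e14 + Math.random() * 9e14); // exactly 15 digits
      const s = String(x);
      const expected = s.slice(0, 13) + '.' + s.slice(13); // x/100 as text
      if ((x / 100).toPrecision(15) !== expected) console.log('counterexample:', x);
    }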

Hysterectomize answered 13/3, 2019 at 20:9 Comment(0)
