Why does `0.4/2` equal `0.2` while `0.6/3` equals `0.19999999999999998` in Python? [duplicate]
I know these are floating-point divisions. But why do these two expressions behave differently?

I did some more investigation, and the results confuse me even more:

>>> 0.9/3
0.3

>>> 1.2/3
0.39999999999999997

>>> 1.5/3
0.5

What's the logic that decides whether the result is printed with one decimal place or more?

PS: I used Python 3.4 for the experiments above.

Ygerne answered 9/3, 2015 at 4:28 Comment(3)
None of these results are integers. – Deanery
You might want to see this StackOverflow question: "Is floating point math broken?" - it covers some very similar floating-point issues. – Qr
The What's new in Python 3.1 docs (scroll to the end of the linked section, just before "New, Improved and Deprecated Modules") are a useful explanation for why/when Python 2.7/3.1+ have much shorter float reprs for some values. Straight from the horse's mouth, so to speak. – Whereas

Because the exact values of the floating point results are slightly different.

>>> '%.56f' % 0.4
'0.40000000000000002220446049250313080847263336181640625000'
>>> '%.56f' % (0.4/2)
'0.20000000000000001110223024625156540423631668090820312500'
>>> '%.56f' % 0.6
'0.59999999999999997779553950749686919152736663818359375000'
>>> '%.56f' % (0.6/3)
'0.19999999999999998334665463062265189364552497863769531250'
>>> '%.56f' % 0.2
'0.20000000000000001110223024625156540423631668090820312500'
>>> (0.2 - 0.6/3) == 2.0**-55
True

As you can see, the result that is printed as "0.2" is indeed slightly closer to 0.2. I added the bit at the end to show you what the exact value of the difference between these two numbers is. (In case you're curious, the above representations are the exact values - adding any number of digits beyond this just adds more zeroes).
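A quicker way to inspect these exact values (a small sketch using the standard `decimal` module, which converts a binary float to its exact decimal value with no rounding):

```python
from decimal import Decimal

# Decimal(float) shows the exact binary64 value behind each printed result.
print(Decimal(0.4 / 2))  # the float printed as "0.2"
print(Decimal(0.6 / 3))  # the float printed as "0.19999999999999998"

# The two results are adjacent floats: they differ by exactly one unit
# in the last place, which for values in [2**-3, 2**-2) is 2**-55.
assert (0.4 / 2) - (0.6 / 3) == 2.0 ** -55
```

This confirms the `'%.56f'` output above without having to count digits by hand.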

Halsted answered 9/3, 2015 at 4:37 Comment(0)

Check out the Python documentation on floating-point arithmetic.

Most relevant here:

Interestingly, there are many different decimal numbers that share the same nearest approximate binary fraction. For example, the numbers 0.1 and 0.10000000000000001 and 0.1000000000000000055511151231257827021181583404541015625 are all approximated by 3602879701896397 / 2 ** 55. Since all of these decimal values share the same approximation, any one of them could be displayed while still preserving the invariant eval(repr(x)) == x.

Historically, the Python prompt and built-in repr() function would choose the one with 17 significant digits, 0.10000000000000001. Starting with Python 3.1, Python (on most systems) is now able to choose the shortest of these and simply display 0.1.
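You can verify the quoted behavior directly in a Python 3.1+ REPL: several decimal literals round to the same binary value, and repr() shows the shortest string that round-trips.

```python
# All three literals round to the same binary64 value...
a = 0.1
b = 0.10000000000000001
c = 0.1000000000000000055511151231257827021181583404541015625
assert a == b == c

# ...so repr() can display the shortest one while still
# preserving the round-trip invariant eval(repr(x)) == x.
assert repr(a) == '0.1'
assert eval(repr(a)) == a
```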

Ecbatana answered 9/3, 2015 at 4:30 Comment(0)

Floating point numbers are implemented as binary64 according to IEEE 754 (as in virtually all programming languages).

This standard gives 52 bits to the "significand / fraction" (approximately 16 decimal digits of accuracy), 11 bits to the exponent and 1 bit to the sign (plus or minus):

[figure: IEEE 754 binary64 bit layout (1 sign bit, 11 exponent bits, 52 fraction bits)]

In particular, a number like 0.4 cannot be represented exactly as

(1 + f) * 2**(exponent)

for any fraction f with a finite base-2 expansion and an exponent in the range representable with 11 bits (-1022 through 1023).

Viewing 0.4 in hex for example:

>>> (0.4).hex()
'0x1.999999999999ap-2'

we see the best approximation in our set of numbers is

+ 2**(-2) * (1 + 0x999999999999a / float(2**52))

Trying to represent this in base 2, we have

2**(-2) * (1 + 0.6)

but 0.6 = 9/15 = 1001_2/1111_2 written in base 2 has a repeating string of four binary digits

0.100110011001100110011...

so can never be represented using a finite number of binary digits.
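The repeating expansion is easy to generate yourself by repeated doubling (a sketch; `binary_digits` is just an illustrative helper, using exact `Fraction` arithmetic to avoid rounding):

```python
from fractions import Fraction

def binary_digits(x, n):
    """First n binary digits after the point of 0 <= x < 1, by repeated doubling."""
    digits = []
    for _ in range(n):
        x *= 2            # shift the binary point one place right
        bit = int(x >= 1)  # the digit that crossed the point
        digits.append(bit)
        if bit:
            x -= 1
    return ''.join(map(str, digits))

print(binary_digits(Fraction(3, 5), 16))  # 0.6 -> '1001100110011001'
```

After four doublings the remainder returns to 3/5, which is why the block "1001" repeats forever.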


A bit more in depth

So we can "unpack" 0.4

>>> import struct
>>> # 'd' for double, '>' for Big-endian (left-to-right bits)
>>> float_bytes = struct.pack('>d', 0.4)

as 8 bytes (1 byte is 8 bits)

>>> float_bytes
b'?\xd9\x99\x99\x99\x99\x99\x9a'

or as 16 hex digits (1 hex digit is 4 bits, since 2**4 == 16)

>>> ''.join(['%02x' % byte for byte in float_bytes])
'3fd999999999999a'

or as all 64 bits in their glory

>>> float_bits = ''.join([format(byte, '08b') for byte in float_bytes])
>>> float_bits
'0011111111011001100110011001100110011001100110011001100110011010'

From there, the first bit is the sign bit:

>>> sign = (-1)**int(float_bits[0], 2)
>>> sign
1

The next 11 bits are the exponent (but shifted by 1023, a convention of binary64):

>>> exponent = int(float_bits[1:1 + 11], 2) - 1023
>>> exponent
-2

The final 52 bits are the fractional part

>>> fraction = int(float_bits[1 + 11:1 + 11 + 52], 2)
>>> fraction
2702159776422298
>>> hex(fraction)
'0x999999999999a'

Putting it all together

>>> sign * 2**exponent * (1 + fraction / 2**52)
0.4
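The whole unpacking dance above can be wrapped in one small helper (a sketch; `unpack_binary64` is not a standard name, and it only handles finite, normal floats):

```python
import struct

def unpack_binary64(x):
    """Return the (sign, exponent, fraction) fields of a finite, normal float."""
    # Reinterpret the 8 bytes of the double as one 64-bit integer.
    bits = int.from_bytes(struct.pack('>d', x), 'big')
    sign = (-1) ** (bits >> 63)                 # top bit
    exponent = ((bits >> 52) & 0x7FF) - 1023    # next 11 bits, minus the bias
    fraction = bits & ((1 << 52) - 1)           # low 52 bits
    return sign, exponent, fraction

sign, exponent, fraction = unpack_binary64(0.4)
assert (sign, exponent, hex(fraction)) == (1, -2, '0x999999999999a')
# Reconstructing recovers the original float exactly.
assert sign * 2**exponent * (1 + fraction / 2**52) == 0.4
```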
Frustration answered 9/3, 2015 at 5:4 Comment(1)
surely you mean binary64 – Halsted

© 2022 - 2024 — McMap. All rights reserved.