Why is a**2 != a * a for some floats?

$ python --version
Python 2.7.15

$ type test.py
import random

while True:
    a = random.uniform(0, 1)
    b = a ** 2
    c = a * a
    if b != c:
        print "a = {}".format(a)
        print "a ** 2 = {}".format(b)
        print "a * a = {}".format(c)
        break

$ python test.py
a = 0.145376687586
a ** 2 = 0.0211343812936
a * a = 0.0211343812936

I was only able to reproduce this on a Windows build of Python - to be precise: Python 2.7.15 (v2.7.15:ca079a3ea3, Apr 30 2018, 16:30:26) [MSC v.1500 64 bit (AMD64)] on win32. On my Arch Linux installation of Python (Python 2.7.15 (default, May 1 2018, 20:16:04) [GCC 7.3.1 20180406] on linux2) the loop does not seem to terminate, indicating that the a ** 2 == a * a invariant holds there.

What is going on here? I know that IEEE floats come with a plethora of misconceptions and idiosyncrasies (this, for example, does not answer my question), but I fail to see what part of the specification or what kind of implementation of ** could possibly allow for this.

To address the duplicate flagging: This is most likely not directly an IEEE floating point math problem and more of an implementation issue of the ** operator. Therefore, this is not a duplicate of questions that only ask about floating point issues such as precision or associativity.

Kei answered 1/6, 2018 at 19:57 Comment(7)
Use repr to show more digits so we can see exactly what value it's failing on and how.Coquille
… or, since you're already using format, use {:.30} or something.Fusty
This is not a duplicate of "Is floating-point math broken". The OP is asking why a ** 2 is not implemented as a * a.Stylite
Meanwhile, where in the spec do you see that x ** n should be equivalent to repeated multiplication if n happens to be an integer, much less implemented that way?Fusty
@EricPostpischil: Exponentiation isn't an operation IEEE 754 requires to be correctly rounded. Relying on a**2 == a*a is just going to lead to pain.Coquille
@user2357112: Regardless of whether IEEE 754 requires exponentiation be correctly rounded (it does recommend it), the nature of floating-point is not the cause of the problem OP observes. Floating-point operations are perfectly capable of supporting the identity a**2 == a*a. Therefore, if it fails to hold in some implementation, there is a cause other than the nature of floating-point. And I have not suggested OP or anybody else rely on this identity. I have merely stated the cause of the failure is not due to the nature of floating-point arithmetic.Freezedry
Welcome to floating point, where nothing is precise. Seriously. This sort of stuff comes up all the time. We don't need a new question on every application of it. It should surprise us more when anything in floating works precisely the way math suggests.Wyndham
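
Following the commenters' suggestion, here is a small variant of the script above (a sketch, not the OP's exact code) that prints the values via repr and float.hex so the actual last-bit difference is visible once a mismatch is found:

import random

# Hunt for a value where ** and a plain multiply disagree.  Note that on
# platforms where the identity holds (as the OP saw on Linux) this loop
# will not terminate.
while True:
    a = random.uniform(0, 1)
    b = a ** 2
    c = a * a
    if b != c:
        print "a    = %r (%s)" % (a, a.hex())
        print "a**2 = %r (%s)" % (b, b.hex())
        print "a*a  = %r (%s)" % (c, c.hex())
        break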

Python relies on the underlying platform for its floating-point arithmetic. I hypothesize that Python’s ** operator uses the C library’s pow implementation (confirmed by user2357112 with a reference to the Python 2.7.15 source code).

Generally, pow is implemented, at least in part, using (approximations of) logarithms and exponentials. This is necessary because pow supports non-integer arguments. (Of course, this general approach does not preclude specializations for subsets of its domain.)
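
To see why a log/exp-based routine can miss the correctly rounded result, here is a minimal sketch (explicitly not CPython’s or any libm’s actual implementation) of a “textbook” pow built from exp and log; the intermediate rounding of log(a) alone can push the final result onto a neighbouring float:

import math

def naive_pow(x, y):
    # Textbook identity x**y == exp(y*log(x)): log, the multiply, and
    # exp each round once, so the result can differ from a single
    # correctly rounded multiplication by an ulp or so.
    return math.exp(y * math.log(x))

a = 0.145376687586           # the value printed in the question
print repr(a * a)            # one correctly rounded multiply
print repr(naive_pow(a, 2))  # often, though not always, differs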

Microsoft’s pow implementation is notoriously poor. Hence, for pow(a, 2), it may return a result that is not equal to a*a.
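
To check your own platform, here is a quick sketch (the sample count is arbitrary) that counts how often the pow-based paths disagree with a single multiply; the outcome depends entirely on the C runtime, so it mirrors the OP's observation of mismatches on the MSVC build and none on Linux:

import math
import random

# a ** 2 goes through float.__pow__, math.pow calls the C library pow
# directly (after its own conversions); both are compared against one
# multiplication.
mismatches = 0
for _ in xrange(10 ** 6):
    a = random.uniform(0, 1)
    if a ** 2 != a * a or math.pow(a, 2) != a * a:
        mismatches += 1
print "mismatches out of 10**6:", mismatches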

Freezedry answered 1/6, 2018 at 20:8 Comment(4)
You can see the use of C pow in float.__pow__ here.Coquille
That float_pow implementation is of course only for CPython—but if you look into Jython, IronPython, or PyPy, you'll see that they use similar functions in Java, .NET, and RPython.Fusty
I have a hunch that small integer powers are special-cased, at least on Linux: of a*a, a**2, and math.exp(2*math.log(a)), the first two seem to be equal all the time, while the third often deviates.Wilkison
@PaulPanzer: That's not necessarily an indication that integer powers are special cased. On a platform that had perfectly correctly-rounded power, exp and log, you'd expect to see exactly the same results: math.exp(2 * math.log(a)) will often deviate thanks to the extra rounding error of the intermediate math.log(a) value. That error is then potentially magnified by the exp call (depending on the value of a). A good implementation of pow should be doing something more sophisticated than exp(y*log(x)).Phonsa

a ** 2 uses a floating point power function (like the one you can find in the standard C math lib), which can raise any number to any power.

a * a is just a single multiplication; it's more suitable for this case and not liable to the precision errors (even more so for integers) that a ** 2 can introduce.

For floating point a, if you want to raise it to a power of, say, 5, by writing

a * a * a * a * a

you'd be better off with a**5, because repeated multiplication is liable to accumulate floating point rounding error and, as the exponent grows, it also becomes much slower.

a ** b is more interesting when b is large, because, among other things, it's more efficient. But the result may differ slightly, because it goes through a floating point power algorithm.
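
As a rough illustration of that trade-off (the base and exponent here are arbitrary), repeated multiplication rounds at every step, while ** makes a single call to the platform's pow, so the two results can drift apart slightly:

a = 1.1
by_mult = 1.0
for _ in range(20):
    by_mult *= a     # 20 separate roundings accumulate
by_pow = a ** 20     # one call to the platform's pow
print repr(by_mult)
print repr(by_pow)   # usually agrees to many digits, but may differ in the last bits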

Also answered 1/6, 2018 at 20:10 Comment(5)
float.__pow__ does not use the math.pow function, at least not in CPython. They do, however, both ultimately use the same pow function from the C stdlib (after a bunch of pre-checks and conversions that are not identical for the two), which is probably what you should be saying here.Fusty
yep, slightly inaccurate.Jenine
Even a * a can have precision errors if the original uses enough digits. 2.5 * 2.5 would be fine but 2.123456789 * 2.123456789 will have some precision loss.Triumphant
@LorenPechtel 2.5*2.5 only works because it's exactly representable in binary with only a few bits. Try 2.2*2.2 (4.840000000000001).Wyndham
@Wyndham Yes. I deliberately picked a value with a short representation in floating point so there would be no precision loss.Triumphant
