Will (int)pow(n,m) be wrong for some positive integers n,m?
Assuming n and m are positive integers, and n^m is within the range of an int, will (int)pow(n, m) ever give a wrong answer?

I have tried many n for m=2 and have not gotten any wrong answers so far.

Gaulin answered 2/9, 2015 at 12:33 Comment(5)
if m is negative you can easily get the wrong answer (e.g. 2^-2)Birk
@Birk Sorry, I meant positive integers.Gaulin
@Gaulin Mention that in question.Juristic
If int is a 64-bit integer, double may not be able to accurately represent INT_MAX. In this case (int)pow(INT_MAX, 1) != INT_MAX.Buffon
Yes, it will. E.g. when n = 2 and m > the number of significant digits in the mantissa.Waldo

The C standard does not impose any requirements on the accuracy of floating point arithmetic. The accuracy is implementation-defined which means that implementations are required to document it. However, implementations are left with a significant "out": (§5.2.4.2.2 paragraph 6, emphasis added.)

The accuracy of the floating-point operations (+, -, *, /) and of the library functions in <math.h> and <complex.h> that return floating-point results is implementation-defined, as is the accuracy of the conversion between floating-point internal representations and string representations performed by the library functions in <stdio.h>, <stdlib.h>, and <wchar.h>. The implementation may state that the accuracy is unknown.

And, indeed, gcc takes advantage of this by specifying that the accuracy is unknown. Nonetheless, the accuracy of glibc computations is pretty good, even if not guaranteed.

The MS libc implementation is known to occasionally produce an error of 1ULP for the pow function with integer arguments, resulting in an incorrect value if the result of the pow operation is simply truncated to an int. (I couldn't find any specification in the Visual Studio documentation about floating point accuracy, but I believe that the list of SO questions below provides evidence of my assertion.)

On x86 architectures, most implementations make some attempt to implement IEEE 754, since the native floating point representation conforms. However, until the 2008 revision, IEEE-754 only required correctly-rounded results from +, -, *, / and sqrt. Since the revision, it recommends that a number of other functions return correctly-rounded results, but all of these recommendations are optional, and few math libraries implement all of them.

If you really want to use pow to compute integer powers of integers, it is advisable (and easy) to use lround(pow(n, m)) instead of (long)(pow(n, m)), which will round the result to the nearest integer, rather than optimistically relying on the error being positive. That should give the correct integer value for results up to 2^52 (with IEEE-754 doubles) if pow is within 1ULP. Between 2^52 and 2^53, a 1ULP error would be 0.5, which will sometimes round to the wrong integer. Beyond 2^53 not all integers are representable as doubles.

SO is actually full of questions resulting from this particular problem, and undoubtedly many more exist.

Dozer answered 2/9, 2015 at 15:15 Comment(1)
Can you look at question #42165050? It is also about pow, but doesn't have a good answer as of now.Acceleration

Some implementations evaluate pow(x,y) as exp(y*log(x)) in all cases, while others use square-and-multiply powering for integral exponents.

glibc for example has 'C' implementations, as well as platform-specific implementations with lookup tables, polynomial approximations, etc. Many Sun/BSD derivatives appear to handle integral exponents as a special case.

More importantly, IEEE-754 doesn't require pow to be correctly rounded, nor does it specify the required accuracy of the result. glibc documents its maximum ulp errors, though this page might not be up to date.

In summary, don't assume an exact integral result, even when the true value of n^m has an exact floating-point representation.

Oloroso answered 2/9, 2015 at 14:27 Comment(11)
I think the answer is more complicated than this. Even if pow is implemented via log and exp, those operations are required to "round correctly" by IEEE rules, see en.wikipedia.org/wiki/…. The question is, if those operations are rounded correctly, will the answer ever be off? Of course you can't assume every C implementation abides by IEEE rules, but most modern ones will.Zenithal
@MarkRansom: As that link points out, those operations are recommended, and conformance is optional. Anyway, "most modern C implementations" conform to IEEE-754-1985, as per Annex F of the current C standard; there is no reason to assume they will conform to optional features in IEEE-754-2008.Dozer
pow(x, y) is equivalent to exp(y*log(x)), not exp(x*log(y))Kalbli
Can you name a concrete implementation that is widespread and does so?Indulgent
@FUZxxl - it's right there, in the second link I provided.Oloroso
@BrettHale But the documentation claims “Given two IEEE double machine numbers y, x it computes the correctly rounded (to nearest) value of X^y.” So it does not yield wrong results for integer bases and exponents.Indulgent
@FUZxxl - I haven't tested the results. But my understanding is that 'correct rounding to nearest' means RN{x}, x within 1/2ulp of the exact result here.Oloroso
@BrettHale But if the integer is representable exactly as a floating point number (all 32 bit integers are representable as IEEE-754 double numbers), then the nearest number is the integer itself, rendering OP's assumption correct.Indulgent
Again, you have yet to show me a single case on a common implementation that yields a wrong result within the constraints specified by OP.Indulgent
@FUZxxl - The burden of proof isn't really on me. I'm going with IEEE-754 requirements. Your answer makes the claim of exact results, with no reference to any standard or implementation, let alone all implementations. The top answer has already pointed out that this is not the case with the MS implementation - assuming that's widespread enough for you.Oloroso
@BrettHale Well, so far every single implementation either of us cited has the property OP asked about. So why do you still refuse to believe this is actually the case? Clearly, neither of us was able to find a common implementation of the libc that does not exhibit OP's property. Of course, the standard does not guarantee this, but it does not guarantee that malloc() ever succeeds either, and almost all programs assume that it does.Indulgent

On platforms where an int has 64 bits and a double also has 64 bits, this may not work, because a double (the argument and result type of pow()) doesn't have enough mantissa bits to represent every 64-bit integer exactly. On an ordinary platform where an int has 32 bits and a double is a 64 bit IEEE 754 floating point type, your assumption is true.
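The mantissa limit is easy to demonstrate (a sketch assuming binary64 doubles): 2^53 + 1 is the first positive integer a double cannot represent.

```c
#include <stdint.h>

/* Round-tripping 2^53 + 1 through a double loses the low bit. */
static int roundtrip_lost(void)
{
    int64_t big = (INT64_C(1) << 53) + 1;
    double d = (double)big;      /* rounds to 2^53 under round-to-nearest */
    return (int64_t)d != big;    /* 1: the +1 was lost */
}
```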

Indulgent answered 2/9, 2015 at 12:43 Comment(10)
"On an ordinary platform where an int has 32 bits and a double is a 64 bit IEEE 754 floating point type, your assumption is true." – no, not quite.Waldo
Is the ("most common", etc.) implementation of the calculation of pow guaranteed to always return calculated >= actual? Intermediate floating point rounding can result in a slightly smaller value than a full integer, and the (int) cast rounds down.Pixie
@TheParamagneticCroissant How so? Please tell me where it isn't.Indulgent
@FUZxxl Sorry, I was thinking of a different situation.Waldo
@Pixie The standard says “The pow functions compute x raised to the power y.” I read this as “the result must be the floating-point number that is closest to the actual result”Indulgent
@FUZxxl: well of course; at least, that's the general idea. However, it seems to me current implementations use exp and a series of approximations. Would an ("any", "common", etc.) implementation check if the inputs are whole numbers and then put a limit on the number of convergence checks? (I immediately see a problem here ... how can you see if a double contains a whole number?)Pixie
@Pixie The accepted answer in the question you linked shows an implementation that specifically checks for integer base and exponent and is exact. It's from Apple's (BSD's) libm.Indulgent
@FUZxxl: You are reading too much into that sentence from the standard. See 5.2.4.2.2/6 "The accuracy of the floating-point operations (+, -, *, /) and of the library functions in <math.h> and <complex.h> that return floating-point results is implementation-defined."Dozer
@Dozer Well okay, so don't use floating point math then. I believe though that if your implementation defines __STDC_IEC_559__ (most do), then pow has to be accurate but I need to check further.Indulgent
@FUZxxl: __STDC_IEC_559__ guarantees the features listed in Annex F, which include conformance with the 1985 edition of IEEE-754, not the 2008 version. In 1985, only +, -, *, / and sqrt are required to be correctly rounded, and even in 2008 the additional functions would be optional. Notwithstanding all of that, I do use floating point, a lot. But when I convert to an integer, I make sure to round rather than truncate.Dozer

It depends on the sizes of your integers and doubles.

With a 32-bit integer and a 64-bit IEEE 754 double, it will give you a correct answer for all possible integers.

An IEEE 754 binary64 floating point number (which is the usual representation of doubles on most modern machines) can exactly represent every integer in [0, 2^53].
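Since INT_MAX = 2^31 - 1 is well below 2^53, a quick sanity check (a sketch; the function name is illustrative) confirms that 32-bit ints round-trip through a double without loss:

```c
#include <limits.h>

/* Every 32-bit int is exactly representable as a binary64 double. */
static int roundtrips(int n)
{
    return (int)(double)n == n;
}
```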

Cupid answered 2/9, 2015 at 12:45 Comment(4)
@TheParamagneticCroissant Show me one counterexample of positive n and m where this is not true. I can't think of any, but I'm open to being wrong.Cupid
sorry, I was thinking about a different situation. I still believe there are examples, for which I'm searching.Waldo
it depends on implementation. Some bad ones simply calculate pow(n, m) by exp(log(n)*m) and can't return the correct result for many integersKalbli
some examples: https://mcmap.net/q/162496/-return-value-of-pow-gets-rounded-down-if-assigned-to-an-integer/995714 https://mcmap.net/q/162872/-wrong-output-by-power-function-c-duplicate/995714Kalbli
