Assuming n and m are positive integers, and n^m is within the range of an int, will (int)pow(n,m) ever give a wrong answer? I have tried many n for m=2 and have not gotten any wrong answers so far.
The C standard does not impose any requirements on the accuracy of floating-point arithmetic. The accuracy is implementation-defined, which means that implementations are required to document it. However, implementations are left with a significant "out" (§5.2.4.2.2 paragraph 6, emphasis added):

The accuracy of the floating-point operations (+, -, *, /) and of the library functions in <math.h> and <complex.h> that return floating-point results is implementation-defined, as is the accuracy of the conversion between floating-point internal representations and string representations performed by the library functions in <stdio.h>, <stdlib.h>, and <wchar.h>. The implementation may state that the accuracy is unknown.
And, indeed, gcc takes advantage of this by specifying that the accuracy is unknown. Nonetheless, the accuracy of glibc computations is pretty good, even if not guaranteed.
The MS libc implementation is known to occasionally produce an error of 1ULP for the pow
function with integer arguments, resulting in an incorrect value if the result of the pow
operation is simply truncated to an int
. (I couldn't find any specification in the Visual Studio documentation about floating point accuracy, but I believe that the list of SO questions below provides evidence of my assertion.)
On x86 architectures, most implementations make some attempt to implement IEEE 754, since the native floating-point representation conforms. However, until the 2008 revision, IEEE-754 only required correctly-rounded results from +, -, *, / and sqrt. Since the revision, it recommends that a number of other functions return correctly-rounded results, but all of these recommendations are optional, and few math libraries implement all of them.
If you really want to use pow to compute integer powers of integers, it is advisable (and easy) to use lround(pow(n, m)) instead of (long)(pow(n, m)), which will round the result to the nearest integer rather than optimistically relying on the error being positive. That should give the correct integer value for results up to 2^52 (with IEEE-754 doubles) if pow is within 1 ULP. Between 2^52 and 2^53, a 1 ULP error would be 0.5, which will sometimes round to the wrong integer. Beyond 2^53, not all integers are representable as doubles.
SO is actually full of questions resulting from this particular problem. See: … and undoubtedly many more.
Some implementations evaluate pow(x,y) as exp(y*log(x)) in all cases, while others use square-multiply powering for integral exponents. glibc, for example, has C implementations as well as platform-specific implementations with lookup tables, polynomial approximations, etc. Many Sun/BSD derivatives appear to handle integral exponents as a special case.
More importantly, IEEE-754 doesn't require pow to be correctly rounded, nor does it specify the required accuracy of the result. glibc documents its maximum ULP errors, though that page might not be up to date.
In summary, don't assume an exact integral result, even if the true value of pow(n,m) has an exact floating-point representation.
pow is implemented via log and exp; those operations are required to "round correctly" by IEEE rules, see en.wikipedia.org/wiki/…. The question is, if those operations are rounded correctly, will the answer ever be off? Of course you can't assume every C implementation abides by IEEE rules, but most modern ones will. – Zenithal

pow(x, y) is equivalent to exp(y*log(x)), not exp(x*log(y)). – Kalbli

… double numbers), then the nearest number is the integer itself, rendering the OP's assumption correct. – Indulgent

… malloc() ever succeeds either, but almost all programs assume this. – Indulgent

On platforms where an int has 64 bits and a double also has 64 bits, this may not work, as a double (the arguments and result of pow()) doesn't have enough mantissa to represent each 64-bit integer exactly. On an ordinary platform where an int has 32 bits and a double is a 64-bit IEEE 754 floating-point type, your assumption is true.

… pow guaranteed to always return calculated >= actual? Intermediate floating-point rounding can result in a slightly smaller value than a full integer, and the (int) cast rounds down. – Pixie

… exp and a series of approximations. Would an ("any", "common", etc.) implementation check if the inputs are whole numbers and then put a limit on the number of convergence checks? (I immediately see a problem here… how can you see if a double contains a whole number?) – Pixie

"The accuracy of the floating-point operations (+, -, *, /) and of the library functions in <math.h> and <complex.h> that return floating-point results is implementation-defined." – Dozer

… __STDC_IEC_559__ (most do), then pow has to be accurate, but I need to check further. – Indulgent

__STDC_IEC_559__ guarantees the features listed in Annex F, which include conformance with the 1985 edition of IEEE-754, not the 2008 version. In 1985, only +, -, *, / and sqrt are required to be correctly rounded, and even in 2008 the additional functions would be optional. Notwithstanding all of that, I do use floating point, a lot. But when I convert to an integer, I make sure to round rather than truncate. – Dozer
It depends on the sizes of your integers and doubles. With a 32-bit integer and a 64-bit IEEE 754 double, it will give you a correct answer for all possible integers.

An IEEE 754 binary64 floating-point number (the usual representation of double on most modern machines) can exactly represent all integers in [0, 2^53].
… exp(log(n)*m) and can't return the correct result for many integers. – Kalbli
If int is a 64-bit integer, double may not be able to accurately represent INT_MAX. In this case (int)pow(INT_MAX, 1) != INT_MAX. – Buffon