What does standard say about cmath functions like std::pow, std::log etc?
Does the standard guarantee that functions return the exact same result across all implementations?

Take, for example, pow(float,float) for 32-bit IEEE floats. Is the result identical across all implementations if the same two floats are passed in?

Or is there some flexibility that the standard allows with regard to tiny differences depending on the algorithm used to implement pow?

Mom answered 18/10, 2015 at 18:17 Comment(4)
The std does not mandate float and double to be IEEE-754 types. They could be machine-specific types, with different ranges and precision.Eusebioeusebius
If you have two machines that use the same representation for floats and doubles, and with the same settings regarding rounding, I don't know the answer. In all other cases, the answer is no, they can give different results.Eusebioeusebius
Also consider two different compilers on the same machine that obeys IEEE-754. If the standard said the answer must be the same, it would have to define what the correct answer is.Mom
Related: Does any floating point-intensive code produce bit-exact results in any x86-based architecture?: Summary: no in C, but yes in asm.Electrotechnics
No, the C++ standard doesn't require the results of cmath functions to be the same across all implementations. For starters, you may not get IEEE-754/IEC 60559 floating point arithmetic.

That said, if an implementation does use IEC 60559 and defines __STDC_IEC_559__, then it must adhere to Annex F of the C standard (yes, your question is about C++, but the C++ standard defers to the C standard for C headers like math.h). Annex F states:

  • The float type matches the IEC 60559 single format.
  • The double type matches the IEC 60559 double format.
  • The long double type matches an IEC 60559 extended format, else a non-IEC 60559 extended format, else the IEC 60559 double format.
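In C++ you can ask whether you actually got those formats; a minimal check (just a sketch, and whether the C macro __STDC_IEC_559__ is even visible in C++ is implementation-specific) might look like:

    #include <iostream>
    #include <limits>

    int main() {
        // True only when float/double use the IEC 60559 (IEEE-754) formats.
        std::cout << "float is IEC 60559:  " << std::numeric_limits<float>::is_iec559 << '\n';
        std::cout << "double is IEC 60559: " << std::numeric_limits<double>::is_iec559 << '\n';

    #ifdef __STDC_IEC_559__
        // The implementation also claims Annex F conformance via the C macro.
        std::cout << "__STDC_IEC_559__ is defined\n";
    #endif
    }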

Further, it says normal arithmetic must follow the IEC 60559 standard:

  • The +, -, *, and / operators provide the IEC 60559 add, subtract, multiply, and divide operations.

It further requires sqrt to follow IEC 60559:

  • The sqrt functions in <math.h> provide the IEC 60559 square root operation.
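Together, these mean the basic operations and sqrt are correctly rounded, hence reproducible on conforming implementations, while a function like pow is not pinned down the same way. A sketch of the distinction (the literals are arbitrary, and it assumes intermediates aren't kept in excess precision):

    #include <cmath>
    #include <cstdio>

    int main() {
        double a = 0.1, b = 0.3;

        // Fully specified: exactly one correctly rounded result, so every
        // conforming IEC 60559 implementation must produce the same bits.
        double sum  = a + b;
        double root = std::sqrt(b);

        // Not fully specified for ordinary arguments: the amount of rounding
        // error is up to the math library, so the bits may differ between
        // libraries, compilers, and CPUs.
        double p = std::pow(a, b);

        std::printf("%a\n%a\n%a\n", sum, root, p);   // %a shows the exact bit pattern
    }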

It then goes on to describe the behavior of several other floating-point functions, most of which you probably aren't interested in for this question.

Finally, it gets to the math.h header and specifies how the various math functions (e.g. sin, cos, atan2, exp) should handle special cases (e.g. asin(±0) returns ±0, atanh(x) returns a NaN and raises the "invalid" floating-point exception for |x| > 1). But it never nails down the exact computation for normal inputs, which means you can't rely on all implementations producing the exact same results.
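Those special cases can be observed directly; a rough sketch (it assumes the compiler lets you inspect the floating-point environment here, which strictly requires FENV_ACCESS support):

    #include <cfenv>
    #include <cmath>
    #include <cstdio>

    int main() {
        // Annex F: asin(±0) returns ±0, so the sign of zero is preserved.
        std::printf("signbit(asin(-0.0)) = %d\n", (int)std::signbit(std::asin(-0.0)));

        // Annex F: atanh(x) for |x| > 1 returns a NaN and raises "invalid".
        std::feclearexcept(FE_ALL_EXCEPT);
        double r = std::atanh(2.0);
        std::printf("atanh(2.0) is NaN: %d, FE_INVALID raised: %d\n",
                    std::isnan(r) ? 1 : 0,
                    std::fetestexcept(FE_INVALID) ? 1 : 0);
    }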

So no, it doesn't require these functions to behave the same across all implementations, even if the implementations all define __STDC_IEC_559__.


This is all from a theoretical perspective. In practice, things are even worse. CPUs generally implement IEC 60559 arithmetic, but it can run under different rounding modes (so results can differ from computer to computer), and the compiler, depending on optimization flags, might make assumptions about your floating-point arithmetic that aren't strictly standards-conforming.

So in practice, it's even less strict than it is in theory, and you're very likely to see two computers produce slightly different results at some point or another.
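For example, merely switching the dynamic rounding mode changes the result of an ordinary division. A sketch (it assumes the target defines FE_UPWARD and that the compiler doesn't constant-fold the divisions, which again strictly needs FENV_ACCESS):

    #include <cfenv>
    #include <cstdio>

    int main() {
        volatile double a = 1.0, b = 3.0;   // volatile discourages constant folding

        std::fesetround(FE_TONEAREST);
        double nearest = a / b;

        std::fesetround(FE_UPWARD);
        double upward = a / b;              // same operands, different rounding mode

        std::printf("%a\n%a\n", nearest, upward);   // typically differ in the last bit
        std::fesetround(FE_TONEAREST);
    }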


A real-world example of this is glibc, the GNU C Library. It has a table of known error limits for its math functions across different CPUs. If all C math functions were bit-exact, those tables would show 0 ULPs of error everywhere. But they don't: the tables show there are indeed varying amounts of error in its C math functions. I think this sentence is the most interesting summary:

Except for certain functions such as sqrt, fma and rint whose results are fully specified by reference to corresponding IEEE 754 floating-point operations, and conversions between strings and floating point, the GNU C Library does not aim for correctly rounded results for functions in the math library[...]

The only things that are bit-exact in glibc are the things that are required to be bit-exact by Annex F of the C standard. And as you can see in their table, most things aren't.
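If you want to measure such differences yourself (say, between two machines or two libraries), the ULP distance those tables report can be computed along these lines. A sketch for finite, positive doubles, with ulp_distance being a made-up helper name:

    #include <cmath>
    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    // Distance in units in the last place between two finite, same-sign doubles:
    // for positive IEEE-754 doubles, the integer ordering of the bit patterns
    // matches the ordering of the values.
    std::int64_t ulp_distance(double a, double b) {
        std::int64_t ia, ib;
        std::memcpy(&ia, &a, sizeof a);
        std::memcpy(&ib, &b, sizeof b);
        return ia > ib ? ia - ib : ib - ia;
    }

    int main() {
        double x = 1.0000001;
        // x * x is correctly rounded; pow(x, 2.0) need not be, so any nonzero
        // distance shows the library taking some liberty (zero proves nothing).
        std::printf("ULP distance: %lld\n",
                    (long long)ulp_distance(std::pow(x, 2.0), x * x));
    }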

Perceptual answered 18/10, 2015 at 18:35 Comment(9)
Thanks for the detailed answer. So to guarantee the same result one would need to write their own pow function.Mom
Pretty much. In fact, for situations where bit-exactness is important (like in audio and video codecs), it's not uncommon to implement your own fixed-point arithmetic.Perceptual
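For the integer-exponent case, a reproducible pow can be built from multiplication alone, which Annex F does fully specify. A minimal sketch (pow_uint is a made-up name, and it still assumes intermediates aren't kept in excess precision):

    #include <cstdint>

    // Exponentiation by squaring: every step is a single IEC 60559 multiply,
    // which is correctly rounded, so conforming implementations agree bit for bit.
    double pow_uint(double base, std::uint32_t exp) {
        double result = 1.0;
        while (exp != 0) {
            if (exp & 1u) result *= base;   // fold in this bit of the exponent
            base *= base;                   // square for the next bit
            exp >>= 1;
        }
        return result;
    }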
What do you mean by normal inputs in your sentence "But it never nails down the exact computation for normal inputs"?Rattail
@Zboson: I mean inputs that shouldn't produce any error or need any special handling. For example, for the function acos, 0.5 would be considered a normal input because arccosine is well defined for that input value. But the input 1 needs special handling, because it results in the value zero and IEC 60559 has two zeros: +0 and -0. Annex F states acos(1) should yield +0, not -0. But Annex F doesn't talk about acos(0.5) because that "normal" input value doesn't need any special handling. Does that clear things up?Perceptual
@Cornstalks, yes, that's what I thought you meant. But I thought the only exceptions were for the non-normal inputs/outputs such as min(0,-0) or minmax(x,-x). I'm surprised that normal inputs are not guaranteed to be the same. You mean that I can't even rely on acos(0.5) giving the same result with different compilers when __STDC_IEC_559__ is defined?Rattail
@Zboson: Indeed, you cannot rely on acos(0.5) being the exact same value with different compilers. Heck, you can't even rely on it being the same with the same compiler on different architectures. glibc has a nice table listing the ULP error of their math functions, and as you can see, these math functions are not bit-exact across all architectures.Perceptual
@Cornstalks: I was not aware of that. Thank you for this information.Rattail
Related: Does any floating point-intensive code produce bit-exact results in any x86-based architecture?: Summary: no in C, but yes in asm. So it can be possible to distribute a binary that produces bit-identical results on every machine, but even that's hard because of dynamically linked libm.Electrotechnics
Also, it would be good to mention that + - * / and sqrt are required by the IEEE standard to produce correctly-rounded results, so there is no room for error there. It's only C compilers that have the freedom to break things according to FLT_EVAL_METHOD (precision of temporaries) and by contracting a*b+c into an FMA. See my answer on the linked question for more details.Electrotechnics
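A concrete illustration of the FMA-contraction point (a sketch; whether the first expression is contracted depends on flags such as -ffp-contract):

    #include <cmath>
    #include <cstdio>

    int main() {
        double eps = std::ldexp(1.0, -27);          // 2^-27
        double a = 1.0 + eps, b = 1.0 - eps, c = -1.0;

        // The compiler may contract this into one fused multiply-add, which
        // rounds only once; without contraction, a*b rounds to 1.0 first.
        double maybe_contracted = a * b + c;

        // std::fma always fuses, so comparing the two exposes the difference:
        // 0x0p+0 without contraction vs. -0x1p-54 with it.
        double fused = std::fma(a, b, c);

        std::printf("%a\n%a\n", maybe_contracted, fused);
    }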
