No, the C++ standard doesn't require the results of `<cmath>` functions to be the same across all implementations. For starters, you may not get IEEE-754/IEC 60559 floating-point arithmetic at all.
That said, if an implementation does use IEC 60559 and defines `__STDC_IEC_559__`, then it must adhere to Annex F of the C standard (yes, your question is about C++, but the C++ standard defers to the C standard for C headers like `math.h`). Annex F states:
- The `float` type matches the IEC 60559 single format.
- The `double` type matches the IEC 60559 double format.
- The `long double` type matches an IEC 60559 extended format, else a non-IEC 60559 extended format, else the IEC 60559 double format.
Further, it says normal arithmetic must follow the IEC 60559 standard:
- The `+`, `-`, `*`, and `/` operators provide the IEC 60559 add, subtract, multiply, and divide operations.
It further requires `sqrt` to follow IEC 60559:
- The `sqrt` functions in `<math.h>` provide the IEC 60559 square root operation.
It then goes on to describe the behavior of several other floating-point functions, most of which you probably aren't interested in for this question.
Finally, it gets to the `math.h` header, and specifies how the various math functions (e.g. `sin`, `cos`, `atan2`, `exp`, etc.) should handle special cases (e.g. `asin(±0)` returns `±0`, `atanh(x)` returns a NaN and raises the "invalid" floating-point exception for |x| > 1, etc.). But it never nails down the exact computation for normal inputs, which means you can't rely on all implementations producing the exact same results.
So no, it doesn't require these functions to behave the same across all implementations, even if the implementations all define `__STDC_IEC_559__`.
This is all from a theoretical perspective. In practice, things are even worse. CPUs generally implement IEC 60559 arithmetic, but that arithmetic has different rounding modes (so results will differ from computer to computer), and the compiler (depending on optimization flags) might make assumptions about your floating-point arithmetic that aren't strictly standards-conforming.
So in practice, it's even less strict than it is in theory, and you're very likely to see two computers produce slightly different results at some point or another.
A real-world example of this is in glibc, the GNU C library implementation. They have a table of known error limits for their math functions across different CPUs. If all C math functions were bit-exact, those tables would show 0 ULPs of error everywhere. They don't: the tables show varying amounts of error in the C math functions. I think this sentence is the most interesting summary:
Except for certain functions such as `sqrt`, `fma` and `rint` whose results are fully specified by reference to corresponding IEEE 754 floating-point operations, and conversions between strings and floating point, the GNU C Library does not aim for correctly rounded results for functions in the math library [...]
The only things that are bit-exact in glibc are the things that are required to be bit-exact by Annex F of the C standard. And as you can see in their table, most things aren't.