Why does the same code produce two different floating-point results on different machines?

Here's the code:

#include <iostream>
#include <math.h>

const double ln2per12 = log(2.0) / 12.0;

int main() {
    std::cout.precision(100);
    double target = 9.800000000000000710542735760100185871124267578125;
    double unnormalizatedValue = 9.79999999999063220457173883914947509765625;
    double ln2per12edValue = unnormalizatedValue * ln2per12;
    double errorLn2per12 = fabs(target - ln2per12edValue / ln2per12);
    std::cout << unnormalizatedValue << std::endl;
    std::cout << ln2per12 << std::endl;
    std::cout << errorLn2per12 << " <<<<< its different" << std::endl;
}

If I run it on my machine (MSVC), or here (GCC):

errorLn2per12 = 9.3702823278363212011754512786865234375e-12

Instead, here (GCC):

errorLn2per12 = 9.368505970996920950710773468017578125e-12

which is different. Is it due to machine epsilon? Or compiler precision flags? Or a different IEEE evaluation?

What's causing this drift? The problem seems to be in the fabs() function (since the other printed values seem to be the same).
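
One way to narrow down which value actually differs between the two machines is to print the exact bit patterns instead of rounded decimals. A minimal sketch reusing the values from the code above, with std::hexfloat added:

#include <iostream>
#include <cmath>

int main() {
    const double ln2per12 = std::log(2.0) / 12.0;
    double target = 9.800000000000000710542735760100185871124267578125;
    double unnormalizatedValue = 9.79999999999063220457173883914947509765625;
    double ln2per12edValue = unnormalizatedValue * ln2per12;
    double errorLn2per12 = std::fabs(target - ln2per12edValue / ln2per12);
    // std::hexfloat prints the exact binary value of each double,
    // so even a 1-ULP difference between machines shows up in the output.
    std::cout << std::hexfloat;
    std::cout << ln2per12 << std::endl;
    std::cout << ln2per12edValue << std::endl;
    std::cout << errorLn2per12 << std::endl;
}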

Peadar answered 14/2, 2019 at 15:50 Comment(14)
All sorts of things could be going on. Some machines have more precision than double actually uses, and you'll get less compounded error with the results staying in those extended registers than you would if each operation gets truncated.Excerpt
Possible duplicate of Is floating point math broken?Appalling
-Ofast turns on -ffast-math, which "...can result in incorrect output for programs that depend on an exact implementation of IEEE or ISO rules/specifications for math functions".Periodicity
"Or Compiler precision flags?" - Impossible to answer when you don't tell us what flags you pass to your compiler. But, yes, some flags, like -Ofast for GCC allows the compiler to break the rules and can lead to weird results (don't use it) - the Microsoft compiler has similar dangerous flags.Hearst
@MatthieuBrucher: please NO! This is not a duplicate of that question! It's specific to each platform. Please read carefully.Peadar
According to the Wikipedia IEEE 754 article, double-precision values have about 15.95 significant decimal digits. Anything beyond that will lose precision.Greenwood
@JesperJuhl: you have the links; you can check the flags there.Peadar
not a duplicate at all of that questionGrout
Obviously not a dupAzrael
First off, I think it is the expectation about the results that is broken, not the code/compiler itself. But start by examining the disassembly to see that both/all versions start with the same conversion of those constants to binary...Grout
And yes, replacing -Ofast by -O2 on coliru turns the 9.368... into the 9.370...Azrael
@Peadar Links can go stale, making the question worthless if those links contained crucial information. Try to make the question self-contained.Greenwood
@Peadar The flags should be in the question, not behind external links. As far as I'm concerned, if I have to follow a link, it doesn't exist as part of the question.Hearst
related: https://mcmap.net/q/655034/-are-there-any-drawbacks-to-using-o3-in-gccAzrael

Even without -Ofast, the C++ standard does not require implementations to be exact with log (or sin, or exp, etc.), only that they be within a few ulp (i.e. there may be some inaccuracies in the last binary places). This allows faster hardware (or software) approximations, which each platform/compiler may do differently.
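
For instance, you can measure how many ULPs your platform's log(2.0) is away from the correctly rounded result. A small sketch (the helper name ulp_distance is mine, the hex-float literal needs C++17, and the bit-pattern trick assumes finite doubles of the same sign):

#include <cmath>
#include <cstdint>
#include <cstring>
#include <iostream>

// Distance in ULPs between two finite doubles of the same sign.
std::int64_t ulp_distance(double a, double b) {
    std::int64_t ia, ib;
    std::memcpy(&ia, &a, sizeof a);  // bit-cast via memcpy (well-defined)
    std::memcpy(&ib, &b, sizeof b);
    return ia - ib;
}

int main() {
    const double correctly_rounded_ln2 = 0x1.62e42fefa39efp-1;  // nearest double to ln 2 (C++17 hex literal)
    double platform_ln2 = std::log(2.0);                        // whatever your libm returns
    std::cout << ulp_distance(platform_ln2, correctly_rounded_ln2)
              << " ULP(s) from the correctly rounded ln 2\n";
}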

(The only floating point math function that you will always get perfect results from on all platforms is sqrt.)

More annoyingly, you may even get different results between compile time (the compiler may use its own internal library to evaluate constant expressions as precisely as float/double allows) and run time (e.g. hardware-supported approximations).
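
One way to observe this on a single machine (a sketch; the volatile is there only to keep the compiler from constant-folding the second call, so it really goes through the runtime math library):

#include <cmath>
#include <iostream>

int main() {
    const double folded = std::log(2.0) / 12.0;   // candidate for compile-time folding
    volatile double x = 2.0;                      // volatile blocks constant folding
    double at_runtime = std::log(x) / 12.0;       // evaluated by libm at run time
    std::cout << std::hexfloat
              << folded << '\n'
              << at_runtime << '\n';              // any mismatch is a real bit difference
}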

If you want log to give the exact same result across platforms and compilers, you will have to implement it yourself using only +, -, *, / and sqrt (or find a library with this guarantee). And avoid a whole host of pitfalls along the way.
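
To give an idea of what "implement it yourself" means, here is a minimal sketch of a reproducible natural log built from the atanh series. It is my own illustration, not a recommended implementation: it also uses std::frexp (only because splitting off the exponent is exact), the name portable_log is made up, and it still assumes strict IEEE evaluation (no -ffast-math, FLT_EVAL_METHOD == 0).

#include <cmath>
#include <iostream>

// Sketch of a reproducible ln(x) for finite x > 0.
// Split x = m * 2^e with m in [0.5, 1), then ln(x) = ln(m) + e * ln(2),
// where ln(m) = 2 * (z + z^3/3 + z^5/5 + ...) and z = (m - 1) / (m + 1).
double portable_log(double x) {
    int e = 0;
    double m = std::frexp(x, &e);        // exact split into mantissa and exponent
    double z = (m - 1.0) / (m + 1.0);    // |z| <= 1/3, so the series converges quickly
    double z2 = z * z;
    double term = z;
    double sum = 0.0;
    for (int k = 1; k <= 59; k += 2) {   // fixed iteration count: same work everywhere
        sum += term / k;
        term *= z2;
    }
    const double ln2 = 0.6931471805599453;  // nearest double to ln 2, as a literal
    return 2.0 * sum + e * ln2;
}

int main() {
    std::cout.precision(17);
    std::cout << portable_log(2.0) / 12.0 << '\n';   // compare with log(2.0) / 12.0
}

The result is not guaranteed to be correctly rounded; the point is only that every platform performing the same operations in the same order under strict IEEE rules produces the same bits.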

If you need floating point determinism in general, I strongly recommend reading this article to understand how big of a problem you have ahead of you: https://randomascii.wordpress.com/2013/07/16/floating-point-determinism/

Janssen answered 14/2, 2019 at 16:3 Comment(10)
There's also some obscure flag to disallow the compiler from performing floating point operations at the hardware's natural float size (generally 80 bits). I've read something about it somewhere.Azrael
This is also worth a read: What Every Computer Scientist Should Know About Floating-Point ArithmeticHearst
@Azrael See en.cppreference.com/w/cpp/types/climits/FLT_EVAL_METHOD.Janssen
@MaxLanghof: the real "deal" is that if I remove the const from ln2per12, the results are the same. It seems that const does something different... coliru.stacked-crooked.com/a/d246fa05447efe19Peadar
@Peadar See the paragraph about differences between compilation and run time. (Also see the linked article under "Transcendentals".) With const, the compiler can constant-fold the expression and replace it with a precise value at compile-time. Without, it has to initialize it at runtime, which may use a completely different method for calculating log.Janssen
@MaxLanghof: if that's true, I would expect to see different outputs for that value when I cout it. Instead, it's the same. Try this: coliru.stacked-crooked.com/a/c113f402cf81e667 . It has no log at all, b is printed the same (with/without const), but c / b leads to different results depending on whether b is const.Peadar
@Peadar Looking at the assembly with and without const, the compiler constant-folds everything in both cases, no calculation happens at runtime. The difference is surprising, I agree, but fully within the range of conceivable compiler optimizations. For the record, if you use -O3 instead, you get the same result in both cases. If you want to know where precisely the difference comes from you'll have to dig through gcc optimization internals, but you have to live with it regardless.Janssen
It's not clear at all why this happens. Since it's not related to log anymore, I think I should delete this question and open a new one with a more precise target. Can you remove my question? Since you have replied, I can't...Peadar
What's the point of removing the question? Just ask a new one if you want to, but I don't think you'll get a good answer to "why does gcc optimize this in different ways". If you're lucky someone will guide you through the optimization sources, but what would you gain from it?Janssen
Let's see what's happening!Peadar
