Here's the code:
#include <iostream>
#include <math.h>

const double ln2per12 = log(2.0) / 12.0;

int main() {
    std::cout.precision(100);

    double target = 9.800000000000000710542735760100185871124267578125;
    double unnormalizatedValue = 9.79999999999063220457173883914947509765625;

    double ln2per12edValue = unnormalizatedValue * ln2per12;
    double errorLn2per12 = fabs(target - ln2per12edValue / ln2per12);

    std::cout << unnormalizatedValue << std::endl;
    std::cout << ln2per12 << std::endl;
    std::cout << errorLn2per12 << " <<<<< its different" << std::endl;
}
If I try on my machine (MSVC), or here (GCC):
errorLn2per12 = 9.3702823278363212011754512786865234375e-12
Instead, here (GCC):
errorLn2per12 = 9.368505970996920950710773468017578125e-12
which is different. Is it due to machine epsilon? Or compiler precision flags? Or a different IEEE evaluation? What's the cause of this drift? The problem seems to be in the fabs() function (since the other values seem the same).
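One way to narrow that down is to print each intermediate step in hex so its exact bit pattern can be compared across compilers; this is a small diagnostic sketch (the std::hexfloat printing is an addition, not part of the original program). If the quotient already differs between builds, fabs() is not the culprit:

#include <iostream>
#include <cmath>

int main() {
    const double ln2per12 = std::log(2.0) / 12.0;
    const double target = 9.800000000000000710542735760100185871124267578125;
    const double unnormalizatedValue = 9.79999999999063220457173883914947509765625;

    const double product  = unnormalizatedValue * ln2per12;   // first rounding
    const double quotient = product / ln2per12;                // second rounding

    // Hex floats expose the exact bits, so any divergence is visible
    // before fabs() is ever involved.
    std::cout << std::hexfloat
              << "product  = " << product  << '\n'
              << "quotient = " << quotient << '\n'
              << "error    = " << std::fabs(target - quotient) << '\n';
}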
Comments:

…double actually uses, and you'll get less compound error with the results staying in those extended registers than you would if each operation gets truncated. – Excerpt
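Whether a given build actually keeps intermediates in wider registers can be checked with FLT_EVAL_METHOD from <cfloat> (a minimal sketch, assuming a C++11 compiler):

#include <cfloat>
#include <iostream>

int main() {
    // 0: every operation rounds to its own type (typical x86-64 SSE code)
    // 1: float/double expressions are evaluated as double
    // 2: intermediates are kept as long double (typical 32-bit x87 code)
    std::cout << "FLT_EVAL_METHOD    = " << FLT_EVAL_METHOD << '\n'
              << "long double digits = " << LDBL_DIG << '\n';
}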
-Ofast turns on -ffast-math, which "...can result in incorrect output for programs that depend on an exact implementation of IEEE or ISO rules/specifications for math functions." – Periodicity
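If it is unclear which flags an online compiler passes, note that GCC and Clang predefine the __FAST_MATH__ macro when -ffast-math is in effect (which -Ofast implies); a small sketch to report it:

#include <iostream>

int main() {
#ifdef __FAST_MATH__
    // -ffast-math (pulled in by -Ofast) is active: IEEE rules may be relaxed.
    std::cout << "built with -ffast-math\n";
#else
    std::cout << "built without -ffast-math\n";
#endif
}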
-Ofast for GCC allows the compiler to break the rules and can lead to weird results (don't use it); the Microsoft compiler has similar dangerous flags. – Hearst
Replacing -Ofast with -O2 on coliru turns the 9.368... into the 9.370... – Azrael
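If the goal is simply to make the two optimization levels agree, one possible workaround (a sketch, assuming the drift comes from the compiler keeping the intermediate product in a wider register or simplifying it away) is to force each step through a genuine double with volatile:

#include <iostream>
#include <cmath>

int main() {
    const double ln2per12 = std::log(2.0) / 12.0;
    const double target = 9.800000000000000710542735760100185871124267578125;
    const double unnormalizatedValue = 9.79999999999063220457173883914947509765625;

    // volatile forces each intermediate to be stored as an actual double,
    // so the optimizer cannot keep it in a wider register or cancel the
    // multiply against the divide; -ffast-math can still affect other code.
    volatile double product  = unnormalizatedValue * ln2per12;
    volatile double quotient = product / ln2per12;

    std::cout.precision(100);
    std::cout << std::fabs(target - quotient) << std::endl;
}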