Why might compile-time floating point calculations not have the same results as run-time calculations?
In constexpr: Introduction, the speaker mentioned "Compile-time floating point calculations might not have the same results as runtime calculations".

And the reason is related to "cross-compiling".

Honestly, I can't get the idea clearly. IMHO, different platforms may also have different implementations of integers.

Why does it only affect floating point? Or am I missing something?

Incursion answered 21/6, 2018 at 1:10 Comment(9)
All integers are precise. The majority of floats are not. Therefore the implementation of floats matters at runtime on different platforms.Varicose
So it is not related to the size/endianness of integer/floating-point on different platforms?Incursion
@NanXiao The size and bounds are specified by the implementation. Given those, it doesn't matter what the endianness or anything else is; you are guaranteed to get the same result for any well-defined computation, whether it's at runtime or compile time. With floats, a lot more than just size and bounds are needed to get a full specification.Trakas
@DanielH Thanks for your comment! So it seems this is not related to "cross-compiling" too much, but about floating-point implementation. Correct? Thanks!Incursion
@NanXiao Not entirely about cross-compiling, but as a practical matter if your compiler is running on the same system as the target binary then there's at least a good chance it'll use that system's floating-point processing capabilities the same way. I wouldn't be too surprised if a non-cross-compiling x64 sample were available for some major compiler, though.Trakas
The reason, I believe, is because constexpr objects can be treated symbolically, meaning there will be no rounding errors, i.e. 1e100 + 1 - 1e100 will actually be 1 as a constexpr.Bindweed
@PasserBy I'm fairly sure that isn't at all guaranteed, but it should be allowed.Trakas
@PasserBy "treated symbolically" That sounds extremely vicious to me...Gliwice
@NanXiao "the size/endianness of integer/floating-point" ...matters if you try to read a numeric type as an array of bytes, or an integer as a float (type punning). Do you do that? With constexpr?Gliwice

Why does it only affect floating points?

Because the standard doesn't impose restrictions on floating-point operation accuracy.

As per expr.const, emphasis mine:

[ Note: Since this document imposes no restrictions on the accuracy of floating-point operations, it is unspecified whether the evaluation of a floating-point expression during translation yields the same result as the evaluation of the same expression (or the same operations on the same values) during program execution. [ Example:

bool f() {
    char array[1 + int(1 + 0.2 - 0.1 - 0.1)];  // Must be evaluated during translation
    int size = 1 + int(1 + 0.2 - 0.1 - 0.1);   // May be evaluated at runtime
    return sizeof(array) == size;
}

It is unspecified whether the value of f() will be true or false. — end example ]
— end note ]

Contrariwise answered 21/6, 2018 at 1:16 Comment(1)
As I understand, a common difference is in the evaluation of transcendental functions. Compilers may use a library like MPFR that provides correctly-rounded transcendental functions at compile time, while linking the executable to a performance-optimized library with slightly larger errors for transcendental functions to be used at run time.Chatterjee

You're absolutely right that, at some level, the problem of calculating floating-point values at compile time is the same as that of calculating integer values. The difference is in the complexity of the task. It's fairly easy to emulate 24-bit integer math on a system that has 16-bit registers; for serious programmers, that's a finger exercise. It's much harder to do floating-point math if you don't have a native implementation. The decision to not require floating-point constexpr is based in part on that difference: it would be really expensive to require cross-compilers to emulate floating-point math for their target platform at compile time.

Another factor in this is that some details of floating-point calculations can be set at runtime. Rounding is one; handling of overflows and underflows is another. There's simply no way that a compiler can know the full context for the runtime evaluation of a floating-point calculation, so calculating the result at compile-time can't be done reliably.

Thanhthank answered 21/6, 2018 at 12:51 Comment(2)
Who wants to do something special on "underflow"?Gliwice
@Gliwice — never underestimate the ingenuity of serious floating-point folks! The issue on underflow is sudden versus gradual. When a result is too small to represent with full precision, it can either be treated as zero or it can be represented with less than full precision. The latter used to be called “denormals”; now they’re “subnormals”.Thanhthank

Why does it only affect floating points?

Some operations on integers are invalid and undefined:

  • divide by zero: mathematical operation not defined for the operand
  • overflow: mathematical value not representable for the given type

[The compiler will detect such cases in compile-time evaluation. At runtime the behavior is not defined by the standard and can be anything: raising a signal, wrapping modulo 2^n, or "random" behavior if the compiler assumed the operations were valid.]

Operations on integers that are valid are completely specified mathematically.

Division of integers in C/C++ (and most programming languages) is an exact, fully specified operation: it truncates toward zero rather than approximating the division of rationals. 5/3 is 1, even though the decimal value of 5/3 is 1.66..., whose closest integer is 2.

The aim of fp is to provide the best approximation of the mathematical operation on "real numbers" (actually rational numbers, for the four basic operations; floats are rational by definition). These operations are rounded according to the current rounding mode, set with std::fesetround, so fp operations are state dependent: the result isn't a function of the operands alone. (See std::fegetround, std::fesetround.)

There is no such "state" at compile time, so compile-time fp operations cannot, by definition, be consistent with run-time operations.

Gliwice answered 21/6, 2018 at 14:45 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.