No, this is not another "Why is (1/3.0)*3 != 1" question.
I've been reading a lot about floating point lately; specifically, how the same calculation might give different results on different architectures or optimization settings.
This is a problem for video games that store replays, or that are networked peer-to-peer (as opposed to server-client), since they rely on all clients generating exactly the same results every time they run the program. A small discrepancy in one floating-point calculation can lead to a drastically different game state on different machines (or even on the same machine!)
This happens even amongst processors that "follow" IEEE-754, primarily because some processors (namely x86) use double extended precision. That is, they use 80-bit registers to do all the calculations, then round to 64 or 32 bits, leading to different results than machines which use 64 or 32 bits throughout the calculations.
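To see how the width of the intermediate rounding changes a result, here is a small sketch (written in Java only because it translates almost verbatim to C#; the class name is invented for the example). Accumulating the same ten values gives a different answer depending on whether each step is rounded to 32 bits or kept in a wider 64-bit intermediate — the same effect, in miniature, as 80-bit x87 registers versus 64-bit arithmetic:

```java
public class PrecisionDemo {
    // Accumulate ten copies of 0.1f, rounding to 32-bit float after every step.
    public static float sumAsFloat() {
        float s = 0f;
        for (int i = 0; i < 10; i++) s += 0.1f;
        return s;
    }

    // Same ten values, but kept in a wider 64-bit double until one final rounding.
    public static float sumViaDouble() {
        double s = 0.0;
        for (int i = 0; i < 10; i++) s += 0.1f;
        return (float) s;
    }

    public static void main(String[] args) {
        System.out.println(sumAsFloat());   // 1.0000001
        System.out.println(sumViaDouble()); // 1.0
    }
}
```

Both loops add exactly the same ten values; only the precision of the intermediate sum differs, yet the final 32-bit results are not bit-equal.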
I've seen several solutions to this problem online, but all for C++, not C#:
- Disable double extended-precision mode (so that all `double` calculations use IEEE-754 64 bits) using `_controlfp_s` (Windows), `_FPU_SETCW` (Linux?), or `fpsetprec` (BSD).
- Always run the same compiler with the same optimization settings, and require all users to have the same CPU architecture (no cross-platform play). Because my "compiler" is actually the JIT, which may optimize differently every time the program is run, I don't think this is possible.
- Use fixed-point arithmetic, and avoid `float` and `double` altogether. `decimal` would work for this purpose, but would be much slower, and none of the `System.Math` library functions support it.
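For what the fixed-point option might look like, here is a minimal sketch (again in Java for illustration; the `Fixed` class and the Q32.16 layout are my own choices for the example, not an existing library). All arithmetic is plain 64-bit integer math, which is bit-identical on every platform:

```java
// Minimal Q32.16 fixed-point sketch: values are 64-bit integers scaled by
// 2^16, so every operation is exact integer arithmetic — no FP rounding modes,
// no extended registers, identical results everywhere.
public class Fixed {
    public static final long ONE = 1L << 16;

    // Conversions: use these only at the edges (constants, display), never
    // inside the deterministic simulation itself.
    public static long fromDouble(double v) { return Math.round(v * ONE); }
    public static double toDouble(long f)   { return (double) f / ONE; }

    public static long add(long a, long b) { return a + b; }
    public static long sub(long a, long b) { return a - b; }
    public static long mul(long a, long b) { return (a * b) >> 16; }   // can overflow for large magnitudes
    public static long div(long a, long b) { return (a << 16) / b; }   // same caveat; b must be nonzero
}
```

For example, `Fixed.mul(Fixed.fromDouble(1.5), Fixed.fromDouble(2.0))` yields exactly `Fixed.fromDouble(3.0)`. The trade-offs are the ones the question anticipates: limited range, manual overflow care, and no ready-made `System.Math`-style library.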
So, is this even a problem in C#? What if I only intend to support Windows (not Mono)?
If it is, is there any way to force my program to run at normal double-precision?
If not, are there any libraries that would help keep floating-point calculations consistent?
Isn't `decimal` essentially a type of floating-point emulation? – Ss

Java has the `strictfp` keyword, which forces all calculations to be done in the stated size (`float` or `double`) rather than an extended size. However, Java still has many problems with IEEE-754 support. Very (very, very) few programming languages support IEEE-754 well. – Sal

…or `sqrtps`), but the trick is getting the same or similar sources to always compile the same. It's problematic even in C/C++ if you allow different compilers. Related: Does any floating point-intensive code produce bit-exact results in any x86-based architecture?. But at least with an ahead-of-time compiled language, you can usually get determinism if you avoid any FP library functions that are allowed to be different on different platforms. – Eventful
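To make the `strictfp` comment concrete, here is a small sketch (class and method names invented for the example). Under strict IEEE-754 semantics an intermediate result that exceeds `double` range must overflow to infinity; it cannot quietly survive in an 80-bit extended register:

```java
// strictfp pins every intermediate to the declared IEEE-754 size (float or
// double) instead of an extended register. Since Java 17 this behavior is
// the default and the keyword is redundant, but it still compiles.
public strictfp class StrictDemo {
    public static double mulStrict(double a, double b) {
        return a * b; // rounded to 64-bit double, per IEEE-754
    }
}
```

For example, `StrictDemo.mulStrict(1e308, 10.0)` exceeds the `double` range (~1.8e308) and yields positive infinity, on every platform.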