I'm optimizing a sorting function for a numerics/statistics library based on the assumption that, after filtering out any NaNs and doing some bit twiddling, floats can be compared as 32-bit ints and doubles as 64-bit ints without changing the result.
This seems to speed up sorting these arrays by somewhere on the order of 40%, and my assumption holds as long as the bit-level representation of floating-point numbers is IEEE 754. Are there any real-world CPUs that people actually use (excluding those in embedded devices, which this library doesn't target) that use some other representation that might break this assumption?
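For concreteness, this is roughly the bit twiddling I have in mind, as a minimal C sketch (the helper name `float_sort_key` is just for illustration, not the library's actual API). After the transform, ordinary unsigned integer comparison of the keys matches the numeric ordering of the original floats, assuming NaNs were already filtered out:

```c
#include <stdint.h>
#include <string.h>

/* Map an IEEE 754 binary32 value (NaNs filtered out beforehand) to a
 * uint32_t key whose unsigned ordering matches the float's numeric ordering. */
static uint32_t float_sort_key(float f) {
    uint32_t u;
    memcpy(&u, &f, sizeof u);  /* reinterpret the bits without UB */
    /* Negative values: flip every bit so larger magnitudes sort lower.
     * Non-negative values: set the sign bit so they sort above all negatives. */
    return (u & 0x80000000u) ? ~u : (u | 0x80000000u);
}
```

The double version is the same idea with `uint64_t` and a `0x8000000000000000` sign mask.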
- https://en.wikipedia.org/wiki/Single-precision_floating-point_format (binary32, aka `float` in systems that use IEEE 754)
- https://en.wikipedia.org/wiki/Double-precision_floating-point_format (binary64, aka `double` in systems that use IEEE 754)