Do any real-world CPUs not use IEEE 754?

I'm optimizing a sorting function for a numerics/statistics library based on the assumption that, after filtering out any NaNs and doing a little bit-twiddling, floats can be compared as 32-bit ints without changing the result, and doubles can be compared as 64-bit ints.

This seems to speed up sorting these arrays by somewhere on the order of 40%, and my assumption holds as long as the bit-level representation of floating point numbers is IEEE 754. Are there any real-world CPUs that people actually use (excluding in embedded devices, which this library doesn't target) that use some other representation that might break this assumption?


Bridgeman answered 10/2, 2010 at 4:44 Comment(2)
Very interesting – could you elaborate on how you're comparing them as ints? – Partitive
Note that you're probably also assuming integer endianness = FP endianness, which is true on most (all?) "normal" CPUs, but it's possible at least in theory for them to differ on an IEEE 754-compliant machine. – Horsa

Other than the flawed early Pentiums (the FDIV bug), any x86- or x64-based CPU uses IEEE 754 as its floating-point arithmetic standard.

Here is a brief overview of the floating-point arithmetic standards and their adoption:

IEEE 754:       Intel x86, and all RISC systems (IBM Power
                and PowerPC, Compaq/DEC Alpha, HP PA-RISC,
                Motorola 68xxx and 88xxx, SGI (MIPS) R-xxxx,
                Sun SPARC, and others);

VAX:            Compaq/DEC

IBM S/390:      IBM (however, in 1998, IBM added an IEEE 754
                option to S/390)

Cray:           X-MP, Y-MP, C-90; other Cray models have been
                based on Alpha and SPARC processors with
                IEEE-754 arithmetic.

Unless you're planning to support your library on fairly exotic CPU architectures, it is safe to assume that, for now, 99% of CPUs are IEEE 754 compliant.

Arbutus answered 10/2, 2010 at 4:55 Comment(4)
It varies. Many real-world implementations of the architectures on your list nearly support IEEE 754, but with caveats like not having the full set of NaNs, forcing denormals to zero, errors of an ULP or two in multiplication/division results, having multiplication differ by an ULP or two depending on operand order, etc. So "99% of CPUs are IEEE 754 compliant" needs a disclaimer: the spirit is true, and for the purposes of the question you are correct, but in general the devil is often in the detail. More like, 99% of CPUs are 99% IEEE 754 compliant. – Shrive
Exotic architectures the standards committees care about – Studdingsail
@phuclv: Architectures where full support for IEEE 754 semantics can be expensive aren't remotely exotic. Even on modern CPUs, code which has to handle all possible corner cases for infinities, NaNs, and denormals will often be significantly slower than code which can be optimized in ways that slightly change such behaviors. – Semicentennial
@moonshadow: For the purposes of the exact question asked, flush-to-zero and the lack of precision don't matter. The format of float/double in memory is still IEEE 754 binary32 and binary64, and therefore integer comparisons can be used (as long as the values are all positive, or the sign bit is handled appropriately). This also assumes that float endianness matches integer endianness, which this answer doesn't address. – Horsa

It depends on where you draw the line between the "real world" and the imaginary one.

  1. VAX G format is still supported on Alpha machines (which HP says they will support through at least 2013).
  2. IBM hexadecimal FP is still supported by IBM z-series mainframes. They've added IEEE binary and decimal support, but from what I've heard they're rarely used, because the hexadecimal FP is quite a bit faster (IBM's been optimizing it for about 45 years now...)

Until fairly recently, Unisys still sold ClearPath IX servers that supported the Burroughs FP format, and ClearPath MCP machines that supported the Univac FP format. I believe those are now only run in emulation (on Xeons) but from a software viewpoint, they'll probably continue in active use for another decade or more.

There are even a few people still using DtCyber to run Plato on (emulated) Control Data mainframes, with their unique floating point format. (Sorry, but my first serious programming was on a CDC Cyber machine, so I couldn't resist bringing it up, even if it hasn't been "real world" for decades).

Scolex answered 10/2, 2010 at 5:24 Comment(0)

The Cell processor's SPUs differ in a few ways (such as lacking INF and NaNs), but I don't think those differences would break your assumptions...

Benzel answered 10/2, 2010 at 4:56 Comment(2)
Good point. The ARM NEON SIMD unit (used in the newer iPhones and other mobile devices) differs in a few ways as well, though the CPU is able to execute conformant float computations in VFP mode. Oh, and the MIPS R5900 (PlayStation 2) had some issues as well; most noticeably, the last mantissa bit of a multiplication result was undefined. – Collage
I currently have three separate pieces of embedded hardware on my desk at work with three different PowerPC-derived CPUs that are noncompliant in three different ways... – Shrive

PowerPC processors (Macs until about 2006-2007, plus many current IBM servers) use a 128-bit format consisting of two doubles for long double, instead of the IEEE 754 extended format.

However, in C or Objective-C, there is no portable way to interpret a 32-bit or 64-bit floating-point number as an integer (assuming float and uint32_t, or double and uint64_t, have the same number of bits). When I needed to do that kind of thing, I had to write different code depending on the compiler (one version used a union, another cast double* to long long*). No idea whether reinterpret_cast in C++ will do it portably.

Downes answered 30/8, 2014 at 21:52 Comment(2)
It's double-double arithmetic. – Studdingsail
The 1980s Macintosh "Standard Apple Numerical Environment" used 32-, 64-, and 80-bit floating-point types, with the 80-bit type being the fastest, since the exponent and mantissa could easily be loaded into registers without bit masking; and while I don't know if SANE took advantage of this fact, deferred normalization can avoid what can sometimes be one of the slowest parts of repeated floating-point addition. – Semicentennial

Many real-world CPUs don't have any native floating-point format. Many implementations of C and other languages for such CPUs bundle libraries that use IEEE-754 single and double-precision formats and omit the extended-precision format despite the fact that other formats would be more suitable for many purposes.

Semicentennial answered 23/3, 2019 at 18:4 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.