Emulating FP64 with 2 FP32 on a GPU

If one were to emulate double-precision floating point with two single-precision floats, what would the performance be like, and can it be done well?

Currently Nvidia is charging quite a premium for double-precision-enabled Tesla cards, which get you one third of the single-precision performance (notable exceptions being the Titan/Titan Black).

If one were to use a GeForce GPU with gimped double precision and emulate double precision using two single-precision floats, what would the performance be like?

Hauck asked 30/3, 2015 at 11:17 Comment(7)
https://mcmap.net/q/15558/-emulate-quot-double-quot-using-2-quot-float-quot-s – Odoriferous
I saw that question. The articles are interesting, but there seem to be accuracy issues with the emulation. Furthermore, the Andrew Thall article doesn't discuss performance; from what I can tell, the performance discussion was deleted from the article in 2009. The other article gives a 1/2.5 ratio, if I'm reading it right. It seems to me like nobody is using this. You can find those articles by typing "float-float GPU" into Google, and the inquiries all predate CUDA, with none since. I was hoping to hear from someone who might know more. – Hauck
Modern GPUs have single-precision FMA (fused multiply-add), which allows a double-float to be implemented in about 8 instructions. The hard part is the double-float addition. If done accurately, it needs about 20 instructions. Note that double-float provides fewer bits than proper IEEE-754 double precision, and there is no correct rounding. Effective accuracy is around 44 bits vs 53 for double. Since double-float operations also increase register pressure compared with double, an overall estimate of double-float executing at 1/20 the speed of native IEEE-754 float seems reasonably conservative. – Kallman
I think the Andrew Thall work is still considered pretty representative of how to do it. Scott LeGrand is an expert on exploiting mixed-precision approaches on GPUs and gave a talk at GTC 2015 which touches on several approaches. I don't think using 2 SP to emulate a DP is a "common approach" on GPUs; it requires considerable coding expertise, and the library you end up with is probably not a perfect replacement for IEEE-754 DP FP. – Paraldehyde
Often, single-precision arithmetic combined with the use of certain mathematical idioms that employ error compensation in places critical to accuracy can provide much of the benefit of full-fledged double-float at lower cost. I gave a brief overview of the relevant literature on compensated computational methods in this recent post on the NVIDIA developer forums. – Kallman
My estimate of double-float accuracy was a bit too pessimistic. Except for operands below ~1e-30, where the tail can become denormalized or zero, a double-float can represent 24+24+1 = 49 "mantissa" bits (note the 1 bit is actually the sign bit of the tail). The instruction sequences for the double-float operations, carefully coded, may lose about two or three of those, retaining more like 46 or 47 bits, compared to the equivalent double-precision computation with 53 bits. – Kallman
In a previous comment I inadvertently omitted a word: I meant to say that FMA allows a double-float multiplication to be implemented in about 8 instructions (and the addition in about 20). – Kallman

You can get a rough estimate of the performance by counting the number of float operations required to implement each double-float operation. You would want to inspect binary code with cuobjdump --dump-sass to get an accurate count. I am showing a double-float multiplication below that takes full advantage of FMA (fused multiply-add) support on the GPU. For double-float addition code, I would point you to a paper by Andrew Thall as I do not have the time to code this up right now. From previous analysis I believe the addition code given in the paper is correct, and that it avoids common pitfalls in faster but less accurate implementations (which lose accuracy when the magnitude of the operands is within a factor of two).

If you are a registered CUDA developer you can download double-double code from NVIDIA's developer website (log in at https://developer.nvidia.com) which is under BSD license, and rework it relatively quickly into double-float code. NVIDIA's double-double code supports the operations addition, subtraction, division, square root, and reciprocal square root.

As you can see, the multiplication below requires 8 float instructions; unary negation is absorbed into FMA. The addition requires around 20 float instructions. However, the instruction sequences for double-float operations also require temporary variables, which increases register pressure and can decrease occupancy. A reasonably conservative estimate may therefore be that double-float arithmetic performs at 1/20 the throughput of native float arithmetic. You can easily measure this yourself, in the context relevant to you, i.e. your use case(s).

typedef float2 dblfloat;  // .y = head, .x = tail

__host__ __device__ __forceinline__ 
dblfloat mul_dblfloat (dblfloat x, dblfloat y)
{
    dblfloat t, z;
    float sum;
    /* exact product of the heads: t.y + t.x == x.y * y.y */
    t.y = x.y * y.y;
    t.x = fmaf (x.y, y.y, -t.y);
    /* fold the tail*tail and cross products into the tail via FMA */
    t.x = fmaf (x.x, y.x, t.x);
    t.x = fmaf (x.y, y.x, t.x);
    t.x = fmaf (x.x, y.y, t.x);
    /* normalize result */
    sum = t.y + t.x;
    z.x = (t.y - sum) + t.x;
    z.y = sum;
    return z;
}
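
For reference, a rough, untested sketch of what an accurate double-float addition might look like is shown below. It follows the standard TwoSum-based formulation described in Thall's paper rather than being code taken from it; the name add_dblfloat is just chosen here for illustration, and the sketch should be checked against the paper before use. It uses about 20 float operations, consistent with the estimate above.

__host__ __device__ __forceinline__
dblfloat add_dblfloat (dblfloat x, dblfloat y)
{
    dblfloat z;
    float hs, he, ts, te, t;
    /* TwoSum of the heads: hs + he == x.y + y.y exactly */
    hs = x.y + y.y;
    t  = hs - x.y;
    he = (x.y - (hs - t)) + (y.y - t);
    /* TwoSum of the tails: ts + te == x.x + y.x exactly */
    ts = x.x + y.x;
    t  = ts - x.x;
    te = (x.x - (ts - t)) + (y.x - t);
    /* fold in the tail sum and renormalize (Fast2Sum) */
    he = he + ts;
    t  = hs + he;
    he = he - (t - hs);
    hs = t;
    /* fold in the remaining error and renormalize once more */
    he = he + te;
    z.y = t = hs + he;
    z.x = he - (t - hs);
    return z;
}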

Note that in various applications, full double-float arithmetic may not be necessary. Instead one can use float computation, augmented by error compensating techniques, one of the oldest of which is the Kahan summation. I gave a brief overview of easily available literature on such methods in a recent posting in the NVIDIA developer forums. In the comments above, Robert Crovella also pointed to a GTC 2015 talk by Scott LeGrand, which I haven't had time to check out yet.
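
As a minimal illustration of the compensated-summation idea (not code from any of the sources mentioned above; kahan_sum is just a name chosen here), a single-precision Kahan summation can be written as:

__host__ __device__ float kahan_sum (const float *x, int n)
{
    float sum = 0.0f;
    float c = 0.0f;           /* running compensation for lost low-order bits */
    for (int i = 0; i < n; i++) {
        float y = x[i] - c;   /* apply the compensation */
        float t = sum + y;    /* low-order bits of y may be lost here */
        c = (t - sum) - y;    /* recover what was lost */
        sum = t;
    }
    return sum;
}

Note that aggressive compiler optimizations (e.g. fast-math style re-association) can eliminate the compensation term, so such code needs to be compiled with value-safe floating-point settings.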

As for accuracy, double-float has a representational precision of 49 (24+24+1) bits, compared with IEEE-754 double which provides 53 bits. However, double-float cannot maintain this precision for operands small in magnitude, as the tail portion can become a denormal or zero. When denormal support is turned on, the 49 bits of precision are guaranteed for 2^-101 <= |x| < 2^128. Denormal support for float is turned on by default in the CUDA tool chain for architectures >= sm_20, which means all architectures supported by the currently shipping version, CUDA 7.0.

As opposed to operations on IEEE-754 double data, double-float operations are not correctly rounded. For the double-float multiplication above, using 2 billion random test cases (with all source operands and results within the bounds stated above), I observed an upper bound of 1.42e-14 for the relative error. I do not have data for the double-float addition, but its error bound should be similar.
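
A simple host-side harness along the following lines could be used to reproduce such a measurement. This is only a sketch, assuming the dblfloat and mul_dblfloat definitions above and compilation with nvcc; the operand range and RNG here are placeholders, not what was used for the figure quoted above.

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/* split a double into head/tail form; the head is the nearest float */
static dblfloat double_to_dblfloat (double a)
{
    dblfloat r;
    r.y = (float)a;
    r.x = (float)(a - (double)r.y);
    return r;
}

int main (void)
{
    double max_rel_err = 0.0;
    srand (12345);
    for (int i = 0; i < 10000000; i++) {
        double a = (double)rand () / RAND_MAX + 1.0;  /* operands in [1, 2] */
        double b = (double)rand () / RAND_MAX + 1.0;
        dblfloat p = mul_dblfloat (double_to_dblfloat (a),
                                   double_to_dblfloat (b));
        double res = (double)p.y + (double)p.x;
        double ref = a * b;
        double rel = fabs (res - ref) / fabs (ref);
        if (rel > max_rel_err) max_rel_err = rel;
    }
    printf ("maximum relative error observed: %.6e\n", max_rel_err);
    return 0;
}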

Kallman answered 30/3, 2015 at 16:54 Comment(0)
