How dangerous is it to compare floating point values?

I know UIKit uses CGFloat because of the resolution-independent coordinate system.

But every time I want to check if, for example, frame.origin.x is 0, it makes me feel sick:

if (theView.frame.origin.x == 0) {
    // do important operation
}

Isn't CGFloat vulnerable to false positives when comparing with ==, <=, >=, <, >? It is a floating point type, and floating point types have precision problems: 0.0000000000041, for example.

Does Objective-C handle this internally when comparing, or can it happen that an origin.x which reads as zero does not compare to 0 as true?

Tails answered 26/4, 2012 at 13:41 Comment(1)
It's mostly a problem for non-integer values, where rounding errors occur easily. I wrote a blog post that describes when rounding errors happen and how to estimate the size of potential errors.Buonomo
508

First of all, floating point values are not "random" in their behavior. Exact comparison can and does make sense in plenty of real-world usages. But if you're going to use floating point you need to be aware of how it works. Erring on the side of assuming floating point works like real numbers will get you code that quickly breaks. Erring on the side of assuming floating point results have large random fuzz associated with them (like most of the answers here suggest) will get you code that appears to work at first but ends up having large-magnitude errors and broken corner cases.

Before anything else, if you want to program with floating point, you should read this:

What Every Computer Scientist Should Know About Floating-Point Arithmetic

Yes, read all of it. If that's too much of a burden, you should use integers/fixed point for your calculations until you have time to read it. :-)

Now, with that said, the biggest issues with exact floating point comparisons come down to:

  1. The fact that lots of values you may write in the source, or read in with scanf or strtod, do not exist as floating point values and get silently converted to the nearest approximation. This is what demon9733's answer was talking about.

  2. The fact that many results get rounded due to not having enough precision to represent the actual result. An easy example where you can see this is adding x = 0x1fffffe and y = 1 as floats. Here, x has 24 bits of precision in the mantissa (ok) and y has just 1 bit, but when you add them, their bits are not in overlapping places, and the result would need 25 bits of precision. Instead, it gets rounded (to 0x2000000 in the default rounding mode). A short demo of this item and the next appears after this list.

  3. The fact that many results get rounded due to needing infinitely many places for the correct value. This includes both rational results like 1/3 (which you're familiar with from decimal where it takes infinitely many places) but also 1/10 (which also takes infinitely many places in binary, since 5 is not a power of 2), as well as irrational results like the square root of anything that's not a perfect square.

  4. Double rounding. On some systems (particularly x86), floating point expressions are evaluated in higher precision than their nominal types. This means that when one of the above types of rounding happens, you'll get two rounding steps, first a rounding of the result to the higher-precision type, then a rounding to the final type. As an example, consider what happens in decimal if you round 1.49 to an integer (1), versus what happens if you first round it to one decimal place (1.5) then round that result to an integer (2). This is actually one of the nastiest areas to deal with in floating point, since the behaviour of the compiler (especially for buggy, non-conforming compilers like GCC) is unpredictable.

  5. Transcendental functions (trig, exp, log, etc.) are not specified to have correctly rounded results; the result is just specified to be correct within one unit in the last place of precision (usually referred to as 1ulp).
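
To make items 2 and 3 concrete, here is a minimal C demo (a sketch; the exact output assumes IEEE-754 single and double precision, which virtually every current platform provides):

#include <stdio.h>

int main(void)
{
    /* Item 2: the exact sum needs 25 bits of significand; float has 24. */
    float x = 0x1fffffe;     /* 2^25 - 2, representable in 24 bits */
    float y = 1;
    printf("%.1f\n", x + y); /* prints 33554432.0 (0x2000000), not 33554431.0 */

    /* Item 3: 0.1 has no finite binary representation. */
    printf("%.20f\n", 0.1);  /* prints 0.10000000000000000555 */
    return 0;
}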

When you're writing floating point code, you need to keep in mind what you're doing with the numbers that could cause the results to be inexact, and make comparisons accordingly. Often it will make sense to compare with an "epsilon", but that epsilon should be based on the magnitude of the numbers you are comparing, not an absolute constant. (In cases where an absolute constant epsilon would work, that's strongly indicative that fixed point, not floating point, is the right tool for the job!)

Edit: In particular, a magnitude-relative epsilon check should look something like:

if (fabs(x-y) < K * FLT_EPSILON * fabs(x+y))

Where FLT_EPSILON is the constant from float.h (replace it with DBL_EPSILON for doubles or LDBL_EPSILON for long doubles) and K is a constant you choose such that the accumulated error of your computations is definitely bounded by K units in the last place (and if you're not sure you got the error bound calculation right, make K a few times bigger than what your calculations say it should be).

Finally, note that if you use this, some special care may be needed near zero, since FLT_EPSILON does not make sense for denormals. A quick fix would be to make it:

if (fabs(x-y) < K * FLT_EPSILON * fabs(x+y) || fabs(x-y) < FLT_MIN)

and likewise substitute DBL_MIN if using doubles.
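
Putting the two conditions together, a minimal sketch of the full check as a helper (the wrapper name nearly_equal and the double flavor are my illustration, not part of the answer; you still have to derive K for your own computation):

#include <float.h>
#include <math.h>
#include <stdbool.h>

/* Magnitude-relative comparison as described above. K must bound the
 * accumulated rounding error of the computations that produced x and y,
 * measured in units in the last place. */
static bool nearly_equal(double x, double y, double K)
{
    return fabs(x - y) < K * DBL_EPSILON * fabs(x + y)
        || fabs(x - y) < DBL_MIN;   /* special care near zero (denormals) */
}

For example, nearly_equal(0.1 + 0.2, 0.3, 4.0) returns true even though 0.1 + 0.2 == 0.3 is false.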

Happening answered 26/4, 2012 at 14:33 Comment(23)
fabs(x+y) is problematic if x and y (can) have different sign. Still, a good answer against the tide of cargo-cult comparisons.Winwaloe
If x and y have different sign, it's no problem. The righthand side will be "too small", but since x and y have different sign, they should not compare equal anyway. (Unless they are so small as to be denormal, but then the second case catches it)Happening
I'm curious about your statement: "especially for buggy, non-conformant compilers like GCC". Is really GCC buggy and also non-conformant?Fronnia
Thanks @R.. for your complete explanations! It's good to know that the option -std=c99 is useful to circumvent this problem.Fronnia
Since the question is tagged iOS, it's worth noting that Apple's compilers (both clang and Apple's gcc builds) have always used FLT_EVAL_METHOD = 0, and attempt to be completely strict about not carrying excess precision. If you find any violations of that, please file bug reports.Nissen
@R..: Apple never shipped an x86 processor without SSE2. All arithmetic other than long double is done on SSE, even on 32-bit.Nissen
OK that makes sense, but I think your usage of "Apple's compilers" was unclear. I assumed it applied to clang (Apple's compiler) on any target rather than just on Apple systems.Happening
This answer doesn't seem to explain how "erring on the side of assuming floating point results have large random fuzz associated with them will get you code that appears to work at first but ends up having large-magnitude errors and broken corner cases."Paton
@R.: You might find this question interesting. I would certainly like to know your opinion on it, if any.Mitten
"First of all, floating point values are not "random" in their behavior. Exact comparison can and does make sense in plenty of real-world usages." - Just two sentences and already earned a +1! That's one of the most disturbing misassumptions people make when working with floating points.Oeildeboeuf
@R.. Would you approve this? gist.github.com/hfossli/4616c778bea3a334f034 I replaced "K" with "accuracy" if that makes sense.Krouse
An issue not yet mentioned in this answer's list is that if a computation yields a "not a number" result, that answer will compare unequal to itself. That's not an issue when comparing a variable to a constant that isn't a NaN, but can be an issue when e.g. testing whether a floating-point value is in a table.Kristof
What's the absolute error? In other words, what's the smallest value of ERR for which fabs(x-y) < ERR is valid for all x and y?Libbey
So K should be a value more than 1 if we were to make it less accurate? Meaning more margin for error.Diametral
Note __FLT_EPSILON__ can be used if you do not desire to import float.hPuerperal
@AlbertRenshaw: That's not C. It's a gcc feature meant to be used only for implementation internals. float.h is a standard freestanding header and there's no reason not to include it.Happening
If code is using FLT_EPSILON, implying a float computation, then at least in C it makes more sense to use float fabsf(float) than double fabs(double).Phyl
Could you explain why you compute fabs(x-y) < K * FLT_EPSILON * fabs(x+y) instead of fabs(x-y) < K * FLT_EPSILON * fabs(x) or fabs(x-y) < K * FLT_EPSILON * fabs(y) which express a relative error?Romberg
@DanielFischer Any idea why the author did this?Romberg
@Maggyero: Because the form I put it in is symmetric in x and y, which is a property you would normally expect. Assuming x and y are close, x+y is approximately equal to 2*x or 2*y anyway (you could divide out that extra factor of 2 if you like) but doesn't have unexpected behavior from asymmetry.Happening
I like that symmetry argument. But is it correct to say that the three formulas ([…] fabs(x+y), […] fabs(x), […] fabs(y)) are basically equivalent (beside only the first one having the symmetry property), in the sense that they all give the same results in practice? And do you have a reference for the symmetric formula or it is from you (I cannot find it in David Goldberg’s article)?Romberg
In x86 64-bit mode floating-point calculations are handled by SSE and SSE2 instruction sets, x87 FPU instructions are no longer generated by your compiler. From Intel docs: "SSE and SSE2 extensions operate on the same single-precision and double-precision floating-point data types that the x87 FPU operates on. However, when operating on these data types, the SSE and SSE2 extensions operate on them in their native format (single-precision or double-precision), in contrast to the x87 FPU which extends them to double extended-precision floating-point format to perform computations..."Nagano
@MaximEgorushkin: "x86" in the answer referred to 32-bit, not x86_64. x87 is used on x86_64, but only for long double, and as such the double rounding does not apply.Happening
41

Since 0 is exactly representable as an IEEE754 floating-point number (or using any other implementation of f-p numbers I've ever worked with) comparison with 0 is probably safe. You might get bitten, however, if your program computes a value (such as theView.frame.origin.x) which you have reason to believe ought to be 0 but which your computation cannot guarantee to be 0.

To clarify a little, a computation such as:

areal = 0.0

will (unless your language or system is broken) create a value such that (areal==0.0) returns true but another computation such as

areal = 1.386 - 2.1*(0.66)

may not.
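
A two-line check makes the difference visible (a sketch; the exact residue assumes IEEE-754 doubles, but any such system will show a nonzero result):

#include <stdio.h>

int main(void)
{
    double areal = 1.386 - 2.1 * 0.66;  /* mathematically zero */
    printf("%d\n", areal == 0.0);       /* prints 0 */
    printf("%.17g\n", areal);           /* a tiny nonzero residue, about -2.2e-16 */
    return 0;
}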

If you can assure yourself that your computations produce values which are 0 (and not just that they produce values which ought to be 0) then you can go ahead and compare f-p values with 0. If you can't assure yourself to the required degree, best stick to the usual approach of 'toleranced equality'.

In the worst cases the careless comparison of f-p values can be extremely dangerous: think avionics, weapons-guidance, power-plant operations, vehicle navigation, almost any application in which computation meets the real world.

For Angry Birds, not so dangerous.

Calvin answered 26/4, 2012 at 13:55 Comment(2)
Actually, 1.30 - 2*(0.65) is a perfect example of an expression that obviously evaluates to 0.0 if your compiler implements IEEE 754, because the doubles represented as 0.65 and 1.30 have the same significands, and multiplication by two is obviously exact.Harkey
Still getting rep from this one, so I changed the second example snippet.Calvin
24

I want to give a bit of a different answer than the others. They are great for answering your question as stated but probably not for what you need to know or what your real problem is.

Floating point in graphics is fine! But there is almost no need to ever compare floats directly. Why would you need to do that? Graphics uses floats to define intervals. And checking whether a float is within an interval that is also defined by floats is always well defined and merely needs to be consistent, not accurate or precise! As long as a pixel (which is also an interval!) can be assigned, that's all graphics needs.

So if you want to test whether your point is inside or outside a half-open [0..width[ range, this is just fine. Just make sure you define inclusion consistently. For example, always define inside as (x >= 0 && x < width). The same goes for intersection or hit tests; see the sketch below.

However, if you are abusing a graphics coordinate as some kind of flag, like for example to see if a window is docked or not, you should not do this. Use a boolean flag that is separate from the graphics presentation layer instead.
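
A minimal sketch of what consistent inclusion looks like in practice (the helper names are mine, not from any graphics API):

#include <stdbool.h>

/* Half-open interval: a point exactly on the upper edge belongs to the
 * NEXT interval, so adjacent ranges tile space with no gap or overlap. */
static bool inside_range(float v, float lo, float hi)
{
    return v >= lo && v < hi;
}

/* The same convention applied to a rectangle hit test. */
static bool inside_rect(float px, float py,
                        float x, float y, float w, float h)
{
    return inside_range(px, x, x + w) && inside_range(py, y, y + h);
}

Because both tests use the same convention, two rectangles that share an edge never both claim a point on that edge.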

Bruell answered 14/5, 2012 at 4:19 Comment(0)
15

Comparing to zero can be a safe operation, as long as the zero wasn't a calculated value (as noted in an above answer). The reason for this is that zero is a perfectly representable number in floating point.

Talking about perfectly representable values, you get 24 bits of significand in single precision, each bit standing for a power of two. So 1, 2, 4 are perfectly representable, as are .5, .25, and .125. As long as all your important bits fit within those 24 bits, you are golden. So 10.625 can be represented precisely.

This is great, but will quickly fall apart under pressure. Two scenarios spring to mind: 1) When a calculation is involved. Don't trust that sqrt(3)*sqrt(3) == 3. It just won't be that way, and it probably won't be within an epsilon, as some of the other answers suggest. 2) When any non-power-of-two (NPOT) fraction is involved. It may sound odd, but 0.1 is an infinite series in binary, and therefore any calculation involving a number like this will be imprecise from the start. (A short demo of both scenarios follows below.)

(Oh and the original question mentioned comparisons to zero. Don't forget that -0.0 is also a perfectly valid floating-point value.)
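
Both scenarios in a few lines (a sketch; the printed digits assume IEEE-754 doubles):

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Scenario 1: a calculation is involved. */
    double r = sqrt(3.0) * sqrt(3.0);
    printf("%d %.17g\n", r == 3.0, r);      /* prints: 0 2.9999999999999996 */

    /* Scenario 2: a non-power-of-two fraction is involved. */
    double sum = 0.1 + 0.1 + 0.1;
    printf("%d %.17g\n", sum == 0.3, sum);  /* prints: 0 0.30000000000000004 */
    return 0;
}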

Incomplete answered 14/5, 2012 at 3:48 Comment(0)
12

[The 'right answer' glosses over selecting K. Selecting K ends up being just as ad hoc as selecting VISIBLE_SHIFT, but selecting K is less obvious because, unlike VISIBLE_SHIFT, it is not grounded in any display property. Thus pick your poison - select K or select VISIBLE_SHIFT. This answer advocates selecting VISIBLE_SHIFT and then demonstrates the difficulty in selecting K.]

Precisely because of rounding errors, you should not use comparison of 'exact' values for logical operations. In your specific case of a position on a visual display, it can't possibly matter whether the position is 0.0 or 0.0000000003 - the difference is invisible to the eye. So your logic should be something like:

#define VISIBLE_SHIFT    0.0001        // for example
if (fabs(theView.frame.origin.x) < VISIBLE_SHIFT) { /* ... */ }

However, in the end, 'invisible to the eye' will depend on your display properties. If you can upper-bound the display (you should be able to), then choose VISIBLE_SHIFT to be a fraction of that upper bound.

Now, the 'right answer' rests upon K so let's explore picking K. The 'right answer' above says:

K is a constant you choose such that the accumulated error of your computations is definitely bounded by K units in the last place (and if you're not sure you got the error bound calculation right, make K a few times bigger than what your calculations say it should be)

So we need K. If getting K is more difficult and less intuitive than selecting my VISIBLE_SHIFT, then you'll decide what works for you. To find K we are going to write a test program that looks at a bunch of K values, so we can see how it behaves. It ought to be obvious how to choose K, if the 'right answer' is usable. No?

We are going to use, as the 'right answer' details:

if (fabs(x-y) < K * DBL_EPSILON * fabs(x+y) || fabs(x-y) < DBL_MIN)

Let's just try all values of K:

#include <math.h>
#include <float.h>
#include <stdio.h>

int main (void)
{
  double x = 1e-13;
  double y = 0.0;

  double K = 1e22;
  int i = 0;

  for (; i < 32; i++, K = K/10.0)
    {
      printf ("K:%40.16lf -> ", K);

      if (fabs(x-y) < K * DBL_EPSILON * fabs(x+y) || fabs(x-y) < DBL_MIN)
        printf ("YES\n");
      else
        printf ("NO\n");
    }
}
ebg@ebg$ gcc -o test test.c
ebg@ebg$ ./test
K:10000000000000000000000.0000000000000000 -> YES
K: 1000000000000000000000.0000000000000000 -> YES
K:  100000000000000000000.0000000000000000 -> YES
K:   10000000000000000000.0000000000000000 -> YES
K:    1000000000000000000.0000000000000000 -> YES
K:     100000000000000000.0000000000000000 -> YES
K:      10000000000000000.0000000000000000 -> YES
K:       1000000000000000.0000000000000000 -> NO
K:        100000000000000.0000000000000000 -> NO
K:         10000000000000.0000000000000000 -> NO
K:          1000000000000.0000000000000000 -> NO
K:           100000000000.0000000000000000 -> NO
K:            10000000000.0000000000000000 -> NO
K:             1000000000.0000000000000000 -> NO
K:              100000000.0000000000000000 -> NO
K:               10000000.0000000000000000 -> NO
K:                1000000.0000000000000000 -> NO
K:                 100000.0000000000000000 -> NO
K:                  10000.0000000000000000 -> NO
K:                   1000.0000000000000000 -> NO
K:                    100.0000000000000000 -> NO
K:                     10.0000000000000000 -> NO
K:                      1.0000000000000000 -> NO
K:                      0.1000000000000000 -> NO
K:                      0.0100000000000000 -> NO
K:                      0.0010000000000000 -> NO
K:                      0.0001000000000000 -> NO
K:                      0.0000100000000000 -> NO
K:                      0.0000010000000000 -> NO
K:                      0.0000001000000000 -> NO
K:                      0.0000000100000000 -> NO
K:                      0.0000000010000000 -> NO

Ah, so K should be 1e16 or larger if I want 1e-13 to be 'zero'.

So, I'd say you have two options:

  1. Do a simple epsilon computation using your engineering judgement for the value of 'epsilon', as I've suggested. If you are doing graphics and 'zero' is meant to be a 'visible change', then examine your visual assets (images, etc.) and judge what epsilon can be.
  2. Don't attempt any floating point computations until you've read the non-cargo-cult answer's reference (and gotten your Ph.D in the process) and then use your non-intuitive judgement to select K.
Oleta answered 26/4, 2012 at 13:51 Comment(2)
One aspect of resolution-independence is that you cannot tell for sure what a "visible shift" is at compile-time. What is invisible on a super-HD screen might very well be obvious on a tiny-ass screen. One should at least make it a function of screen size. Or name it something else.Viccora
But at least selecting 'visible shift' is based on easily understood display (or frame) properties - unlike the <correct answer's> K which is difficult and non-intuitive to select.Oleta
6

The correct question: how does one compare points in Cocoa Touch?

The correct answer: CGPointEqualToPoint().

A different question: Are two calculated values the same?

The answer posted here: They are not.

How to check if they are close? If you want to check whether they are close, then don't use CGPointEqualToPoint(). But better: don't check to see if they are close at all. Do something that makes sense in the real world, like checking to see if a point is beyond a line or if a point is inside a sphere; see the sketch below.
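
For instance, a minimal sketch of an 'inside a circle' test in plain C (the function name is illustrative):

#include <math.h>
#include <stdbool.h>

/* Instead of asking "is this point exactly at the center?", ask a
 * question with real-world meaning: is it within radius r of it? */
static bool point_in_circle(double px, double py,
                            double cx, double cy, double r)
{
    return hypot(px - cx, py - cy) <= r;
}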

Stallard answered 7/5, 2013 at 12:9 Comment(0)
4

The last time I checked the C standard, there was no requirement for floating point operations on doubles (64 bits total, 53 bit mantissa) to be accurate to more than that precision. However, some hardware might do the operations in registers of greater precision, and the requirement was interpreted to mean no requirement to clear lower order bits (beyond the precision of the numbers being loaded into the registers). So you could get unexpected results of comparisons like this depending on what was left over in the registers from whoever slept there last.

That said, and despite my efforts to expunge it whenever I see it, the outfit where I work has lots of C code that is compiled using gcc and run on linux, and we have not noticed any of these unexpected results in a very long time. I have no idea whether this is because gcc is clearing the low-order bits for us, the 80-bit registers are not used for these operations on modern computers, the standard has been changed, or what. I'd like to know if anyone can quote chapter and verse.
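
Chapter and verse does exist: C99 standardized exactly this latitude as FLT_EVAL_METHOD in <float.h> (C99 §5.2.4.2.2): 0 means expressions are evaluated at their nominal precision, 1 means float and double arithmetic is evaluated as double, and 2 means everything is evaluated as long double (the classic 80-bit x87 behavior). A quick way to check a given toolchain (a sketch; output varies by platform and by flags such as gcc's -mfpmath):

#include <float.h>
#include <stdio.h>

int main(void)
{
    /* 0: nominal precision; 1: float/double as double;
       2: everything as long double; -1: indeterminable. */
    printf("FLT_EVAL_METHOD = %d\n", (int)FLT_EVAL_METHOD);
    return 0;
}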

Parlor answered 14/5, 2012 at 1:41 Comment(0)
0

You can use code like this to compare a float with zero:

if ((int)(theView.frame.origin.x * 100) == 0) {
    // do important operation
}

This will compare with 0.1 accuracy, which is enough for CGFloat in this case.

Metapsychology answered 22/1, 2017 at 18:43 Comment(2)
Casting to int without ensuring theView.frame.origin.x is in/near the range of int leads to undefined behavior (UB) - or in this case, 1/100th the range of int.Phyl
There's absolutely no reason to convert to integer like this. As chux said, there's the potential for UB from out-of-range values; and on some architectures this will be significantly slower than just doing the computation in floating point. Lastly, multiplying by 100 like that will compare with 0.01 precision, not 0.1.Alic
0

Another issue that may need to be kept in mind is that different implementations do things differently. One example of this that I am very familiar with is the FP units on the Sony Playstation 2. They have significant discrepancies when compared to the IEEE FP hardware in any X86 device. The cited article mentions the complete lack of support for inf and NaN, and it gets worse.

Less well known is what I came to know as the "one bit multiply" error. For certain values of float x:

    y = x * 1.0;
    assert(y == x);

would fail the assert. In the general case, sometimes, but not always, the result of a FP multiply on the Playstation 2 had a mantissa that was a single bit less than the equivalent IEEE mantissa.

My point being that you should not assume that porting FP code from one platform to another will produce the same results. Any given platform is internally consistent, in that results don't change on that platform; it's just that they may not agree with a different platform. E.g. CPython on X86 uses 64-bit doubles to represent floats, while CircuitPython on a Cortex M0 has to use software FP, and only uses 32-bit floats. Needless to say, that will introduce discrepancies.

A quote I learned over 40 years ago is as true today as the day I learned it. "Doing floating point maths on a computer is like moving a pile of sand. Every time you do anything, you leave a little sand behind and pick up a little dirt."

Playstation is a registered trademark of Sony Corporation.

Charcuterie answered 1/9, 2022 at 22:35 Comment(0)
-1
- (BOOL)isFloatEqual:(CGFloat)firstValue secondValue:(CGFloat)secondValue {
    // Note: isEqualToNumber: still performs an exact comparison of the
    // boxed doubles, so this has the same pitfalls as using == directly.
    NSNumber *firstValueNumber = [NSNumber numberWithDouble:firstValue];
    NSNumber *secondValueNumber = [NSNumber numberWithDouble:secondValue];

    return [firstValueNumber isEqualToNumber:secondValueNumber];
}

Southeaster answered 12/3, 2019 at 16:43 Comment(0)
-1

I am using the following comparison function, which compares to a given number of decimal places:

#include <cmath>
#include <cstdint>

bool compare(const double value1, const double value2, const int precision)
{
    // llround rounds to nearest instead of truncating toward zero, so
    // 0.57 * 100 compares as 57 rather than 56. Beware that the scaled
    // values can still overflow int64_t for very large inputs.
    const double magnitude = std::pow(10, precision);
    const int64_t intValue1 = std::llround(value1 * magnitude);
    const int64_t intValue2 = std::llround(value2 * magnitude);
    return intValue1 == intValue2;
}

// Compare 9 decimal places:
if (compare(theView.frame.origin.x, 0, 9)) {
    // do important operation
}
Despiteful answered 23/3, 2020 at 10:17 Comment(0)
-7

I'd say the right thing is to declare each number as an object, and then define three things in that object: 1) an equality operator, 2) a setAcceptableDifference method, and 3) the value itself. The equality operator returns true if the absolute difference of two values is less than the value set as acceptable.

You can subclass the object to suit the problem. For example, round bars of metal between 1 and 2 inches might be considered of equal diameter if their diameters differed by less than 0.0001 inches. So you'd call setAcceptableDifference with parameter 0.0001, and then use the equality operator with confidence.

Schlicher answered 2/5, 2012 at 22:52 Comment(2)
This is Not A Good Answer. First, the whole "object thing" does nothing whatsoever to solve your issue. And second, your actual implementation of "equality" isn't in fact the correct one.Machinery
Tom, maybe you'd think again about the "object thing". With real numbers, represented to high precision, equality rarely happens. But one's idea of equality may be tailored if it suits you. It would be nicer if there was an overridable 'approximately equal' operator, but there ain't.Schlicher
