Is floating-point == ever OK?

Just today I came across third-party software we're using and in their sample code there was something along these lines:

// Defined in somewhere.h
static const double BAR = 3.14;

// Code elsewhere.cpp
void foo(double d)
{
    if (d == BAR)
        ...
}

I'm aware of the problems with floating-point numbers and their representation, but it made me wonder whether there are cases where float == float would be fine? I'm not asking when it could work, but when it makes sense and works.

Also, what about a call like foo(BAR)? Will this always compare equal as they both use the same static const BAR?

Tinsel answered 13/1, 2011 at 17:5 Comment(3)
I always thought that foo == bar but bar != pi :)Christoffer
Who downvoted this? It's a great question.Printable
A closely related, must-read, in-depth blog post on the topic: randomascii.wordpress.com/2013/07/16/floating-point-determinismIapetus

There are two ways to answer this question:

  1. Are there cases where float == float gives the correct result?
  2. Are there cases where float == float is acceptable coding?

The answer to (1) is: Yes, sometimes. But it's going to be fragile, which leads to the answer to (2): No. Don't do that. You're begging for bizarre bugs in the future.

As for a call of the form foo(BAR): in that particular case the comparison will return true, but when you are writing foo you don't know (and shouldn't depend on) how it is called. For example, calling foo(BAR) will be fine, but foo(BAR / 3.0 * 3.0) may well break (multiplying or dividing by 2.0 happens to be exact in binary floating point, but most other arithmetic is not). You shouldn't be relying on the caller not performing any arithmetic!
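
As a quick aside, here is a minimal sketch (not taken from the question's library) showing how even one innocent-looking addition is enough to make == fail:

#include <cstdio>

int main() {
    double sum = 0.1 + 0.2;           // neither 0.1 nor 0.2 is exactly representable in binary
    std::printf("%d\n", sum == 0.3);  // prints 0: sum is actually 0.30000000000000004...
    std::printf("%.17g vs %.17g\n", sum, 0.3);
    return 0;
}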

Long story short, even though a == b will work in some cases you really shouldn't rely on it. Even if you can guarantee the calling semantics today maybe you won't be able to guarantee them next week so save yourself some pain and don't use ==.

To my mind, float == float is never* OK because it's pretty much unmaintainable.

*For small values of never.

Freeliving answered 13/1, 2011 at 17:10 Comment(13)
Actually everything related to floating point is quite standard and not likely to change.Angulate
@Alexandre: I meant that today the caller uses foo(BAR), but tomorrow they might change it to foo(BAR * 1.0)Freeliving
@Cameron: thanks for the edit. Very nicely explained. I'm still wondering though as in the case I saw today in a third party lib, they were using a magic number static const which had a special meaning and their sample was indicating that one should check for that special value as value == MAGIC_NUMBER. I wonder even if it's not considered good programming if it's working in their case. What if the value compared to the const is in their control and they always assign that to MAGIC_NUMBER before calling into client code from their code? Do you understand what I mean?Tinsel
@Murrekatt: Yes, I understand. In that case it should all work OK, but it relies on good documentation to make sure that the caller uses the right constant. This is generally a fairly fragile approach - it's very easy to not read or to misread documentation. Even if it's all written by one developer, when you come back in six months and try to maintain the code it's likely to cause confusion (at best) and weird bugs (at worst). Of course, sometimes you need nasty hacks and this might be one of those cases.Freeliving
@Cameron: that was what I was guessing, because otherwise it would be very embarassing for the library developers to have something broken like that in there. Thanks for explaining it.Tinsel
@Alexandre: Remember that you really can't use == if you're dealing with NaN. By definition, NaN == <anything> is false, even if <anything> is also NaN. You need to use std::isnan to check if a value is NaN.Freeliving
@Cameron: you use x != x to check whether x is NaN.Angulate
@Alexandre: Yuck. I know that will work correctly, but still...yuck. But that's just my opinion :) In any case, the warning about == remains - you cannot say x == NaN and expect it to ever be true.Freeliving
@Cameron: unless you never use == except for if (x == x) !Angulate
Are there cases where float == float is acceptable coding? No - e.g. you mean the source code of libm is not acceptable? sourceware.org/git/?p=glibc.git;a=tree browse math or sysdeps/ieee_754Elea
I think the answer to (1) is "Yes, always". float == float returns true if and only if float equals float. That is, the behaviour of the expression is always correct. Otherwise, you are using a buggy compiler or the language would be wrongly defined.Patronize
@DanielDaranas: Returns true if they're both the same and aren't (both) NaN. A proper normal equality operator should test for an equivalence relation, at least when testing things of like type (IMHO, languages should reject usages of == with types where it can't test an equivalence relation). Alas, the only way a language can do that is to go against IEEE-754. Python was willing to do the right thing with integer divide/modulus even though many other languages don't, but even Python implemented the IEEE-style broken == operator.Rectrix
@DanielDaranas: The most logical reason to test floating-point values for exact equality is to say whether doing a computation on one value will yield the same result was achieved by doing it on a possibly-different value. If the values are equivalent, there's no point repeating the computation. Unfortunately, the IEEE == operation is useless for that.Rectrix

Yes, you are guaranteed that whole numbers, including 0.0, compare exactly with ==.

Of course you have to be a little careful about how you got the whole number in the first place: assignment is safe, but the result of any calculation is suspect.

P.S. There is a set of real numbers that do have an exact representation as a float (think of 1/2, 1/4, 1/8, etc.), but you probably don't know in advance that you have one of these.

Just to clarify: it is guaranteed by IEEE 754 that float representations of integers (whole numbers) within range are exact.

float a=1.0;
float b=1.0;
a==b  // true

But you have to be careful how you get the whole numbers

float a=1.0/3.0;
a*3.0 == 1.0  // not true !!
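
To make the "within range" caveat concrete, here is a small sketch (IEEE doubles assumed; 2^53 is where a double stops being able to represent every integer):

#include <cmath>
#include <cstdio>

int main() {
    // Small whole numbers, and sums/products of them, are exact:
    std::printf("%d\n", 1.0 + 2.0 == 3.0);      // prints 1
    std::printf("%d\n", 10.0 * 10.0 == 100.0);  // prints 1

    // Above 2^53 a double can no longer represent every integer,
    // so adding 1 can be lost entirely:
    double big = std::pow(2.0, 53);             // 9007199254740992
    std::printf("%d\n", big + 1.0 == big);      // prints 1 (!)
    return 0;
}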
Systematist answered 13/1, 2011 at 17:8 Comment(15)
+1: Your answer is almost perfect - you need "Of course" instead of "of course" ;)Jovanjove
To be fair, the guarantees and behaviour for whole numbers are no different than for any other values.Conglobate
@Martin Beckett: can you expand on that with whole numbers and 0.0 please? I'm not asking if it sometimes will work, rather if it works (all the time and makes sense).Tinsel
i would add that the "set of real numbers" consist of numbers that can be represented as finite sum of 1/2^n and max n should be less than number of bits in mantis.Uriisa
@Tinsel clarified the answer a little, 'makes sense' is tricky - I would be careful where I used it just to avoid confusion.Systematist
@Martin: I'm afraid your edits don't clarify very much. Again, there are no special guarantees for integers.Conglobate
@Martin: I know that's highly subjective. Didn't know how to express myself. So you're saying that any float that is a whole number will compare fine. Also a=1.0 and b=1.0000? What about a static const double compared to itself? In my example above, through foo(BAR);Tinsel
@murrekatt, yes 1.0==1.0000 but in general you can't assume any fraction eg 3.14 has an exact representation. @Oli true but most integers upto (1 + ceiling(p×log10 2)) have an exact representationSystematist
@Martin: I guess the distinction you're making (vs. e.g. non-integers) is in the context of decimal literals in source code? I would agree that these are always representable (which isn't true of arbitrary fractional values in decimal). But comparisons with literals form a small subset of all possible comparisons! If you make this clear in your answer (specifically, "literals"), I'll remove my downvote.Conglobate
@Martin: I see. What do you mean with "most integers upto"...?Tinsel
IEEE also guarantees that some operations are accurate to 0.5 ulp. If division is one of them, then your a/3.0 example is OK, since the double value closest to the "mathematical value" is exactly 1.0, and hence this must be the result of the division. But this is of course all assuming IEEE floating point, not just generic C++.Systematic
@Martin Beckett: float a=3.0; a/3.0 == 1.0 // not true !!!` I think you'll find that this is true on any compiler you care to test it on. Perhaps you're thinking of: float a = 1/3.0 ; 3.0 * a == 1.0 ;Garey
@Garey - yes I over simplified it to make the point that an 'integer' result of a calcualtion isn't necessarily an integer. Will edit the answerSystematist
Your "within range" sounds sloppy: 2^53+1 is also within the range (i.e. between min and max), but can't be represented exactly.Lowery
Just no. Even if you are super-careful and use it in only in ways that ensure predictable results, think about the poor next-generation developers who won't know about mantissas and exponents, and will learn bad habits from such examples. I would only ever use float==0.0, and then only with an explanatory comment about the dangers of using float == xxxx in other cases.Anibalanica

The other answers explain quite well why using == for floating point numbers is dangerous. I just found one example that illustrates these dangers quite well, I believe.

On the x86 platform, you can get weird floating point results for some calculations, which are not due to rounding problems inherent to the calculations you perform. This simple C program will sometimes print "error":

#include <stdio.h>

void test(double x, double y)
{
  const double y2 = x + 1.0;
  if (y != y2)
    printf("error\n");
}

int main(void)
{
  const double x = .012;
  const double y = x + 1.0;

  test(x, y);
}

The program essentially just calculates

x = 0.012 + 1.0;
y = 0.012 + 1.0;

(only spread across two functions and with intermediate variables), but the comparison can still yield false!

The reason is that on the x86 platform, programs usually use the x87 FPU for floating point calculations. The x87 internally calculates with a higher precision than regular double, so double values need to be rounded when they are stored in memory. That means that a roundtrip x87 -> RAM -> x87 loses precision, and thus calculation results differ depending on whether intermediate results were passed via RAM or stayed in FPU registers the whole time. This is of course a compiler decision, so the bug only manifests for certain compilers and optimization settings :-(.

For details see the GCC bug: http://gcc.gnu.org/bugzilla/show_bug.cgi?id=323

Rather scary...

Additional note:

Bugs of this kind will generally be quite tricky to debug, because the different values become the same once they hit RAM.

So if, for example, you extend the above program to actually print out the bit patterns of y and y2 right after comparing them, you will get the exact same value. To print the value, it has to be stored to RAM to be passed to a print function like printf, and that store rounds away the difference...

Radford answered 10/2, 2011 at 11:12 Comment(7)
Actually, I haven't seen a compiler recently (in the last ten years) that use the x87 FPU for single precision and double precision calculations.Graybeard
@gnasher729: So what does it use then? Calculating all FP ops in software would be too slow, wouldn't it? Is there another FPU in modern CPUs?Radford
Bug 323 is a bug. It was fixed several versions ago (use -std=c99 to get FLT_EVAL_METHOD==2 semantics with a modern GCC). Or, for a workaround, just use SSE2: it has been available for ten years now.Equipollent
@PascalCuoq: Yes, it seems this problem has been fixed. Still, I find it a nice illustration of the unexpected problems you can run into if you naively compare floating point numbers for equality. Note that (if I understand correctly) it took fixes both in the C standard (incorporated into C99) and in gcc to avoid this particular problem.Radford
@Radford The usual story is that a standard exists in fuzzy, informal language, compilers take liberties with it, and the liberties get written down as standard in the next revision so that people know where to stand. See also INT_MIN % -1 in C99 and C11. Bug 323 per se did not require any fix to the C99 standard, since it was reported in 2000.Equipollent
@sleske: SSE2 registers. Available since the very first version of the Pentium 4 processor, released in 2001. That's a bit over thirteen years ago.Graybeard
@Graybeard on 32-bit x86, gcc doesn't by default enable even -msse, let alone -msse2. Thus it can't by default have -mfpmath=sse. On x86_64, OTOH, it does exactly this, since the architecture guarantees SSE2 support.Lowery

I'll provide a more-or-less real example of legitimate, meaningful and useful testing for float equality.

#include <stdio.h>
#include <math.h>

/* let's try to numerically solve a simple equation F(x)=0 */
double F(double x) {
    return 2 * cos(x) - pow(1.2, x);
}

/* a well-known, simple & slow but extremely smart method to do this */
double bisection(double range_start, double range_end) {
    double a = range_start;
    double d = range_end - range_start;
    int counter = 0;
    while (a != a + d) // <-- WHOA!!
    {
        d /= 2.0;
        if (F(a) * F(a + d) > 0) /* test for same sign */
            a = a + d;
    
        ++counter;
    }
    printf("%d iterations done\n", counter);
    return a;
}

int main() {
    /* we must be sure that the root can be found in [0.0, 2.0] */
    printf("F(0.0)=%.17f, F(2.0)=%.17f\n", F(0.0), F(2.0));

    double x = bisection(0.0, 2.0);

    printf("the root is near %.17f, F(%.17f)=%.17f\n", x, x, F(x));
}

I'd rather not explain the bisection method itself, but emphasize the stopping condition. It has exactly the discussed form: (a == a+d), where both sides are floats: a is our current approximation of the equation's root, and d is our current precision. Given the precondition of the algorithm — that there must be a root between range_start and range_end — we guarantee on every iteration that the root stays between a and a+d while d is halved every step, shrinking the bounds.

And then, after a number of iterations, d becomes so small that when it is added to a it gets rounded away! That is, a+d turns out to be closer to a than to any other float, and so the FPU rounds the sum to the closest representable value: to a itself. A calculation on a hypothetical machine can illustrate this: let it have a 4-digit decimal mantissa and some large exponent range. Then what result should the machine give for 2.131e+02 + 7.000e-3? The exact answer is 213.107, but our machine can't represent that number; it has to round it. And 213.107 is much closer to 213.1 than to 213.2 — so the rounded result becomes 2.131e+02: the little summand has vanished. Exactly the same is guaranteed to happen at some iteration of our algorithm — and at that point we can't continue anymore. We have found the root to the maximum possible precision.
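
The same effect is easy to see directly in double precision (a minimal sketch, separate from the bisection code above):

#include <cstdio>

int main() {
    double a = 1.0;
    double d = 1e-17;                 // smaller than half an ulp of 1.0 (~1.1e-16)
    std::printf("%d\n", a + d == a);  // prints 1: the summand is rounded away
    return 0;
}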


Addendum

No, you can't just use "some small number" in the stopping condition. For any choice of that number, some inputs will deem your choice too large, causing loss of precision, and there will be inputs that will deem your choice too small, causing excess iterations or even an infinite loop. Imagine that our F can change, and suddenly the solutions can be both huge, like 1.0042e+50, and tiny, like 1.0098e-70. Detailed discussion follows.

Calculus has no notion of a "small number": for any real number, you can find infinitely many even smaller ones. The problem is, among those "even smaller" ones might be a root of our equation. Even worse, some equations will have distinct roots (e.g. 2.51e-8 and 1.38e-8) — both of which will get approximated by the same answer if our stopping condition looks like d < 1e-6. Whichever "small number" you choose, many roots which would've been found correctly to the maximum precision with a == a+d — will get spoiled by the "epsilon" being too large.

It's true, however, that a float's exponent has a finite range, so one actually can find the smallest nonzero positive FP number; in IEEE 754 single precision it's the denormal of roughly 1.4e-45. But it's useless! while (d >= 1e-45) {…} will loop forever with single-precision (positive nonzero) d.

At the same time, any choice of the "small number" in a d < eps stopping condition will be too small for many equations. Where the root has a high enough exponent, the difference between two neighbouring floats will easily exceed our "epsilon". For example, 7.00023e+8 - 7.00022e+8 = 0.00001e+8 = 1.00000e+3 = 1000 — meaning that the smallest possible nonzero difference between numbers with exponent +8 and a 6-digit mantissa is... 1000! It will never get below, say, 1e-4. For numbers with a relatively high exponent we simply do not have enough precision to ever see a difference of 1e-4. This means eps = 1e-4 will be too small!
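
The same is true of IEEE doubles, not just the hypothetical 6-digit machine (a small sketch):

#include <cmath>
#include <cstdio>

int main() {
    double a = 1.0e12;                              // a root with a large exponent
    double gap = std::nextafter(a, INFINITY) - a;   // distance to the next representable double
    std::printf("%.17g\n", gap);                    // about 0.000122: already larger than eps = 1e-4
    return 0;
}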

My implementation above took this last problem into account: you can see that d is halved each step, instead of being recalculated as the difference of (possibly huge in exponent) a and b. For reals it doesn't matter; for floats it does! The algorithm will get into infinite loops with a (b-a) < eps stopping condition on equations with large enough roots. The previous paragraph shows why. A d < eps condition won't get stuck, but even then needless iterations will be performed while d shrinks way below the precision of a — still showing the choice of eps as too small. But a == a+d will stop at exactly the attainable precision.

Thus, as shown: any choice of eps in a d < eps stopping condition will be both too large and too small, if we allow F to vary.

... This kind of reasoning may seem overly theoretical and needlessly deep, but it illustrates, again, the trickiness of floats. One should be aware of their finite precision when writing code that compares them.

Iapetus answered 9/2, 2011 at 22:9 Comment(5)
Could you not just test that d is less than some small number?Tinsel
I'm not saying I'd ever advocate using floating-point == like ever, but this is a well-reasoned, well-thought out answer and I gave it a +1.Printable
@Tinsel please see update. It shows how the algorithm would've had problems with a fixed small number used in the stopping condition.Iapetus
Yes, the test a != a+d works here. I would have still used a < a+d, though.Alamein
@KaiPetzke that's only syntactically different 😁 As a termination condition, a < a+d still relies on a == a+d becoming true eventually. That never becomes true in calculus reals!.. But it does in floats. It's good to know. Thanks for the pingback by the way, I took the opportunity to refactor/copyedit the fourteenth revision of the answer, haha... Hopefully a bit more readable now.Iapetus

Perfect for integral values even in floating point formats

But the short answer is: "No, don't use ==."

Ironically, the floating point format works "perfectly", i.e., with exact precision, when operating on integral values within the range of the format. This means that if you stick with double values, you get perfectly good integers of up to 53 bits, giving you about ±9,000,000,000,000,000, or 9 quadrillion.

In fact, this is how JavaScript works internally, and it's why JavaScript can do things like + and - on really big numbers, but can only << and >> on 32-bit ones.

Strictly speaking, you can also exactly compare sums and products of numbers with precise representations. Those would be all the integers, plus fractions composed of 1/2^n terms. So a loop stepping through n + 0.25, n + 0.50, or n + 0.75 would be fine, but not one using any of the other 96 two-digit decimal fractions.
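
For example (a quick sketch; the output assumes standard IEEE doubles):

#include <cstdio>

int main() {
    double x = 0.0;
    for (int i = 0; i < 8; ++i) x += 0.25;   // 0.25 = 1/4 is exactly representable
    std::printf("%d\n", x == 2.0);           // prints 1

    double y = 0.0;
    for (int i = 0; i < 10; ++i) y += 0.1;   // 0.1 is not exactly representable
    std::printf("%d\n", y == 1.0);           // prints 0
    return 0;
}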

So the answer is: while exact equality can in theory make sense in narrow cases, it is best avoided.

Jilly answered 13/1, 2011 at 17:34 Comment(5)
Obviously, it's also perfect for, e.g. values that can be expressed as integers within range, divided by a power of 2. The (slightly facetious) conclusion: In other words, floats are perfect for values that may be expressed as floats.Conglobate
but if you went too high, wouldn't you get something weird like 1,000,000,000,000,000,000,000,001 == 1,000,000,000,000,000,000,000,000 returning true?Carrollcarronade
So? If you go too high with int's they wrap around, which is even worse.Jilly
@Oli. Heh, nicely put. I've updated my answer to include fractions.Jilly
I think it's one of those questions where the answer is "if you have to ask then don't do it". If you know what you are doing, on typical implementations today the == operator will produce results according to the IEEE 754 standard, which means x == y if both are the same and not NaN, or if one is +0 and the other is -0.Graybeard

The only case where I ever use == (or !=) for floats is in the following:

if (x != x)
{
    // Here x is guaranteed to be Not a Number
}

and I must admit I am guilty of using Not A Number as a magic floating point constant (using numeric_limits<double>::quiet_NaN() in C++).
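
A small sketch of the two idioms (C++11 assumed for std::isnan):

#include <cmath>
#include <cstdio>
#include <limits>

int main() {
    double magic = std::numeric_limits<double>::quiet_NaN();
    std::printf("%d\n", magic == magic);          // prints 0: NaN compares unequal to everything
    std::printf("%d\n", magic != magic);          // prints 1: the x != x trick
    std::printf("%d\n", (int)std::isnan(magic));  // prints 1: the more readable alternative
    return 0;
}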

There is no point in comparing floating point numbers for strict equality. Floating point numbers have been designed with predictable relative accuracy limits. You are responsible for knowing what precision to expect from them and your algorithms.

Angulate answered 13/1, 2011 at 17:12 Comment(5)
Disagree: Fixed-point is susceptible to "built-in inaccuracy" too. In both cases, it's up to the programmer to ensure they're doing something sane. There will be plenty of cases where it's perfectly stable to check for equality. Of course, there will be many more cases where it's not.Conglobate
@Oli: I did not mention fixed point (note that sometimes, fixed point is what you need: money accounts are one example). Each time I have considered comparing floats for equality, it ended up being a design mistake. I really see no use for the equality operator for floats. If I were a language designer, I'd remove it because of all the beginner mistakes (and related questions on SO) that it causes.Angulate
@Alexandre: True, you didn't. But your answer implies that floating-point is somehow unique in having limitations!Conglobate
Fixed point is 100% precise as long as the operation doesn't require more precision than is available. Always! Floating point does not follow this rule. For example you can add or subtract fixed point numbers of the same precision any number of times, as long as you don't overflow the integer portion of the fixed number. Float will have some amount of error after the first operation -.-Meganmeganthropus
@Jimbo: That is what I meant to write. When you add N (random) floats of same magnitude and same sign, you lose N * epsilon (worst case), or optimistically sqrt(N) * epsilon, which can mean a lot in real world applications. And I did not mention summing numbers of different magnitude, or different sign. You sometimes avoid such problems by using fixed point or decimal numbers.Angulate

It's probably OK if you're never going to calculate the value before you compare it. If you are testing whether a floating point number is exactly pi, or -1, or 1, and you know those are the only values being passed in...
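
For instance, a library might use a sentinel value that is only ever assigned, never computed (the names below are made up purely for illustration):

// Hypothetical sentinel meaning "no timeout was set".
static const double NO_TIMEOUT = -1.0;

bool isTimeoutSet(double timeoutSeconds) {
    // OK: callers pass the constant itself, never the result of a calculation.
    return timeoutSeconds != NO_TIMEOUT;
}

int main() {
    return isTimeoutSet(NO_TIMEOUT) ? 1 : 0;   // returns 0: the sentinel round-trips exactly
}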

Eosinophil answered 13/1, 2011 at 17:9 Comment(1)
+1 Yes, after all the information posted here I realize that you're right, and this seems to be exactly what the third-party library is relying on to have == work.Tinsel

I have also used it a few times when rewriting algorithms as multithreaded versions. I used a test that compared the results of the single-threaded and multithreaded versions to be sure that both of them give exactly the same result.

Bowls answered 17/1, 2011 at 0:34 Comment(0)

Let's say you have a function that scales an array of floats by a constant factor:

void scale(float factor, float *vector, int extent) {
   int i;
   for (i = 0; i < extent; ++i) {
      vector[i] *= factor;
   }
}

I'll assume that your floating point implementation can represent 1.0 and 0.0 exactly, and that 0.0 is represented by all 0 bits.

If factor is exactly 1.0 then this function is a no-op, and you can return without doing any work. If factor is exactly 0.0 then this can be implemented with a call to memset, which will likely be faster than performing the floating point multiplications individually.
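
A sketch of what that short-circuiting might look like (ignoring NaN, infinity and signed-zero inputs, for which multiplying by 0.0 is not quite the same as overwriting with zero):

#include <cstring>

void scale(float factor, float *vector, int extent) {
   if (factor == 1.0f)
      return;                                           // multiplying by exactly 1.0 changes nothing
   if (factor == 0.0f) {
      std::memset(vector, 0, extent * sizeof *vector);  // all-zero bits are 0.0f, as assumed above
      return;
   }
   for (int i = 0; i < extent; ++i)
      vector[i] *= factor;
}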

The reference implementation of BLAS functions at netlib uses such techniques extensively.

Shipwreck answered 29/4, 2011 at 21:7 Comment(0)

In my opinion, comparing for equality (or some equivalence) is a requirement in most situations: standard C++ containers or algorithms with an implied equality comparison functor, like std::unordered_set for example, require that this comparator be an equivalence relation (see C++ named requirements: UnorderedAssociativeContainer).

Unfortunately, comparing with an epsilon, as in abs(a - b) < epsilon, does not yield an equivalence relation, since it is not transitive. Using such a comparator with an unordered container violates the container's requirements: two "almost equal" floating point numbers could yield different hashes, which can put the unordered_set in an invalid state. Personally, I would use == for floating point most of the time, unless any kind of FPU computation is involved on any of the operands. With containers and container algorithms, where only reads and writes are involved, == (or any equivalence relation) is the safest.
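
To see why the epsilon comparison is not transitive, here is a minimal sketch (the helper name and threshold are made up for illustration):

#include <cmath>
#include <cstdio>

// "Almost equal" with a fixed epsilon.
bool almostEqual(double a, double b) { return std::fabs(a - b) < 0.001; }

int main() {
    double a = 0.0, b = 0.0009, c = 0.0018;
    std::printf("%d %d %d\n",
                almostEqual(a, b),    // 1
                almostEqual(b, c),    // 1
                almostEqual(a, c));   // 0: a ~ b and b ~ c, but not a ~ c
    return 0;
}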

abs(a - b) < epsilon is more or less a convergence criterion, similar to a limit. I find this relation useful if I need to verify that a mathematical identity holds between two computations (for example PV = nRT, or distance = time * speed).

In short, use == if and only if no floating point computation occurs; never use abs(a - b) < e as an equality predicate.

Recrimination answered 30/6, 2017 at 20:42 Comment(0)

Yes. 1/x will be valid unless x == 0. You don't need an imprecise test here. 1/0.00000001 is perfectly fine. I can't think of any other case - you can't even check tan(x) for x == PI/2, since PI/2 has no exact floating-point representation.

Genova answered 14/1, 2011 at 10:4 Comment(3)
What about gradual underflow? did you try with double x=1.0e-320 and IEEE 754 machine?Elea
@aka.nice: IEEE754 is remarkably well defined. The range is slightly off-center, but unlike 2s complement, it has an extra positive value +1023. 1.0 / 2^-1022 is 2^1022, representable, and 1.0/2^1023 is 0, representable (and an underflow)Genova
I was talking 2^-1025 which is remarkably well defined (denormalized/gradual underflow) but will be inverted with an overflow, so the protection x==0 is generally not enough.Elea

The other posts show where it is appropriate. I think using bit-exact compares to avoid needless calculation is also okay.

Example:

float someFunction (float argument)
{
  static bool  haveCache = false;  // no result cached yet
  static float lastargument;       // input that produced the cached result
  static float cachedValue;        // cached result

  // I really want bit-exact comparison here!
  if (!haveCache || argument != lastargument)
  {
    lastargument = argument;
    cachedValue = very_expensive_calculation (argument);
    haveCache = true;
  }

  return cachedValue;
}
Glowworm answered 17/7, 2011 at 0:26 Comment(0)

I would say that comparing floats for equality would be OK if a false-negative answer is acceptable.

Assume, for example, that you have a program that prints floating point values to the screen, and that if a value happens to be exactly equal to M_PI, you would like it to print out "pi" instead. If the value deviates a tiny bit from the exact double representation of M_PI, it will print out the double value instead, which is equally valid, but a little less readable to the user.
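
A sketch of that idea (M_PI assumed to be provided by <cmath>, as it is on most platforms):

#include <cmath>
#include <cstdio>

// Prints "pi" when the value is exactly the double closest to pi;
// a false negative just means we print the digits instead.
void printValue(double value) {
    if (value == M_PI)
        std::printf("pi\n");
    else
        std::printf("%.17g\n", value);
}

int main() {
    printValue(M_PI);             // prints "pi"
    printValue(M_PI / 3.0 * 3.0); // may print digits instead; that is still correct output
    return 0;
}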

Mallorie answered 17/2, 2012 at 13:49 Comment(0)

I have a drawing program that fundamentally uses a floating point for its coordinate system since the user is allowed to work at any granularity/zoom. The thing they are drawing contains lines that can be bent at points created by them. When they drag one point on top of another they're merged.

In order to do "proper" floating point comparison I'd have to come up with some range within which to consider the points the same. Since the user can zoom in to infinity and work within that range and since I couldn't get anyone to commit to some sort of range, we just use '==' to see if the points are the same. Occasionally there'll be an issue where points that are supposed to be exactly the same are off by .000000000001 or something (especially around 0,0) but usually it works just fine. It's supposed to be hard to merge points without the snap turned on anyway...or at least that's how the original version worked.

It throws off the testing group occasionally, but that's their problem :p

So anyway, there's an example of a possibly reasonable time to use '=='. The thing to note is that the decision is less about technical accuracy than about client wishes (or lack thereof) and convenience. It's not something that needs to be all that accurate anyway. So what if two points won't merge when you expect them to? It's not the end of the world and won't affect 'calculations'.

Banas answered 13/1, 2011 at 18:14 Comment(5)
The scale that you use to determine "sameness" should vary depending on the current zoom level. If the current pixel distance is 1.0 then maybe you snap within a margin of 2.5f, if it's 0.01, then you snap at 0.025... The way you have described it working right now would drive me crazy!Meganmeganthropus
@Meganmeganthropus - It's not what the client wants and the program works just fine. Your proposal would make the program nearly unusable.Banas
It could snap at the exact pixel location, since you're doing the comparison by the current pixel's "size" I was just illustrating that you could have done it in a way that made it always work instead of just the rare case when the float is exactly the same (almost never?)Meganmeganthropus
You don't know what you're talking about, Jumbo.Banas
This is exactly the reason why the maximum allowed zoom in a well-thought-out application is usually capped to something realistic, which leaves enough floating point accuracy!Angulate
