Manipulating and comparing floating point values in Java

9

10

In Java, floating point values are not represented precisely. For example, this Java code:

float a = 1.2;
float b = 3.0;
float c = a * b;
if (c == 3.6) {
    System.out.println("c is 3.6");
} else {
    System.out.println("c is not 3.6");
}

Prints "c is not 3.6".

I'm not interested in precision beyond 3 decimal places (#.###). How can I deal with this problem so that I can multiply floats and compare them reliably?

Aleece answered 24/5, 2010 at 9:42 Comment(3)
Declare floats like float a = 1.2f; and doubles like double d = 1.2d;. Also use the suffix in your if statement: if (c == 3.6f)Deranged
In addition to @bobah's answer, I recommend looking at the Math.ulp() function.Lip
Use BigDecimal for float and double manipulations. See link.Boys
21

It's a general rule that floating point numbers should never be compared like (a == b), but rather like (Math.abs(a - b) < delta), where delta is a small number.

A floating point value that has a fixed number of digits in decimal form does not necessarily have a fixed number of digits in binary form.

Addition for clarity:

Though strict == comparison of floating point numbers makes very little practical sense, strict < and > comparisons, on the contrary, are a valid use case (for example, logic that triggers when a certain value exceeds a threshold: (val > threshold) && panic();).
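
A minimal sketch of both patterns, using the question's numbers; the delta and threshold values here are arbitrary, chosen only for illustration:

float a = 1.2f;
float b = 3.0f;
float c = a * b;
float delta = 1e-6f;   // pick a tolerance that suits your data

// tolerance-based "equality"
if (Math.abs(c - 3.6f) < delta) {
    System.out.println("c is close enough to 3.6");
}

// strict threshold check needs no tolerance
float threshold = 3.5f;
if (c > threshold) {
    System.out.println("c exceeds the threshold");
}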

Exempt answered 24/5, 2010 at 9:51 Comment(11)
Recommending comparing using a tolerance is inappropriate advice because it decreases false reports of inequality at the expense of increasing false reports of equality, and you cannot know whether that is acceptable to an application you know nothing about. The application might be “more interested” in seeking inequality than seeking equality or might have other specifications it needs to meet.Unattached
@Eric - When working with floating point numbers there is no notion of identity or inequality, there is only a notion of distance. If in the formula I gave in the answer you replace < with >, you get a criterion for comparing floating point numbers for inequality in terms of distance. Bitwise identity of floating point numbers' representation in computer memory is of no interest to most practical applications.Exempt
This is not about the bits representing a number; it is about their values. Floating-point arithmetic does have equality. The IEEE 754 standard defines floating-point objects to represent specific numbers exactly, not to represent intervals.Unattached
And a real life example is?Exempt
A real life example of what?Unattached
A real life example where you need to compare double precision floating point numbers with ==; I do not know of any (apart from CPU manufacturer tests and the like)Exempt
You are examining a damped oscillator and want to distinguish underdamping, overdamping, and critical damping. This requires a strict test, with no tolerance. Allowing a tolerance would lead to taking the square root of a negative number. However, in spite of this example, your request is a straw man. Advising not to compare with a tolerance does not imply comparing for exact equality, because there are other options. For example, one possibility is to avoid using a comparison at all; just report the best result available without attempting to force it to a quantized result.Unattached
Regardless of any examples, there is a fundamental problem in advising people to compare using a tolerance. It increases false reports of equality, and, because you do not know the application, you cannot know whether this is acceptable or is a problem.Unattached
let us continue this discussion in chatExempt
Accurate floating point comparison requires a profound understanding of the IEEE 754 standard. A good tutorial for that is at randomascii.wordpress.com/2012/02/25/…Metaphysics
"accurate comparison" - is a meaningless term, it cannot be quantified. I think I know IEEE754 well, the answer I gave precisely answers the question of the topic, it is compact and unambiguous. Your comment, on the contrary, is so general that it is almost an offtopic.Exempt
7

If you are interested in fixed precision numbers, you should be using a fixed precision type like BigDecimal, not an inherently approximate (though high precision) type like float. There are numerous similar questions on Stack Overflow that go into this in more detail, across many languages.
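
For example, a minimal sketch using the numbers from the question (String constructors keep the decimal values exact; assumes java.math.BigDecimal is imported):

BigDecimal a = new BigDecimal("1.2");   // exact: built from the decimal string
BigDecimal b = new BigDecimal("3.0");
BigDecimal c = a.multiply(b);           // exactly 3.60

// compareTo ignores scale, so 3.60 and 3.6 compare as numerically equal
System.out.println(c.compareTo(new BigDecimal("3.6")) == 0);   // prints true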

Cheviot answered 24/5, 2010 at 9:43 Comment(0)
7

I think it has nothing to do with Java; it happens with any IEEE 754 floating point number. It is because of the nature of floating point representation. Any language that uses the IEEE 754 format will encounter the same problem.

As suggested by David above, you should use the abs method of the java.lang.Math class to get the absolute value (drop the positive/negative sign).

You can read this: http://en.wikipedia.org/wiki/IEEE_754_revision; a good numerical methods textbook will also address the problem sufficiently.

public static void main(String[] args) {
    float a = 1.2f;
    float b = 3.0f;
    float c = a * b;
    final float PRECISION_LEVEL = 0.001f;
    if(Math.abs(c - 3.6f) < PRECISION_LEVEL) {
        System.out.println("c is 3.6");
    } else {
        System.out.println("c is not 3.6");
    }
}
Hitormiss answered 24/5, 2010 at 10:11 Comment(0)
3

I’m using this bit of code in unit tests to check whether the outcomes of 2 different calculations are the same, barring floating point math errors.

It works by looking at the binary representation of the floating point number. Most of the complication is due to the fact that the sign of floating point numbers is not two’s complement. After compensating for that it basically comes down to just a simple subtraction to get the difference in ULPs (explained in the comment below).

/**
 * Compare two floating points for equality within a margin of error.
 * 
 * This can be used to compensate for inequality caused by accumulated
 * floating point math errors.
 * 
 * The error margin is specified in ULPs (units of least precision).
 * A one-ULP difference means there are no representable floats in between.
 * E.g. 0f and 1.4e-45f are one ULP apart. So are -6.1340704f and -6.13407f.
 * Depending on the number of calculations involved, typically a margin of
 * 1-5 ULPs should be enough.
 * 
 * @param expected The expected value.
 * @param actual The actual value.
 * @param maxUlps The maximum difference in ULPs.
 * @return Whether they are equal or not.
 */
public static boolean compareFloatEquals(float expected, float actual, int maxUlps) {
    int expectedRaw = Float.floatToIntBits(expected);
    int actualRaw = Float.floatToIntBits(actual);
    // Map the sign-magnitude bit patterns onto a monotonically ordered integer scale
    int expectedBits = expectedRaw < 0 ? 0x80000000 - expectedRaw : expectedRaw;
    int actualBits = actualRaw < 0 ? 0x80000000 - actualRaw : actualRaw;
    int difference = expectedBits > actualBits ? expectedBits - actualBits : actualBits - expectedBits;

    return !Float.isNaN(expected) && !Float.isNaN(actual) && difference <= maxUlps;
}

Here is a version for double precision floats:

/**
 * Compare two double precision floats for equality within a margin of error.
 * 
 * @param expected The expected value.
 * @param actual The actual value.
 * @param maxUlps The maximum difference in ULPs.
 * @return Whether they are equal or not.
 * @see Utils#compareFloatEquals(float, float, int)
 */
public static boolean compareDoubleEquals(double expected, double actual, long maxUlps) {
    long expectedRaw = Double.doubleToLongBits(expected);
    long actualRaw = Double.doubleToLongBits(actual);
    // Map the sign-magnitude bit patterns onto a monotonically ordered long scale
    long expectedBits = expectedRaw < 0 ? 0x8000000000000000L - expectedRaw : expectedRaw;
    long actualBits = actualRaw < 0 ? 0x8000000000000000L - actualRaw : actualRaw;
    long difference = expectedBits > actualBits ? expectedBits - actualBits : actualBits - expectedBits;

    return !Double.isNaN(expected) && !Double.isNaN(actual) && difference <= maxUlps;
}
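
For the values from the question, the product 1.2f * 3.0f lands exactly one representable float above the literal 3.6f, so a check like this should report equality:

System.out.println(compareFloatEquals(1.2f * 3.0f, 3.6f, 1));   // expected: true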
Overfeed answered 26/7, 2013 at 1:20 Comment(1)
You may also consider using Float.floatToRawIntBits() and checking for NaN at the beginning of your method. In fact, floatToIntBits() does nothing but check the argument for NaN, replacing it with the pre-defined integer value 0x7fc00000. The main reason for doing such a thing is the fact that floatToIntBits() actually calls floatToRawIntBits(), making it slower to execute. The other approach is to check the converted bits for 0x7fc00000, but you don't need both checks.Clishmaclaver
2

This is a weakness of all floating point representations, and it happens because some numbers that appear to have a fixed number of digits in the decimal system actually have an infinite number of digits in the binary system. So what you write as 1.2 is stored as the nearest binary value instead, something like 1.20000005 for a float (or 1.19999999999999996 for a double), because the binary expansion has to be cut off after a certain number of bits, and you lose some precision. Multiplying that by 3 then gives something like 3.6000001 rather than exactly 3.6.

http://docs.python.org/py3k/tutorial/floatingpoint.html <- this might explain it better (even though it's for Python, it's a problem common to all floating point representations)
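
You can see the exact values Java actually stores by passing them through the BigDecimal(double) constructor, which preserves the binary value digit for digit (a small sketch; assumes java.math.BigDecimal is imported):

System.out.println(new BigDecimal(1.2f));        // 1.2000000476837158203125
System.out.println(new BigDecimal(1.2));         // 1.1999999999999999555910790149937383830547332763671875
System.out.println(new BigDecimal(1.2f * 3.0f)); // 3.6000001430511474609375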

Singly answered 24/5, 2010 at 10:14 Comment(1)
+1 - all finite precision floating point number systems suffer from this problem. No matter what base you choose, some rationals cannot be represented exactly.Edelsten
2

As the others wrote:

Compare floats with: if (Math.abs(a - b) < delta)

You can write a nice method for doing this:

public static int compareFloats(float f1, float f2, float delta)
{
    if (Math.abs(f1 - f2) < delta)
    {
        return 0;
    }
    else if (f1 < f2)
    {
        return -1;
    }
    else
    {
        return 1;
    }
}

/**
 * Uses <code>0.001f</code> for delta.
 */
public static int compareFloats(float f1, float f2)
{
     return compareFloats(f1, f2, 0.001f);
}

So, you can use it like this:

if (compareFloats(a * b, 3.6f) == 0)
{
    System.out.println("They are equal");
}
else
{
    System.out.println("They aren't equal");
}
Deranged answered 24/5, 2010 at 10:46 Comment(0)
2

There is an Apache Commons class for comparing doubles: org.apache.commons.math3.util.Precision

It contains some interesting constants: EPSILON (the largest double for which 1 + EPSILON is still numerically equal to 1) and SAFE_MIN (the smallest normal double whose reciprocal does not overflow), which characterize the rounding granularity of double arithmetic.

It also provides the necessary methods to compare doubles, test them for equality, or round them.
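
For example, a small sketch, assuming commons-math3 is on the classpath and org.apache.commons.math3.util.Precision is imported (the tolerance values are arbitrary):

double c = 1.2f * 3.0f;

// equality within an absolute tolerance
System.out.println(Precision.equals(c, 3.6, 0.001));      // true

// three-way comparison with the same tolerance: -1, 0 or 1
System.out.println(Precision.compareTo(c, 3.6, 0.001));   // 0

// round to 3 decimal places
System.out.println(Precision.round(c, 3));                // 3.6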

Narbada answered 19/5, 2015 at 11:24 Comment(0)
0

Rounding is a bad idea. Use BigDecimal and set its precision as needed. For example:

public static void main(String... args) {
    float a = 1.2f;
    float b = 3.0f;
    float c = a * b;
    BigDecimal a2 = BigDecimal.valueOf(a);
    BigDecimal b2 = BigDecimal.valueOf(b);
    BigDecimal c2 = a2.multiply(b2);
    BigDecimal a3 = a2.setScale(2, RoundingMode.HALF_UP);
    BigDecimal b3 = b2.setScale(2, RoundingMode.HALF_UP);
    BigDecimal c3 = a3.multiply(b3);
    BigDecimal c4 = a3.multiply(b3).setScale(2, RoundingMode.HALF_UP);

    System.out.println(c); // 3.6000001
    System.out.println(c2); // 3.60000014305114740
    System.out.println(c3); // 3.6000
    System.out.println(c == 3.6f); // false
    System.out.println(Float.compare(c, 3.6f) == 0); // false
    System.out.println(c2.compareTo(BigDecimal.valueOf(3.6f)) == 0); // false
    System.out.println(c3.compareTo(BigDecimal.valueOf(3.6f)) == 0); // false
    System.out.println(c3.compareTo(BigDecimal.valueOf(3.6f).setScale(2, RoundingMode.HALF_UP)) == 0); // true
    System.out.println(c3.compareTo(BigDecimal.valueOf(3.6f).setScale(9, RoundingMode.HALF_UP)) == 0); // false
    System.out.println(c4.compareTo(BigDecimal.valueOf(3.6f).setScale(2, RoundingMode.HALF_UP)) == 0); // true
}
Boys answered 20/11, 2019 at 13:21 Comment(0)
-1

To compare two floats, f1 and f2, to within a precision of #.###, I believe you would need to do something like this:

((int) (f1 * 1000 + 0.5)) == ((int) (f2 * 1000 + 0.5))

f1 * 1000 lifts 3.14159265... to 3141.59265, + 0.5 gives 3142.09265, and the (int) cast chops off the decimals, leaving 3142. That is, it keeps 3 decimal places and rounds the last digit properly.
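
A minimal sketch wrapping that expression in a helper (the method name is just for illustration; note that the +0.5 trick only rounds correctly for non-negative values, and see the comments below about values near a rounding boundary):

static boolean equalToThreeDecimals(float f1, float f2) {
    // scale to thousandths, add 0.5 to round to nearest, then truncate
    return ((int) (f1 * 1000 + 0.5)) == ((int) (f2 * 1000 + 0.5));
}

System.out.println(equalToThreeDecimals(1.2f * 3.0f, 3.6f));   // true: both round to 3600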

Dangle answered 24/5, 2010 at 10:12 Comment(2)
Comparing using an epsilon is better: consider what happens if f1 == 3.1414999999999 and f2 == 3.1415000000001.Contravallation
Shit. I thought I had it :-) Sure, I agree with you. Comparing using an epsilon is much better. But does it accurately compare two floats to their first 3 decimals?Dangle
