Intro:
With Java floats, I noticed that when you add 1.0 to any number in a certain range of tiny negative numbers, the result is exactly 1.0. I decided to investigate and learned a lot about how floats work along the way, but I ran into a weird wall. I've found that the bit representations of floats make the math clearest, so I'll be using those throughout.
tl;dr: it seems like the mantissa has 24 bits of precision (not counting the leading implicit 1) when adding/subtracting, instead of the expected 23. Or so the math and the code outputs suggest.
When you take 0b1_01100110_00000000000000000000000
(-1 × 2^-25 × 1.0) and add 0b0_01111111_00000000000000000000000
(1 × 2^0 × 1.0, the float bits for 1.0), the answer ends up being 1.0. The former is the largest-magnitude negative float that gives this strange answer; go the tiniest step further down with Math.nextDown()
(which gives 0b1_01100110_00000000000000000000001,
btw) and it no longer does. The whole range of numbers from that one up to -0.0f behaves like this.
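To make the boundary easy to check, here's a small self-contained sketch (the class and variable names are mine) that re-runs the additions with Math.nextDown():

```java
public class Boundary {
    public static void main(String[] args) {
        // -2^-25, the largest-magnitude negative float that still vanishes into 1.0
        float special = Float.intBitsToFloat(0b1_01100110_00000000000000000000000);
        // one step further down (more negative): 0b1_01100110_..._001
        float below = Math.nextDown(special);

        System.out.println(1.0f + special == 1.0f); // true: the special number vanishes
        System.out.println(1.0f + below == 1.0f);   // false: one step down survives
        System.out.println(1.0f + -0.0f == 1.0f);   // true: the top of the range
    }
}
```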
The math:
For this special number 0b1_01100110_00000000000000000000000,
the exponent -25 is the smaller of the two, so add 25 to it to match 1.0's exponent of 0, and shift the mantissa 25 places to the right to compensate. We end up with an exponent of 01111111
and a mantissa of 0.00000000000000000000000[01]
. The implicit 1 has moved 25 places to the right, which is why I'm now writing it explicitly as 0. Since the stored mantissa can only be 23 bits long, the portion in the []'s is lost, and the new mantissa is truncated to 0.00000000000000000000000
. Now, when we subtract 0.00000000000000000000000
(the special number's new mantissa) from 1.00000000000000000000000
(1.0's mantissa), we simply get 1.0's mantissa back (1 - 0 = 1). Put it all together and you get 0b0_01111111_00000000000000000000000
, which is just 1.0.
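The align-and-truncate steps above can be mimicked with plain integer arithmetic on the 24-bit significands (a sketch; the variable names are my own):

```java
public class Align {
    public static void main(String[] args) {
        // 24-bit significands with the implicit leading 1 written explicitly
        int sigOne = 1 << 23;  // 1.00000000000000000000000 for 1.0 (exponent 0)
        int sigTiny = 1 << 23; // 1.00000000000000000000000 for -2^-25 (exponent -25)

        // align: shift the smaller-exponent significand right by the exponent gap of 25
        int aligned = sigTiny >> 25; // 0: every set bit falls off the 24-bit window

        // subtract the aligned magnitude; nothing changes, so the sum is exactly 1.0
        int result = sigOne - aligned;
        System.out.println(result == sigOne); // true
    }
}
```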
This loss of information from precision limitations explains why this special number is treated like nothing. But something strange happens when we try another number.
Enter 0b1_01100111_00000000000000000000000
(-1 × 2^-24 × 1.0). This is a very similar number, except the exponent is now -24. Same process when we add 1.0: add 24 to the exponent so it matches 1.0's, and we end up with 01111111
. Also shift the mantissa 24 places to the right, and we end up with 0.00000000000000000000000[1]
. Here, I would expect the 1 at the end to be dropped, since there are already 23 zeros ahead of it, but when you actually run the code, it doesn't seem to be.
If we continue the math without truncating the mantissa, 1.00000000000000000000000[0]
- 0.00000000000000000000000[1]
= 0.11111111111111111111111[1]
. And since the implicit part has to be 1, we shift everything one place to the left, giving us 1.11111111111111111111111[0]
. We also subtract 1 from the exponent, giving 01111110
or -1. In other words, the result is normalized. Put together, that's 0b0_01111110_11111111111111111111111
, which is exactly what the code gives.
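As a sanity check on that result (not part of the question itself), the same sum can be done in double, where 1.0 - 2^-24 is exactly representable, and compared against the float output:

```java
public class ExactCheck {
    public static void main(String[] args) {
        float f = Float.intBitsToFloat(0b1_01100111_00000000000000000000000); // -2^-24
        float sum = 1.0f + f;

        double exact = 1.0 - Math.pow(2, -24); // exact in double
        System.out.println(sum == (float) exact); // true: the float sum is exact too
        System.out.println(sum);                  // prints 0.99999994
    }
}
```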
The question:
Why, then, does the code behave as if the mantissa carries 24 bits through the addition/subtraction, when only 23 are stored?
Some helpful code to visualize things:
// to visualize the float bits
private static String floatToBinaryString(float value) {
    String binaryString = String.format("%32s", Integer.toBinaryString(Float.floatToIntBits(value))).replace(' ', '0');
    return "0b" + binaryString.charAt(0) + "_" + binaryString.substring(1, 9) + "_" + binaryString.substring(9);
}

// The 2^-25 number + 1.0 outputs 0b0_01111111_00000000000000000000000 or 1.0
System.out.println(floatToBinaryString(
        Float.intBitsToFloat(0b1_01100110_00000000000000000000000)
        + Float.intBitsToFloat(0b0_01111111_00000000000000000000000)));

// The 2^-24 number + 1.0 outputs 0b0_01111110_11111111111111111111111 or 0.99999994
System.out.println(floatToBinaryString(
        Float.intBitsToFloat(0b1_01100111_00000000000000000000000)
        + Float.intBitsToFloat(0b0_01111111_00000000000000000000000)));