Machine epsilon vs least positive number

What is the difference between machine epsilon and the least positive number in floating-point representation?

If I plot the floating-point numbers on a number line, is the gap between exact 0 and the first positive number that floating point can represent different from the gap between two successive representable numbers?

Which one is generally smaller? And on which field does each of these values depend (the mantissa or the exponent)?

Melanesian answered 10/10, 2014 at 15:31

Machine epsilon is a bound on the relative error of a floating-point representation; from it you can derive absolute errors. How? Take IEEE 754 single precision, which has a 23-bit mantissa and an 8-bit biased exponent. Flipping the last mantissa bit of 1.0 changes the value by 2^-23, so epsilon is 2^-23: the least positive number for which 1 + epsilon does not equal 1.

So to find the absolute error at any magnitude, multiply epsilon by the scale of the number: the absolute rounding error of a value x is roughly epsilon * |x|.
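
A minimal sketch of both claims, assuming NumPy is available (used here only to get a 32-bit float type):

import numpy as np  # assumed available

eps32 = np.finfo(np.float32).eps                 # 2**-23, machine epsilon for single precision
np.float32(1.0) + eps32 == np.float32(1.0)       # False: epsilon still registers at 1.0
np.float32(1.0) + eps32 / 2 == np.float32(1.0)   # True: half an ulp is rounded away
np.float32(1024.0) + eps32 == np.float32(1024.0) # True: at 1024 the ulp is 1024 * eps, so plain eps is lost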

The least positive number, by contrast, is the smallest positive value the representation can encode: in IEEE 754 that is the encoding with an all-zero exponent field and a single 1 in the last mantissa bit (the smallest subnormal), which for single precision is 2^-149.
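
As a sketch (again assuming NumPy), reinterpreting the bit pattern 0x00000001 as a 32-bit float yields exactly that smallest subnormal:

import numpy as np
import math

np.array([1], dtype=np.uint32).view(np.float32)[0]  # ~1.4e-45, i.e. 2**-149
math.ldexp(1.0, -149)                               # the same value, computed directly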

The two are different things.

Anecdotic answered 11/11, 2014 at 15:25

For IEEE 754 binary64 we have:

  • sign: 1 bit
  • exponent: 11 bits
  • significand: 53 bits (52 stored, plus 1 implicit)

The smallest representable number depends on the exponent. With 11 bits (and subnormals) we can go down to approx. 10^-323. Using Python:

0.0 == 1e-323 # False
0.0 == 1e-324 # True

The machine epsilon depends on the significand and governs the relative rounding error. For 53 bits we have an epsilon of 2^-52, approx. 2.2e-16:

1.0 == 1.0 + 1e-15 # False
1.0 == 1.0 + 1e-16 # True
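
These limits can also be read straight from the standard library instead of probed by hand; a quick check on CPython (where float is binary64):

import sys

sys.float_info.epsilon                # 2.220446049250313e-16, i.e. 2**-52
1.0 + sys.float_info.epsilon == 1.0   # False, per the definition of epsilon
sys.float_info.min                    # 2.2250738585072014e-308, smallest normal double
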
Expatiate answered 13/4, 2017 at 18:28

The most common definition of machine epsilon is the distance between 1.0 and the next representable number. For normal/normalised numbers the most significant bit of the significand is always 1, meaning the machine epsilon depends only on the number of bits in the significand. The standard 64-bit IEEE double has one implied and 52 actual bits there; flipping the least of them gives you an epsilon of 2^-52. That's also what you tend to get as DBL_EPSILON (float.h) since many compilers use the IEEE format.

The value of the least significant bit in the significand - and hence the difference between two successive floats - is sometimes known as an ulp, a unit in the last place (Knuth). Hence the machine epsilon is the ulp at 1.0.
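
Python 3.9+ exposes this directly as math.ulp, which makes the epsilon-is-the-ulp-at-1.0 identity easy to verify:

import math

math.ulp(1.0) == 2**-52   # True: machine epsilon, ~2.22e-16
math.ulp(2.0) == 2**-51   # True: the ulp doubles with the exponent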

The smallest representable normalised double has a 1 bit before the radix point, the rest all zeroes, and the smallest possible (regular) exponent. Thus it depends only on the possible exponent range. For the standard double that's 2^-1022 (DBL_MIN).

Non-normalised (denormal/subnormal) values can get smaller. The smallest of them has a single lone 1 bit in the last position of the significand and the smallest possible exponent field (all zeros, which is reserved for zeros and denormals), and thus depends both on the exponent range and the number of mantissa bits. For the standard double that's 2^-1074.
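
A small check of that double dependence (standard binary64 Python float assumed): the smallest subnormal is exactly the smallest normal scaled down by the full width of the significand.

import sys, math

smallest = math.ldexp(1.0, -1074)                        # 5e-324, smallest subnormal double
smallest == sys.float_info.min * sys.float_info.epsilon  # True: 2**-1022 * 2**-52
smallest / 2                                             # 0.0: nothing smaller is representable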

The wiki article has nice diagrams, all the gory details, and links to the relevant standards.

The spacing between floats is regular as long as the exponent stays the same, and it doubles each time the exponent increases by one.

For all standard double denormals it is 2^-1074, and that's the tightest spacing that you can get for that type (in absolute terms). It stays 2^-1074 through the first normal binade starting at DBL_MIN, doubles to 2^-1073 at 2·DBL_MIN, and so on.
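
math.ulp (Python 3.9+) shows that spacing directly; a sketch:

import math, sys

math.ulp(sys.float_info.min)       # 5e-324: same spacing as the denormals below it
math.ulp(2 * sys.float_info.min)   # ~1e-323, i.e. 2**-1073: doubled at the next binade
math.ulp(1e300)                    # ~1.5e284: huge gaps at the top of the range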

From 2^53 on the spacing is 2.0, meaning you can use doubles as ersatz integers (on oddball platforms lacking 64-bit ones) only up to 2^53 inclusive.
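
Hence the classic integer-precision check:

float(2**53) == float(2**53 + 1)   # True: 2**53 + 1 is not representable, the spacing here is 2.0
float(2**53 - 1) == float(2**53)   # False: every integer up to 2**53 converts exactly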

Vociferance answered 18/10, 2014 at 22:31
