I'm working on a tool, in Java, that computes numbers that can get close to 1e-25 in the worst cases, and then compares them to one another. I'm obviously using double precision.
I have read in another answer that I shouldn't expect a relative precision better than about 1e-15 to 1e-17, and this other question deals with getting better precision by performing operations in a "better" order.
Which double-precision operations are more prone to lose precision along the way? Should I try to work with numbers as big as possible, or as small as possible? Should I do divisions first, before multiplications?
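To make the kind of effect I mean concrete, here is a toy snippet I put together (the values are made up, not from my actual tool) showing how subtracting two nearly equal doubles destroys most of the significant digits:

```java
public class CancellationDemo {
    public static void main(String[] args) {
        // Two values that agree in their first 13 significant digits.
        double a = 1.0000000000001e-25;
        double b = 1.0000000000000e-25;

        // The exact difference is 1e-38, but a and b each carry a
        // decimal-to-double conversion error of roughly 1e-41, so
        // only the first couple of digits of the result can be
        // trusted; the trailing digits are noise.
        double diff = a - b;
        System.out.println("a - b = " + diff);
    }
}
```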
I'd rather not use the BigDecimal class or an equivalent, as the code is already slow enough ;) (unless it doesn't impact speed too much, of course).
Any information will be greatly appreciated!
EDIT: The fact that the numbers are "small" in absolute value (around 1e-25) does not matter in itself, since a double can go down to about 4.9e-324 (Double.MIN_VALUE). What matters is that when two results are very similar, I have to compare, say, 4.64563824048517606458e-21 to 4.64563824048517606472e-21 (they differ only in the 19th and 20th significant digits). When computing these numbers, the difference is so small that I am probably hitting rounding error, and the trailing digits are effectively noise.
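As a sanity check (a quick test of my own, not production code), I printed the gap between consecutive doubles at that magnitude with Math.ulp:

```java
public class UlpCheck {
    public static void main(String[] args) {
        double a = 4.64563824048517606458e-21;
        double b = 4.64563824048517606472e-21;

        // Math.ulp(a) is the gap to the next representable double:
        // around 4.6e-21 it is roughly 7.5e-37, so only ~16
        // significant decimal digits exist at all. A difference in
        // the 19th/20th digit (~1.4e-40) is about 1/5000 of one ulp,
        // so I expect both literals to round to the very same double.
        System.out.println("ulp(a) = " + Math.ulp(a));
        System.out.println("a == b : " + (a == b));
    }
}
```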
The question is: how should I order the computation so that this loss of precision is minimized? It might mean doing divisions before multiplications, or doing the additions first.
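For instance, I can already see that the order of a plain summation changes the result (again a toy example, not my real code):

```java
public class SummationOrder {
    public static void main(String[] args) {
        double big = 1e20;   // one large term
        double tiny = 1.0;   // a million small terms
        int n = 1_000_000;

        // Large term first: each tiny addition is far below one ulp
        // of 1e20 (about 16384) and is rounded away completely.
        double bigFirst = big;
        for (int i = 0; i < n; i++) bigFirst += tiny;

        // Small terms first: they accumulate exactly to 1e6,
        // which then survives the final addition to 1e20.
        double tinyFirst = 0.0;
        for (int i = 0; i < n; i++) tinyFirst += tiny;
        tinyFirst += big;

        System.out.println("big first : " + bigFirst);   // 1.0E20
        System.out.println("tiny first: " + tinyFirst);  // ~1.00000000000001E20
    }
}
```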