Welcome to the world of traps, snares and loopholes. As mentioned elsewhere, a general purpose solution for floating point equality and tolerances does not exist. Given that, there are tools and axioms that a programmer may use in select cases.
```c
fabs(a_float - b_float) < tol
```

has the shortcoming OP mentioned: "does not work well for the general case where a_float might be very small or might be very large."

```c
fabs(a_float - ref_float) <= fabs(ref_float * tol)
```

copes with the varying ranges much better.
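Wrapping the two tests in small helpers makes the difference easy to see (a minimal sketch; the names `nearly_equal_abs` and `nearly_equal_rel` are illustrative, not from the question):

```c
#include <math.h>
#include <stdbool.h>

// Absolute tolerance: the allowed difference is fixed, so it is too strict
// for large magnitudes and too lax for tiny ones.
static bool nearly_equal_abs(double a, double b, double tol) {
    return fabs(a - b) < tol;
}

// Relative tolerance: the allowed difference scales with the reference value.
static bool nearly_equal_rel(double a, double ref, double tol) {
    return fabs(a - ref) <= fabs(ref * tol);
}
```

For example, `nearly_equal_abs(1.0e9, 1.0e9 + 1.0, 1e-6)` reports "not equal" even though the two values agree to about 9 significant digits, while `nearly_equal_rel(1.0e9 + 1.0, 1.0e9, 1e-6)` accepts them.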
OP's "single precision floating point number is use tol = 10E-6" is a bit worrisome for C and C++ so easily promote float
arithmetic to double
and then it's the "tolerance" of double
, not float
, that comes into play. Consider float f = 1.0; printf("%.20f\n", f/7.0);
So many new programmers do not realize that the 7.0
caused a double
precision calculation. Recommend using double
though out your code except where large amounts of data need the float
smaller size.
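A complete little program contrasting the promoted calculation with an all-`float` one (a sketch; the exact digits printed depend on the platform, though any IEEE-754 system shows the same pattern):

```c
#include <stdio.h>

int main(void) {
    float f = 1.0;
    // 7.0 is a double constant, so f is promoted and the division
    // is carried out in double precision.
    printf("%.20f\n", f / 7.0);
    // 7.0f keeps the arithmetic in float; far fewer of the printed
    // digits are meaningful.
    printf("%.20f\n", f / 7.0f);
    return 0;
}
```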
C99 provides `nextafter()`, which can be useful in helping to gauge "tolerance". Using it, one can determine the next representable number. This will help with the OP's "... the full number of significant digits for the storage type minus one ... to allow for roundoff error."

```c
if ((nextafter(x, -INF) <= y) && (y <= nextafter(x, +INF))) ...
```
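As a runnable sketch of that test, assuming `INF` above stands for infinity (the `INFINITY` macro from `<math.h>`); the helper name `within_one_ulp` is illustrative:

```c
#include <float.h>
#include <math.h>
#include <stdbool.h>
#include <stdio.h>

// True when y lies within one representable step (ULP) of x, in either direction.
static bool within_one_ulp(double x, double y) {
    return nextafter(x, -INFINITY) <= y && y <= nextafter(x, +INFINITY);
}

int main(void) {
    printf("%d\n", within_one_ulp(1.0, 1.0 + DBL_EPSILON));      // 1: next value above 1.0
    printf("%d\n", within_one_ulp(1.0, 1.0 + 10 * DBL_EPSILON)); // 0: several steps away
    return 0;
}
```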
The kind of `tol` or "tolerance" used is often the crux of the matter. Most often (IMHO) a relative tolerance is important, e.g. "Are `x` and `y` within 0.0001%?" Sometimes an absolute tolerance is needed, e.g. "Are `x` and `y` within 0.0001?"
The value of the tolerance is often debatable, for the best value is situation dependent. Comparing within 0.01 may work for a financial application for Dollars but not Yen. (Hint: be sure to use a coding style that allows easy updates.)
Q: How does one set `tol` correctly for all general cases? A: One doesn't. This kind of comparison is not suitable for all cases, regardless of tolerance value (and FWIW, wouldn't you know best what the appropriate tolerance is for the thing you are testing?) – Taveras