For testing a system that involves a lot of floating-point arithmetic, I have defined a deviation range for floating-point error: if the difference between two floats lies within this range, they are considered mathematically equal:
;;; Floating Error Agnostic =
;;; F2 is = to F1 within the deviation range +-DEVI
(defparameter *flerag-devi* .0005
  "Allowed deviation range, within which two floats
should be considered as mathematically equal.")

(defun flerag= (f1 f2 &optional (devi *flerag-devi*))
  "Return T if F1 and F2 differ by at most DEVI."
  (<= (abs (- f1 f2)) devi))
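For illustration, a few example calls (the values here are arbitrary ones picked for this post):
(flerag= 1.0 1.0004)    ;T, difference below *flerag-devi*
(flerag= 1.0 1.001)     ;NIL
(flerag= 1.0 1.001 .01) ;T, with an explicitly wider deviation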
Now when I test the function by comparing a float against that float plus a random fraction, the result is positive (at least in all of my test runs), e.g.:
(loop repeat 100000000
      with f = 1.0
      always (flerag= f (+ f (random *flerag-devi*)))) ;T
The same holds when I subtract a random fraction from the original float, e.g.:
(loop repeat 100000000
      with f = 19.0
      always (flerag= f (- f (random *flerag-devi*)))) ;T

(loop repeat 100000000
      with f = 3
      always (flerag= f (- f (random *flerag-devi*)))) ;T
However, when I set the original float to 1.0 (or 1):
(loop repeat 100000000
      with f = 1.0
      always (flerag= f (- f (random *flerag-devi*)))) ;NIL
it always returns NIL. What strikes me as even stranger is that the NIL comes back immediately after evaluation (in the other cases it took a few seconds to run through the 100000000 iterations). So far this has happened only with f = 1.0 and with no other number, regardless of whether the fraction is added or subtracted. Can anyone reproduce this behavior on their Lisp too? Any explanation and help would be appreciated.
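In case it helps with reproducing: the following snippet (just a debugging sketch, not part of the system) hunts for a concrete failing fraction instead of answering a bare NIL; note that it will not terminate on an implementation where no fraction fails:
(loop for x = (random *flerag-devi*)
      unless (flerag= 1.0 (- 1.0 x))
        return x)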
Update
The case with f = 1.0 also behaves like the other ones when I use a double-float 1, i.e. 1d0, or when I set:
(setf *read-default-float-format* 'double-float)
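That is, the double-float variant of the failing loop returns T like the others:
(loop repeat 100000000
      with f = 1d0
      always (flerag= f (- f (random *flerag-devi*)))) ;T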