sqrt, perfect squares and floating point errors

In the sqrt function of most languages (though here I'm mostly interested in C and Haskell), are there any guarantees that the square root of a perfect square will be returned exactly? For example, if I do sqrt(81.0) == 9.0, is that safe or is there a chance that sqrt will return 8.999999998 or 9.00000003?

If numerical precision is not guaranteed, what would be the preferred way to check that a number is a perfect square? Take the square root, get the floor and the ceiling and make sure they square back to the original number?
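Something like the following is what I have in mind (just a sketch of that floor/ceiling check; it assumes a nonnegative integer input below 2^53, so the value is exactly representable as a double):

#include <math.h>

/* Hypothetical sketch of the floor/ceiling check described above. */
int is_perfect_square_naive(double n) {
    double r = sqrt(n);
    double lo = floor(r), hi = ceil(r);
    return lo * lo == n || hi * hi == n;
}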

Thank you!

Subfamily answered 4/1, 2013 at 5:45 Comment(8)
Yes, numeric precision is not guaranteed.Abut
The technique you are looking for is Abs(x - y) < epsilon, where epsilon is a small constant, say, 0.0000001. Also, this question has been asked many times, in many forms previously...Gettysburg
What every computer scientist should know about floating point arithmeticDoughnut
@MitchWheat: It's well known that given imprecise numbers, FP operations will yield imprecise results. I think the OP is specifically talking about precise numbers that can be represented exactly, even in FP. His question is more like "given that f is a precise number, will a specific operation yield a precise result?"Hipster
@Gabe; is 9.0 precise? Or is it 8.9999999999....xxxGettysburg
@Mitch: 9.0 is an integer so it can be precisely represented as a binary floating point number in the form 1.001*2^3, and being of magnitude smaller than the mantissa of all IEEE formats, is represented exactly in any format (0x41100000 as a single or 0x4022000000 as a double).Hipster
If there exists an exact square root for some number x.xxxx it should be of form y.yy -- halving the number of digits.Whooper
@AlokSave Superstition! Square root is one of the basic IEEE 754 operations and must be correctly rounded. See tmyklebu's answer.Nebo

In IEEE 754 floating-point, if the double-precision value x is the square of a nonnegative representable number y (i.e. y*y == x and the computation of y*y does not involve any rounding, overflow, or underflow), then sqrt(x) will return y.

This is all because sqrt is required to be correctly-rounded by the IEEE 754 standard. That is, sqrt(x), for any x, will be the closest double to the actual square root of x. That sqrt works for perfect squares is a simple corollary of this fact.
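For example, this guarantee can be sanity-checked directly; a minimal sketch (the constants are arbitrary exact squares of small integers):

#include <assert.h>
#include <math.h>

int main(void) {
  /* 81.0 and 9.0 are exactly representable, and 9.0*9.0 == 81.0 involves
     no rounding, so a correctly-rounded sqrt must return exactly 9.0. */
  assert(sqrt(81.0) == 9.0);
  assert(sqrt(285156.0) == 534.0);  /* 534*534 == 285156 */
  return 0;
}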

If you want to check whether a floating-point number is a perfect square, here's the simplest code I can think of:

#include <fenv.h>
#include <math.h>
#pragma STDC FENV_ACCESS ON

int issquare(double d) {
  if (signbit(d)) return 0;          /* reject negatives and -0.0 */
  feclearexcept(FE_INEXACT);         /* clear any stale inexact flag */
  double dd = sqrt(d);               /* correctly-rounded square root */
  asm volatile("" : "+x"(dd));       /* keep the compiler from eliding dd */
  return !fetestexcept(FE_INEXACT);  /* exact iff sqrt raised no inexact flag */
}

I need the empty asm volatile block that depends on dd because otherwise your compiler might be clever and "optimise" away the calculation of dd.

I used a couple of weird functions from fenv.h, namely feclearexcept and fetestexcept. It's probably a good idea to look at their man pages.

Another strategy that you might be able to make work is to compute the square root, check whether it has set bits in the low 26 bits of the mantissa, and complain if it does. I try this approach below.

And I needed to check the sign bit of d because otherwise the function would return true for -0.0, and for negative numbers too: sqrt of a negative is a NaN that raises FE_INVALID, not FE_INEXACT.

EDIT: Eric Postpischil suggested that hacking around with the mantissa might be better. Given that the above issquare doesn't work in another popular compiler, clang, I tend to agree. I think the following code works:

int _issquare2(double d) {
  if (signbit(d)) return 0;   /* reject negatives and -0.0 */
  int foo;
  double s = sqrt(d);
  double a = frexp(s, &foo);  /* a = significand of the root, in [0.5, 1) */
  frexp(d, &foo);             /* foo = exponent of d */
  if (foo & 1) {              /* exponent of d is odd */
    return (a + 33554432.0) - 33554432.0 == a && s*s == d;  /* 33554432 = 2^25 */
  } else {                    /* exponent of d is even */
    return (a + 67108864.0) - 67108864.0 == a;              /* 67108864 = 2^26 */
  }
}

Adding 67108864.0 (that is, 2^26) to a and then subtracting it again has the effect of wiping the low bits of the mantissa. We will get a back exactly when those bits were clear in the first place.
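A quick way to see that round trip in action (a standalone sketch, assuming round-to-nearest mode and C99 hex float literals):

#include <stdio.h>

int main(void) {
  double a = 0.75;           /* significand easily fits in 26 bits */
  double b = 0.5 + 0x1p-53;  /* lowest significand bit is set */
  printf("%d\n", (a + 67108864.0) - 67108864.0 == a);  /* prints 1 */
  printf("%d\n", (b + 67108864.0) - 67108864.0 == b);  /* prints 0 */
  return 0;
}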

Intubate answered 4/1, 2013 at 7:28 Comment(9)
A number of requirements must be satisfied before that code can be said to implement “Is d a square?”. You need a binding from C to IEEE 754 or other suitable floating-point specification, which the C standard does not require and many C implementations do not provide. You must include <fenv.h> and set FENV_ACCESS to on with the appropriate #pragma or know it is on for the C implementation being used. 1/d raises an exception, which can be avoided by using signbit(d) instead. And, of course, the C implementation must support the asm extension.Winton
Accessing the floating-point environment with feclearexcept and fetestexcept may be very slow on some processors. The strategy to check the low bits of the significand (not mantissa) would be faster, since this can be done with ordinary arithmetic. Overflow, underflow, and subnormals would not be a concern since the square root is far from the extremes of the floating-point range. However, I would have some concern about the precision used. It is not clear to me that C Annex F prevents an implementation from using greater precision than the nominal types, as is permitted without Annex F.Winton
Should not the asm be unnecessary? There is a sequence point after double dd = sqrt(d);, so all side effects of that should be complete before fetestexcept is called.Winton
The asm block isn't for sequencing; I'm trying to tell the compiler that dd isn't dead and that it's best not to optimise it away. Whether or not I've done #pragma STDC FENV_ACCESS ON, both gcc and clang seem to treat sqrt as a pure function with no side effects. (My copy of clang doesn't even care that there's an asm block depending on dd; it elides its computation completely. I guess you do need to do the bit-fiddling if you want a portable implementation. Sigh.)Intubate
Ah. In that case, a proper implementation will not optimize away the initialization of dd, because it contains a side effect that fetestexcept accesses. An improper implementation might, but then the asm is only a hack for specific improper implementations.Winton
You do not need to use frexp to examine the significand (not mantissa). Dekker’s algorithm will split it: d = x * (0x1p26 + 1); x0 = d - (d-x); x1 = x-x0; produces the low 26 bits in x1 and the high bits in x0. Round-to-nearest mode is required. This uses only plain arithmetic (+, -, *). frexp might also be slow. In your code a = sqrt(d); b = a*(0x1p26+1); return a == b-(b-a); should work.Winton
However, this fails testing 67108863. The square root is returned as 0x1.ffffffcp+12, which has the low 26 bits clear but is inexact. So the fact that the low 26 bits of the evaluated sqrt(x) are clear does not guarantee that the square root is exact.Winton
@EricPostpischil: Yup. If the mantissa of the square root is bigger than sqrt(2), the exponent of the product will be one higher than I'd want it to be. The first piece of code I posted was wrong in this way too; I posted it because my testing sucked. Seems like I need to switch on the parity of the exponent of d in order to make this work. I'll try using your * (1p26+1) trick and post another edit if I can make it work.Intubate
Oh too bad, I didn't see your edit before posting an answer... One of my tests in Squeak Smalltalk: 2r111010101010101010101010101 squared asFloat sqrt = 2r111010101010101010101010101 but 2r111010101010101010101010101 squared asFloat ~= 2r111010101010101010101010101 squaredCerebrate

According to this paper, which discusses proving the correctness of IEEE floating-point square root:

The IEEE-754 Standard for Binary Floating-Point Arithmetic [1] requires that the result of a divide or square root operation be calculated as if in infinite precision, and then rounded to one of the two nearest floating-point numbers of the specified precision that surround the infinitely precise result

Since a perfect square that can be represented exactly in floating-point is an integer and its square root is an integer that can be precisely represented, the square root of a perfect square should always be exactly correct.

Of course, there's no guarantee that your code will execute with a conforming IEEE floating-point library.

Hipster answered 4/1, 2013 at 7:32 Comment(1)
Assuming the sqrt routine is implemented like long division (repeated shift and subtract), for a perfect square the algorithm ends with zero remainder and produces an exact result.Whooper

@tmyklebu perfectly answered the question. As a complement, let's look at a possibly less efficient alternative for testing whether a fraction is a perfect square, without the asm directive.

Let's suppose we have an IEEE 754 compliant sqrt which rounds the result correctly.
Let's suppose exceptional values (Inf/NaN) and zeros (+/-) are already handled.
Let's decompose sqrt(x) into I*2^m, where I is an odd integer spanning n bits: 2^(n-1) + 1 <= I < 2^n.

If n > 1 + floor(p/2), where p is the floating-point precision (e.g. p = 53 and n > 27 in double precision),
then 2^(2n-2) < I^2 < 2^(2n).
As I is odd, I^2 is odd too and thus spans more than p bits.
Thus I is not the exact square root of any floating-point number representable at this precision.
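As a concrete instance (a standalone sketch for double precision, p = 53): I = 2^27 + 1 is odd and spans 28 > 27 bits, so I^2 = 2^54 + 2^28 + 1 needs 55 bits and cannot be stored exactly in a double:

#include <stdio.h>

int main(void) {
    double I = 134217729.0;         /* 2^27 + 1 */
    double sq = I * I;              /* correctly rounded: the low bit of I^2 is lost */
    printf("%.0f\n", sq - 0x1p54);  /* prints 268435456 (2^28), not 268435457 */
    return 0;
}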

But given I^2 < 2^p, could we say that x was a perfect square? The answer is obviously no. A Taylor expansion gives

sqrt(I^2+e) = I*sqrt(1 + e/I^2) = I + e/(2*I) - e^2/(8*I^3) + O(e^3/I^5)

Thus, for e from ulp(I^2) up to about sqrt(ulp(I^2)), the correctly rounded square root rsqrt(I^2+e) is still I (in round-to-nearest-even, truncate, or floor mode).

Thus we would have to assert that sqrt(x)*sqrt(x) == x.
But the above test is not sufficient either. For example, assuming IEEE 754 double precision, sqrt(1.0e200)*sqrt(1.0e200) == 1.0e200, where 1.0e200 is exactly 99999999999999996973312221251036165947450327545502362648241750950346848435554075534196338404706251868027512415973882408182135734368278484639385041047239877871023591066789981811181813306167128854888448, whose factorization contains 2^613; with an odd power of two, it is hardly the perfect square of any fraction...
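To see that failure concretely, here is a minimal sketch of the counterexample above:

#include <math.h>
#include <stdio.h>

int main(void) {
    /* 1.0e200 is not a perfect square (its factorization contains 2^613,
       an odd power of two), yet the rounded root squares back to it. */
    double x = 1.0e200;
    double s = sqrt(x);
    printf("%d\n", s * s == x);  /* prints 1 */
    return 0;
}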

So we can combine both tests:

#include <float.h>
#include <math.h>
#include <stdbool.h>

bool squared_significand_fits_in_precision(double x) {
    double scaled = scalb( x , DBL_MANT_DIG/2 - ilogb(x));
    return scaled == floor(scaled)
        && (scalb(scaled,-1) == floor(scalb(scaled,-1)) /* scaled is even */
            || scaled < scalb( sqrt((double) FLT_RADIX) , DBL_MANT_DIG/2 + 1));
}

bool is_perfect_square(double x) {
    return sqrt(x)*sqrt(x) == x
        && squared_significand_fits_in_precision(sqrt(x));
}

EDIT: If we want to restrict to the case of integers, we can also check that floor(sqrt(x))==sqrt(x) or use dirty bit hacks in squared_significand_fits_in_precision...
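For instance, that integer-only restriction could be a hypothetical helper like this, combined with is_perfect_square above as root_is_integer(x) && is_perfect_square(x):

#include <math.h>
#include <stdbool.h>

/* A perfect square of an integer must itself have an integral square root. */
static bool root_is_integer(double x) {
    double s = sqrt(x);
    return floor(s) == s;
}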

Cerebrate answered 4/1, 2013 at 23:14 Comment(2)
Damn. Wish I knew about ilogb before.Intubate
Unfortunately, while available in good libraries, scalb is not in the C standard (ilogb and scalbn are, since C99)... so I cheated a bit.Cerebrate

Instead of doing sqrt(81.0) == 9.0, try 9.0*9.0 == 81.0. This will always work as long as the square is within the limits of the floating point magnitude.

Edit: I was probably unclear about what I meant by "floating point magnitude". What I mean is to keep the number within the range of integer values that can be held without precision loss, less than 2^53 for an IEEE double. I also expected that there would be a separate operation to make sure the square root was an integer.

double root = floor(sqrt(x) + 0.5);            /* round the result to the nearest integer */
if (root*root == x && x < 9007199254740992.0)  /* 9007199254740992 = 2^53 */
    /* it's a perfect square */
Purgatory answered 4/1, 2013 at 5:50 Comment(4)
It fails when you substitute infinity for 81.0 and 1e300 for 9.0. (At least for doubles.) Or leave 81.0 alone and use -9.0 for 9.0.Intubate
@EricPostpischil, I see this answer has caused some confusion. Hopefully I've clarified it.Purgatory
but scalb( FLT_RADIX , 2*DBL_MANT_DIG) is a perfect square too, isn't it?Cerebrate
Indeed, it is not, because it's an odd power of 2. Probably you meant scalbn(1.0, 2*DBL_MANT_DIG).Grettagreuze
