How do I compare numbers of the form a + b*sqrt(c) without intermediate integers getting larger and larger?
I'm working on an application that solves quadratic constraints in 2D Euclidean geometry involving circles and lines (constructible numbers) and displays the results graphically. I've found this paper on representing such problems in a binary tree, but I'm running into an implementation issue:

I need to compare numbers of the form a + b*sqrt(c) for the standard relational operations of less than, greater than, and equality. The values of c for my application are restricted to 2, 3, 5, 6, 10, 15, or 30. For example (C-like pseudocode, ^ is "to the power of" operator):

boolean CompareConstructibleNumbers(int a1, int b1, int c1,
                                    int a2, int b2, int c2)
{
    // True if a1 + b1*sqrt(c1) > a2 + b2*sqrt(c2), after squaring both
    // sides twice to eliminate the radicals
    return 4 * ((a1 - a2)^2) * (b1^2) * c1
         > ((b2^2) * c2 - (b1^2) * c1 - (a1 - a2)^2)^2;
    // Not sure if I have the direction right for >
}

This naive implementation requires many multiplications, so 32-bit integers become 64-bit, then 128-bit, and so on. I don't want to pull a custom BigInteger into my implementation solely to store a temporary value used only for a comparison.

I would also prefer not to use floats, since calculating sqrt(c) directly as a float risks round-off error. I need exact computation for this application: not infinite precision, but a guarantee that every comparison gives the correct result.

How can I go about comparing constructible numbers of the form a + b*sqrt(c) without requiring huge intermediate integer values? My initial values for a and b are in the 32-bit range.

****UPDATE**** Thanks, everyone, for your responses. Following up on the suggestion to pursue continued fractions, I was able to use the fundamental recurrence formulas to generate arbitrarily precise rational approximations of the square roots.
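
For reference, a minimal sketch of that recurrence in Python (the function name is mine): it expands the periodic continued fraction of sqrt(n) and accumulates the convergents h/k via the fundamental recurrences h_j = a_j*h_{j-1} + h_{j-2} and k_j = a_j*k_{j-1} + k_{j-2}.

```python
import math

def sqrt_convergents(n, count):
    """First `count` continued-fraction convergents (h, k) of sqrt(n),
    for a non-square n > 1, via the fundamental recurrence formulas."""
    a0 = math.isqrt(n)
    m, d, a = 0, 1, a0
    h_prev, h = 1, a0               # h_{-1} = 1, h_0 = a0
    k_prev, k = 0, 1                # k_{-1} = 0, k_0 = 1
    out = [(h, k)]
    for _ in range(count - 1):
        # Standard step for the periodic continued fraction of sqrt(n)
        m = d * a - m
        d = (n - m * m) // d
        a = (a0 + m) // d
        # Fundamental recurrences for the next convergent
        h, h_prev = a * h + h_prev, h
        k, k_prev = a * k + k_prev, k
        out.append((h, k))
    return out
```

Each convergent h/k satisfies |h/k - sqrt(n)| < 1/k^2, so the precision gained per iteration is predictable.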

I also found this paper, which discusses error accumulation when fixed-point approximations of real numbers are multiplied by integers. Luckily, the integer part of every square root I'm interested in is at most 5 (needing only 3 bits), so I have plenty of bits available to represent the fractional part of an approximation. For multiplication by a 32-bit integer, I need at least a Q3.32 fixed-point approximation to keep the error to 1 bit after the multiplication.

So while a double's 53 bits of precision are enough to store a sufficiently accurate approximation of a square root, they are not enough to hold the result after multiplication by a 32-bit integer, which requires a minimum of 67 bits of precision to minimize error. Using a fixed-point representation in a 64-bit integer (say Q16.48) for sqrt(c) and a 32-bit integer b, I plan to use multi-word arithmetic on 96- or 128-bit numbers to do the comparison without enough error to throw off the results. I'm confident this will be enough accuracy for comparing constructible numbers that use only these 7 square roots, but I'm interested in second opinions. Any thoughts?
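
To make the error bound concrete, here is a quick check in Python (used purely as a big-integer calculator; the names are mine). Flooring sqrt(c)*2^32 gives a Q3.32 value r that brackets sqrt(c) between r/2^32 and (r+1)/2^32; multiplying both ends by a 32-bit b widens the bracket to b/2^32 < 1, which is the 1-bit error mentioned above:

```python
import math

FRAC = 32                           # fractional bits of the Q3.32 format

def fixed_sqrt(c):
    """floor(sqrt(c) * 2^FRAC): a Q3.FRAC under-approximation of sqrt(c)."""
    return math.isqrt(c << (2 * FRAC))

for c in (2, 3, 5, 6, 10, 15, 30):
    r = fixed_sqrt(c)
    # r/2^FRAC <= sqrt(c) < (r+1)/2^FRAC, verified in exact integer arithmetic
    assert r * r <= c << (2 * FRAC) < (r + 1) * (r + 1)
    b = (1 << 31) - 1               # worst-case 32-bit magnitude for b
    lo, hi = b * r, b * (r + 1)     # bracket of b*sqrt(c), in units of 2^-FRAC
    # Bracket width is b < 2^FRAC, i.e. under 1 after rescaling: a 1-bit error
    assert hi - lo == b and hi - lo < 1 << FRAC
```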

Oruro answered 7/1, 2013 at 3:47 Comment(9)
a and b are signed integers, yes? – Interdigitate
Yes, a and b are signed integers. – Oruro
Precompute the square root and use a double. It has 53 bits of mantissa, so you're unlikely to have any round-off error. I mean, floating-point numbers were designed specifically for this; going to great lengths to keep calculations in integers just isn't worth it. – Shastashastra
@Thomas: Are you sure the ratio of two 32-bit numbers cannot possibly be within 2^(-53) of sqrt(2), sqrt(3), sqrt(5), etc.? Can you prove it? If not, using a double runs the risk of giving the wrong answer... Consider comparing a - b*sqrt(3) against zero when b/a is very, very close to sqrt(3). Can you prove that 53 bits of precision for sqrt(3) never gives the wrong answer here? And similarly for every possible comparison being asked about? Very dangerous to use a double here, IMO. – Treadwell
Put the question on math.stackexchange.com. – Polyphony
If efficiency is a concern, do the calculation in double first and fall back to the integer formula only if the result is questionable (i.e. the two numbers are almost equal). – Salish
@JoopEggen, I'd considered posting on Math.StackExchange, but I feel this question is more about implementation in a program than about the mathematics itself. And StackOverflow does have a tag for math-related programming issues. – Oruro
No criticism intended; I just thought you might not have been aware of that sibling site. – Polyphony
None taken. I appreciate the tip. – Oruro
I don't think there's a formula that lets you stay within 64 bits for exact comparison, assuming your values use the full 32 bits. The problem as I see it is that numbers of the form a + b*sqrt(c) are dense in the reals (assuming c is not a perfect square), so you get arbitrarily subtle comparisons that need lots of precision. Therefore, you essentially need to get rid of the square roots by squaring, which will take 3 multiplications.
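
To make the squaring concrete, here is one way the comparison could be sketched in Python (chosen because its integers are already arbitrary precision, so overflow is a non-issue; the helper names are made up). The fiddly part is the sign bookkeeping: you may only square both sides of an inequality when they have the same sign, and you must flip the result when both are negative.

```python
def sign(x):
    """-1, 0, or 1 according to the sign of x."""
    return (x > 0) - (x < 0)

def cmp_sqrt(p, q, c):
    """Sign of q*sqrt(c) - p, for integers p, q and a positive non-square c."""
    if sign(q) != sign(p):
        return sign(q) if q != 0 else -sign(p)
    if q == 0:                      # then p == 0 as well
        return 0
    # p and q share a non-zero sign, so squaring is valid,
    # flipping the answer when both are negative.
    return sign(q) * sign(q * q * c - p * p)

def compare_constructible(a1, b1, c1, a2, b2, c2):
    """Sign of (a1 + b1*sqrt(c1)) - (a2 + b2*sqrt(c2)), computed exactly."""
    A, B, C = a1 - a2, b1, b2       # want sign of A + B*sqrt(c1) - C*sqrt(c2)
    sL = cmp_sqrt(-A, B, c1)        # sign of the left side A + B*sqrt(c1)
    sR = sign(C)                    # sign of C*sqrt(c2), since sqrt(c2) > 0
    if sL != sR:
        return 1 if sL > sR else -1
    if sL == 0:
        return 0
    # Same non-zero sign: compare squares, flipping if both sides negative.
    # L^2 - R^2 = (A^2 + B^2*c1 - C^2*c2) + 2*A*B*sqrt(c1)
    D = A * A + B * B * c1 - C * C * c2
    return sL * cmp_sqrt(-D, 2 * A * B, c1)
```

The two squarings are exactly where the intermediate values grow to roughly four times the input width, which is the growth described above.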

A BigInt implementation in this case is not actually so bad, since you only need to implement multiplication, addition, subtraction and comparison. These can be implemented efficiently in very few lines of code; it's typically division that is annoying to implement. In addition, you know your numbers can never overflow an array of two 64-bit cells, so you don't actually need to keep track of the number of bits in the products.
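
For illustration, a sketch of what such a minimal fixed-width implementation might look like, written in Python with explicit 64-bit limbs purely to mimic the C version (Python's own ints are of course already arbitrary precision); subtraction and sign handling would follow the same pattern:

```python
BITS, MASK = 64, (1 << 64) - 1

def to_limbs(x, n=3):
    """Split a non-negative integer into n little-endian 64-bit limbs."""
    return [(x >> (BITS * i)) & MASK for i in range(n)]

def add(a, b):
    """Limb-wise addition with carry propagation (fixed width, carry out dropped)."""
    out, carry = [], 0
    for x, y in zip(a, b):
        s = x + y + carry
        out.append(s & MASK)
        carry = s >> BITS
    return out

def mul(a, b, n=3):
    """Schoolbook multiplication, truncated to n limbs."""
    out = [0] * n
    for i, x in enumerate(a):
        carry = 0
        for j in range(n - i):
            s = out[i + j] + x * b[j] + carry
            out[i + j] = s & MASK
            carry = s >> BITS
    return out

def cmp_limbs(a, b):
    """Compare little-endian limb arrays: -1, 0, or 1, most significant first."""
    for x, y in zip(reversed(a), reversed(b)):
        if x != y:
            return 1 if x > y else -1
    return 0
```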

EDIT: Regarding the use of doubles suggested by Thomas, and Nemo's comment on that: it is actually very easy to find a rational approximation using 32-bit ints that is well within 2^(-53) of sqrt(2), by using the continued fraction representation.
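
For example (a Python sketch; the loop and names are mine): every convergent p/q of sqrt(2) satisfies |p^2 - 2q^2| = 1, hence |p/q - sqrt(2)| = 1/(q*(p + q*sqrt(2))) < 1/(q*(p + q)), so the 2^(-53) bound can be certified in exact integer arithmetic while p and q still fit in 32 bits:

```python
def close_rational_to_sqrt2():
    """First continued-fraction convergent p/q of sqrt(2) with a certified
    error below 2^-53, found without any floating point."""
    p, q = 1, 1                     # convergents of sqrt(2): (p, q) -> (p + 2q, p + q)
    while True:
        p, q = p + 2 * q, p + q
        # |p^2 - 2q^2| == 1 for every convergent of sqrt(2), so
        # |p/q - sqrt(2)| = 1/(q*(p + q*sqrt(2))) < 1/(q*(p + q)).
        if q * (p + q) > 1 << 53:
            return p, q
```

The loop terminates with q well under 2^32, so a double carrying 53 bits cannot distinguish this p/q from sqrt(2) reliably, which is exactly the danger Nemo's comment raises.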

Crummy answered 7/1, 2013 at 5:37 Comment(7)
You are probably right (+1). But I wonder if there could be some trick, given the limited set of values for c. For example, if we were only interested in a + b*sqrt(2) with a, b non-negative, we could use a "base sqrt(2)" representation to allow fast comparisons in 64 bits. Maybe some clever extension of that could be used to dodge a full BigInt implementation. – Treadwell
Re: continued fractions. That is actually what I had in mind :-) – Treadwell
@Treadwell and Edvard, I had considered continued fractions, and the main problem I have with them is that since they are a recursive process (at least Gosper's implementation is), I'm not sure what kind of time bound I'll have on receiving an answer, and knowing that is important for my application. If I knew a definite upper bound on time (e.g. no more than n iterations), I'd be a lot happier with a CF approach. – Oruro
@Nemo, the problem with base sqrt(2) is that it doesn't work for sqrt(3), sqrt(5), etc. (Golden-ratio base is really fun, though.) – Oruro
I think the actual limit on size is three 64-bit cells, not two, unless I'm missing something, but the point that BigInt may not be so bad stands. I'll consider this, but I was hoping that @Treadwell is essentially right that there may be some clever workaround given the limited values of c. – Oruro
I was a bit tired when I wrote this. You are squaring twice and doing one multiplication, so you need three 64-bit cells. – Crummy
I'm wondering if @Shastashastra might be on to something with his suggestion to store an approximation (not necessarily as a float). Continued fractions are essentially better and better rational estimates of a value. Since we're only dealing with 7 distinct square roots, would a 32-bit numerator and a 32-bit denominator be sufficiently precise to guarantee correct results despite being only an approximation? I know constructible numbers are dense in the reals, but we're not dealing with all possible values of c either. Any ideas on how to prove a given approximation is sufficiently precise? – Oruro
