I'm working on an application that solves quadratic constraints in 2D Euclidean geometry involving circles and lines (constructible numbers) and displays the results graphically. I've found this paper on representing such problems in a binary tree, but I'm running into an implementation issue:
I need to compare numbers of the form a + b*sqrt(c) for the standard relational operations: less than, greater than, and equality. The values of c in my application are restricted to 2, 3, 5, 6, 10, 15, or 30. For example (C-like pseudocode):
    bool CompareConstructibleNumbers(int a1, int b1, int c1, int a2, int b2, int c2)
    {
        // True iff a1 + b1*sqrt(c1) > a2 + b2*sqrt(c2), obtained by squaring twice.
        int d = a1 - a2;
        int t = b2 * b2 * c2 - b1 * b1 * c1 - d * d;
        return 4 * d * d * b1 * b1 * c1 > t * t;
        // Not sure if I have the direction right for >.
    }
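For reference, that expression comes from squaring twice; as far as I can tell, each squaring step only preserves the direction of the inequality when both sides being squared are nonnegative, which is why I'm unsure about the sign cases:

$$
\begin{aligned}
a_1 + b_1\sqrt{c_1} &> a_2 + b_2\sqrt{c_2} \\
d + b_1\sqrt{c_1} &> b_2\sqrt{c_2} && (d = a_1 - a_2) \\
2\,d\,b_1\sqrt{c_1} &> b_2^2 c_2 - b_1^2 c_1 - d^2 && \text{(square, rearrange)} \\
4\,d^2\,b_1^2\,c_1 &> \bigl(b_2^2 c_2 - b_1^2 c_1 - d^2\bigr)^2 && \text{(square again)}
\end{aligned}
$$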
This naive implementation requires repeated multiplication, so 32-bit integers become 64-bit integers, then 128-bit, and so on. I don't want to pull a custom BigInteger into my implementation solely to hold a temporary value used only for this comparison. I would also prefer not to use floats, to avoid the risk of round-off error from computing sqrt(c) directly as a float. I need exact comparisons for this application; not necessarily infinite precision, but no round-off error that could produce a wrong result.
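To put rough numbers on that growth: with |a|, |b| < 2^31 and c <= 30, the right-hand side of the comparison above can be as large as about

$$
\bigl(b_2^2 c_2 - b_1^2 c_1 - (a_1 - a_2)^2\bigr)^2 \;<\; \bigl(30\cdot 2^{62} + 30\cdot 2^{62} + 2^{64}\bigr)^2 \;<\; 2^{140},
$$

i.e. well past 128 bits, which is exactly the kind of temporary I'd need a BigInteger for.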
How can I go about comparing constructible numbers of the form a + b*sqrt(c) without requiring huge intermediate integer values? My initial values for a and b fit in 32-bit integers.
**UPDATE:** Thanks, everyone, for your responses. Following up on the suggestion to pursue continued fractions, I was able to use the Fundamental Recurrence Formulas to generate arbitrarily precise rational approximations of the square roots.
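In case it helps anyone else, here is a minimal sketch of what I'm doing (the names are mine and error handling is omitted; it assumes c is a non-square positive integer, which holds for my seven values):

    #include <stdint.h>
    #include <stdio.h>

    /* floor(sqrt(n)) for small n (c is at most 30 here). */
    static uint64_t isqrt_small(uint64_t n)
    {
        uint64_t r = 0;
        while ((r + 1) * (r + 1) <= n)
            r++;
        return r;
    }

    /* n-th convergent p/q of sqrt(c), via the continued fraction of sqrt(c)
     * and the fundamental recurrence formulas. */
    static void sqrt_convergent(uint64_t c, int n, uint64_t *p_out, uint64_t *q_out)
    {
        uint64_t a0 = isqrt_small(c);
        uint64_t m = 0, d = 1, a = a0;      /* continued-fraction state */
        uint64_t p_prev = 1, p = a0;        /* p_{-1} = 1, p_0 = a0     */
        uint64_t q_prev = 0, q = 1;         /* q_{-1} = 0, q_0 = 1      */

        for (int k = 0; k < n; k++) {
            m = d * a - m;                  /* next partial quotient    */
            d = (c - m * m) / d;
            a = (a0 + m) / d;

            uint64_t p_next = a * p + p_prev;   /* fundamental recurrences */
            uint64_t q_next = a * q + q_prev;
            p_prev = p; p = p_next;
            q_prev = q; q = q_next;
        }
        *p_out = p;
        *q_out = q;
    }

    int main(void)
    {
        uint64_t p, q;
        sqrt_convergent(30, 10, &p, &q);    /* |p/q - sqrt(30)| < 1/q^2 */
        printf("sqrt(30) ~= %llu/%llu\n", (unsigned long long)p, (unsigned long long)q);
        return 0;
    }

Because each convergent p/q satisfies |p/q - sqrt(c)| < 1/q^2, the denominators only need to grow past roughly 10^5 before the approximation is good to better than 2^-33.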
I also found this paper, which discusses the error that accumulates when fixed-point approximations of real numbers are multiplied by integers. Luckily, the integer part of every square root I'm interested in is at most 6 (needing only 3 bits), so I have plenty of bits left over for the fractional part of the approximation. For multiplication by a 32-bit integer, I need at least a Q3.32 fixed-point approximation to keep the error down to 1 bit after the multiplication.
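Concretely, something like this is what I have in mind for that step (a sketch only; it assumes the convergent p/q from above is already accurate to better than 2^-33 and that p stays below 2^32, and it uses unsigned __int128 as a GCC/Clang convenience where a two-word multiply would otherwise be needed):

    #include <stdint.h>

    /* Pack a sufficiently accurate convergent p/q of sqrt(c) into Q3.32. */
    static uint64_t sqrt_to_q3_32(uint64_t p, uint64_t q)
    {
        return (p << 32) / q;   /* ~ floor(sqrt(c) * 2^32); requires p < 2^32 */
    }

    /* b * sqrt(c) in Q35.32: up to ~67 significant bits, so it does not fit
     * in a 64-bit word (or in a double's 53-bit significand). */
    static unsigned __int128 mul_b_sqrt_c(uint32_t b, uint64_t sqrt_c_q3_32)
    {
        return (unsigned __int128)b * sqrt_c_q3_32;
    }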
So while a double, with its 53-bit significand, can store an accurate enough approximation of a square root, it cannot hold the result after multiplication by a 32-bit integer, since that needs a minimum of 67 bits of precision to keep the error down. Using a fixed-point representation of sqrt(c) in a 64-bit integer (say Q16.48) and a 32-bit integer b, I plan to use multi-word arithmetic with 96- or 128-bit numbers to do the comparison without enough error to throw off the results. I'm confident this will be accurate enough for comparing constructible numbers built from only these seven square roots, but I'm interested in second opinions. Any thoughts?
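For concreteness, this is roughly the shape of the comparison I'm planning (a sketch with names of my own; __int128 stands in for the 96/128-bit multi-word arithmetic, sqrt_c_q16_48 is a precomputed Q16.48 approximation of the relevant sqrt(c), and the near-tie case, where the two sides differ by less than the accumulated approximation error, would still need separate handling):

    #include <stdint.h>
    #include <stdbool.h>

    /* One side, a + b*sqrt(c), evaluated as a signed fixed-point value with
     * 48 fractional bits. __int128 stands in for proper multi-word arithmetic. */
    static __int128 side_q48(int32_t a, int32_t b, uint64_t sqrt_c_q16_48)
    {
        return ((__int128)a << 48) + (__int128)b * (__int128)sqrt_c_q16_48;
    }

    /* True iff a1 + b1*sqrt(c1) > a2 + b2*sqrt(c2), provided the true
     * difference exceeds the error of the Q16.48 approximations. */
    static bool greater_than(int32_t a1, int32_t b1, uint64_t sqrt_c1_q16_48,
                             int32_t a2, int32_t b2, uint64_t sqrt_c2_q16_48)
    {
        return side_q48(a1, b1, sqrt_c1_q16_48) > side_q48(a2, b2, sqrt_c2_q16_48);
    }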
a and b are signed integers, yes? – Interdigitated

double here, IMO. – Treadwell