How are logarithms programmed? [closed]
Are they just figured out using the same mechanism as a linear search, or is the range narrowed down somehow, similar to a binary search?

Vasiliu answered 24/5, 2012 at 6:9 Comment(4)
'brute force' and 'algorithm' are not mutually exclusive; your question poses a false dichotomy. And the answer is, YES, there's an algorithm for doing so.Urochrome
Ah, I feel silly having forgotten that brute force is actually an algorithm. Thanks.Vasiliu
"brute force" is a property of algorithms, where you apply more computing power to get away with less thinking (which you could have used to find a less brute algorithm).Lubricious
What algorithm is used by computers to calculate logarithms?Osteogenesis
The implementation of a function such as the natural logarithm in any decent math library will keep the error below an ulp (unit of least precision). The goal of the implementor of a math library function is to find some optimal approximation, one that achieves the desired accuracy with as few calculations as possible. Taylor series are typically a poor choice because far too many terms are needed to achieve the desired accuracy.

The typical approach is to reduce the range from all representable real numbers to some very small region, and then use an optimal approximation that yields the desired accuracy for the function over this narrow range. The usual weapons of choice for this optimal approximation are a polynomial or a rational polynomial (a ratio of two polynomials). The implementation just contains the polynomial coefficients; those coefficients are constructed ahead of time by an optimizing technique such as the Remez exchange algorithm.
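To make that concrete, here is a minimal sketch (not taken from any particular library) of how a stored approximation is typically evaluated on the range-reduced argument: a short polynomial via Horner's rule, or a ratio of two such polynomials. The coefficient arrays are placeholders for illustration, not real minimax coefficients.

    /* Sketch: evaluating a stored approximation on a range-reduced argument r.
       The coefficient arrays are placeholders; a real library would hold
       minimax (Remez-optimized) coefficients for the target interval. */

    /* Plain polynomial c[0] + c[1]*r + ... + c[n-1]*r^(n-1), n >= 1, via Horner's rule. */
    static double poly_eval(const double c[], int n, double r)
    {
        double acc = c[n - 1];
        for (int i = n - 2; i >= 0; --i)
            acc = acc * r + c[i];
        return acc;
    }

    /* Rational polynomial: a low-degree numerator over a low-degree denominator. */
    static double rational_eval(const double num[], int nn,
                                const double den[], int nd, double r)
    {
        return poly_eval(num, nn, r) / poly_eval(den, nd, r);
    }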

In the case of the natural logarithm, there is an easy way to reduce the range. Real numbers are almost universally represented in terms of a mantissa and an exponent: x = m*2^p, where p is an integer and m is between 1 and 2. Thus log(x) = log(m) + p*log(2). The latter term, p*log(2), is just a multiplication by a known constant. The problem thus reduces to finding the logarithm of a number between 1 and 2 (or between 1/2 and 1). A further range reduction can be made by using the fact that √2 is logarithmically in the middle of [1,2). Thus all that is needed is a way to calculate the logarithm of a number between 1 and √2. This is typically done with a rational polynomial. The ratio of a second order polynomial to a third order polynomial works quite nicely for this.
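As a concrete illustration, here is a toy version of that reduction in C; it is a sketch under stated assumptions, not any library's actual implementation. It uses the standard frexp to split x into m*2^p with m in [0.5, 1), folds m so that it straddles 1 (as one of the comments below also suggests), and then approximates log(m) with a short truncated series in s = (m - 1)/(m + 1). A real library would replace that last step with a longer minimax (Remez-fitted) polynomial or rational polynomial over the same interval, and would also handle zero, negative, infinite, and NaN inputs.

    #include <math.h>
    #include <stdio.h>

    /* Toy sketch of the range reduction described above. Accurate to
       roughly 1e-8, not to an ulp; real libraries use longer minimax
       polynomials and handle 0, negatives, infinities, and NaNs.    */
    static double toy_log(double x)
    {
        const double inv_sqrt2 = 0.70710678118654752440;  /* 1/sqrt(2) */
        const double ln2       = 0.69314718055994530942;  /* log(2)    */

        int p;
        double m = frexp(x, &p);     /* x = m * 2^p, with m in [0.5, 1) */

        /* Fold m into [1/sqrt(2), sqrt(2)) so that it is centered on 1. */
        if (m < inv_sqrt2) {
            m *= 2.0;
            p -= 1;
        }

        /* log(m) = 2*atanh(s) = 2*(s + s^3/3 + s^5/5 + s^7/7 + ...),
           where s = (m - 1)/(m + 1) is small on the reduced range.   */
        double s    = (m - 1.0) / (m + 1.0);
        double s2   = s * s;
        double logm = 2.0 * s * (1.0 + s2 * (1.0/3.0 + s2 * (1.0/5.0 + s2 * (1.0/7.0))));

        return logm + (double)p * ln2;   /* add back p*log(2) */
    }

    int main(void)
    {
        for (double x = 0.1; x < 100.0; x *= 3.7)
            printf("x = %-8g  toy_log = %.12g  log = %.12g\n", x, toy_log(x), log(x));
        return 0;
    }

The folding step keeps |s| below about 0.18, which is why only a few series terms are needed here; the same reduction is what lets a real library get away with a low-degree optimal polynomial.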

Become answered 24/5, 2012 at 12:3 Comment(3)
.5 ULP is an ambitious goal; it is the same as correctly rounded, meaning the representable number nearest the mathematically exact result must be returned. The CRlibm project (lipforge.ens-lyon.fr/www/crlibm) seeks to do this, but it is incomplete (e.g., inverse hyperbolic functions are not provided). Faithful rounding (less than 1 ULP) is a more attainable goal, and typical libraries allow errors of several ULP. Rational functions are avoided since division is slow on common processors. E.g., a tangent might use a rational function, but sine and logarithm will use simple polynomials.Renferd
Good comment. 0.5 ULP is overly ambitious. Regarding rational polynomials versus simple ones: rationals can be better if the reduction in the degree more than offsets the increased cost of a division vs a multiplication. I found multiple implementations of log that use a rational polynomial. Log is a very nice function as far as approximation goes. The worst case errors occur with numbers near unity, and even here the error can be kept to within an ULP.Become
Reducing the range to [1.0, 2.0) produces large errors near log(0.999...), as David Hammen commented. The implementations I know of avoid this problem by reducing to a range that has 1.0 in the interior, like [2/3, 4/3) or [sqrt(0.5), sqrt(2.0)). In programming terms, it's just a couple of extra instructions to do this: if (m > constant) { p+=1; m*=0.5;}Graciagracie
