How can I write a power function myself?
S

14

54

I was always wondering how I can make a function which calculates the power (e.g. 2^3) myself. In most languages these are included in the standard library, mostly as pow(double x, double y), but how can I write it myself?

I was thinking about for loops, but I think my brain got in a loop (when I wanted to do a power with a non-integer exponent, like 5^4.5, or negatives like 2^-21) and I went crazy ;)

So, how can I write a function which calculates the power of a real number? Thanks


Oh, maybe important to note: I cannot use functions which use powers (e.g. exp), which would make this ultimately useless.

Setter answered 21/5, 2010 at 14:1 Comment(2)
Do you want to implement in terms of the features available on a real-world processor - most include some form of exp(x) as a hardware instruction so pow is implemented in terms of exp and ln - or in terms of some machine which lacks these? If the latter, describe the desired instruction set.Copeck
possible duplicate of How to calculate an arbitrary power/root?Midden
B
54

Negative powers are not a problem, they're just the inverse (1/x) of the positive power.

Floating point powers are just a little bit more complicated; as you know, a fractional power is equivalent to a root (e.g. x^(1/2) == sqrt(x)), and you also know that multiplying powers with the same base is equivalent to adding their exponents.

With all the above, you can:

  • Decompose the exponent into an integer part and a fractional part.
  • Calculate the integer power with a loop (you can optimise it by decomposing the exponent into factors and reusing partial results).
  • Calculate the root with any algorithm you like (any iterative approximation like bisection or Newton's method will work).
  • Multiply the results together.
  • If the exponent was negative, apply the inverse.

Example:

2^(-3.5) = (2^3 * 2^(1/2))^(-1) = 1 / (2*2*2 * sqrt(2))
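Putting those steps together, here is a minimal Python sketch of the approach (my own illustration, assuming x > 0; newton_sqrt is a hand-rolled helper, and the fractional part of the exponent is consumed bit by bit through repeated square roots):

def newton_sqrt(a, iterations=40):
    # square root by Newton's method -- no library power functions
    guess = a if a >= 1 else 1.0
    for _ in range(iterations):
        guess = 0.5 * (guess + a / guess)
    return guess

def my_pow(x, y):
    # assumes x > 0
    if y < 0:
        return 1.0 / my_pow(x, -y)     # negative exponent: invert
    n = int(y)                         # integer part of the exponent
    f = y - n                          # fractional part, in [0, 1)
    result = 1.0
    for _ in range(n):                 # integer power with a loop
        result *= x
    # fractional power: binary digit k of f contributes x^(1/2^k),
    # obtained by taking repeated square roots of x
    root = x
    for _ in range(52):                # 52 bits covers a double's precision
        root = newton_sqrt(root)
        f *= 2
        if f >= 1.0:
            result *= root
            f -= 1.0
    return result

Any root-finding method would do for the square-root step; Newton's iteration is just the easiest to write down.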
Benner answered 21/5, 2010 at 14:15 Comment(2)
Nice approach, but it will become kind of hard to use integers when you are dealing with e.g. pow(10, 300).Canst
How do you decompose 2^(3.6) to get a result in bc?Denham
B
23

A^B = Log^-1(Log(A) * B)

Edit: yes, this definition really does provide something useful. For example, on an x86, it translates almost directly to FYL2X (Y * Log2(X)) and F2XM1 (2^x - 1):

fyl2x               ; st0 = y * log2(x)
fld st(0)           ; duplicate it
frndint             ; round the copy to an integer
fsubr st(1),st      ; st1 = int - full = -(fractional part)
fxch st(1)          ; bring the negated fraction to st0
fchs                ; st0 = fractional part of y*log2(x)
f2xm1               ; st0 = 2^fraction - 1 (valid only for -1.0..1.0)
fld1
faddp st(1),st      ; st0 = 2^fraction
fscale              ; scale by 2^int: st0 = 2^(y*log2(x)) = x^y
fstp st(1)

The code ends up a little longer than you might expect, primarily because F2XM1 only works with numbers in the range -1.0..1.0. The fld st(0)/frndint/fsubr st(1),st piece subtracts off the integer part, so we're left with only the fraction. We apply F2XM1 to that, add the 1 back on, then use FSCALE to handle the integer part of the exponentiation.

Borax answered 21/5, 2010 at 14:8 Comment(2)
One problem: log^-1(x) = exp(x) = e^xSetter
Very funny recursive definition: the inverse of the logarithm is also a power. I think Koning wants to calculate it with more elementary operations (addition, subtraction, multiplication, division, etc.)Benner
F
20

Typically the implementation of the pow(double, double) function in math libraries is based on the identity:

pow(x,y) = pow(a, y * log_a(x))

Using this identity, you only need to know how to raise a single number a to an arbitrary exponent, and how to take a logarithm base a. You have effectively turned a complicated multi-variable function into two functions of a single variable and a multiplication, which is pretty easy to implement. The most commonly chosen values of a are e or 2 -- e because e^x and log_e(1+x) have some very nice mathematical properties, and 2 because it has some nice properties for implementation in floating-point arithmetic.

The catch of doing it this way is that (if you want to get full accuracy) you need to compute the log_a(x) term (and its product with y) to higher accuracy than the floating-point representation of x and y. For example, if x and y are doubles, and you want to get a high accuracy result, you'll need to come up with some way to store intermediate results (and do arithmetic) in a higher-precision format. The Intel x87 format is a common choice, as are 64-bit integers (though if you really want a top-quality implementation, you'll need to do a couple of 96-bit integer computations, which are a little bit painful in some languages). It's much easier to deal with this if you implement powf(float,float), because then you can just use double for intermediate computations. I would recommend starting with that if you want to use this approach.


The algorithm that I outlined is not the only possible way to compute pow. It is merely the most suitable for delivering a high-speed result that satisfies a fixed a priori accuracy bound. It is less suitable in some other contexts, and is certainly much harder to implement than the repeated-square[root]-ing algorithm that some others have suggested.

If you want to try the repeated square[root] algorithm, begin by writing an unsigned integer power function that uses repeated squaring only. Once you have a good grasp on the algorithm for that reduced case, you will find it fairly straightforward to extend it to handle fractional exponents.
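For reference, here is what that starting point might look like in Python (a minimal sketch, names mine): repeated squaring driven by the bits of the exponent.

def upow(base, exponent):
    # unsigned (non-negative) integer exponent; O(log n) multiplications
    result = 1
    while exponent:
        if exponent & 1:       # this bit of the exponent is set:
            result *= base     #   fold the current square into the result
        base *= base           # square for the next bit
        exponent >>= 1
    return result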

Flaccid answered 21/5, 2010 at 14:22 Comment(0)
D
9

There are two distinct cases to deal with: Integer exponents and fractional exponents.

For integer exponents, you can use exponentiation by squaring.

def pow(base, exponent):
    if exponent == 0:
        return 1
    elif exponent < 0:
        return 1 / pow(base, -exponent)
    elif exponent % 2 == 0:
        half_pow = pow(base, exponent // 2)
        return half_pow * half_pow
    else:
        return base * pow(base, exponent - 1)

The second "elif" is what distinguishes this from the naïve pow function. It allows the function to make O(log n) recursive calls instead of O(n).

For fractional exponents, you can use the identity a^b = C^(b*log_C(a)). It's convenient to take C=2, so a^b = 2^(b * log2(a)). This reduces the problem to writing functions for 2^x and log2(x).

The reason it's convenient to take C=2 is that floating-point numbers are stored in base 2, so log2(a * 2^b) = log2(a) + b. This makes it easier to write your log2 function: you don't need it to be accurate for every positive number, just on the interval [1, 2). Similarly, to calculate 2^x, you can multiply 2^(integer part of x) * 2^(fractional part of x). The first part is trivial to store in a floating-point number; for the second part, you just need a 2^x function over the interval [0, 1).

The hard part is finding a good approximation of 2^x and log2(x). A simple approach is to use Taylor series.
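For instance, here is one way the 2^x half might look in Python (a rough sketch under my own assumptions: math.ldexp is used only to place the integer part into the float's binary exponent, and a short Taylor series of e^(t*ln 2) handles the fractional part over [0, 1)):

import math

LN2 = 0.6931471805599453            # ln(2), kept as a stored constant

def exp2_frac(t, terms=20):
    # Taylor series for 2^t = e^(t*ln 2), intended for t in [0, 1)
    z = t * LN2
    term, total = 1.0, 1.0
    for k in range(1, terms):
        term *= z / k               # builds z^k / k! incrementally
        total += term
    return total

def exp2(x):
    n = math.floor(x)               # integer part
    f = x - n                       # fractional part in [0, 1)
    # ldexp(m, n) = m * 2^n, done by adjusting the float's exponent field
    return math.ldexp(exp2_frac(f), n)

A log2 written over [1, 2), combined with this, gives the full pow.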

Deidradeidre answered 23/5, 2010 at 18:5 Comment(0)
R
8

By definition:

a^b = exp(b ln(a))

where exp(x) = 1 + x + x^2/2 + x^3/3! + x^4/4! + x^5/5! + ...

where n! = 1 * 2 * ... * n.

In practice, you could store an array of the first 10 values of 1/n!, and then approximate

exp(x) = 1 + x + x^2/2 + x^3/3! + ... + x^10/10!

because 10! is a huge number, so 1/10! is very small (2.7557319224⋅10^-7).
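A quick Python sketch of that (my illustration): precompute the reciprocals of the first ten factorials, then sum the truncated series. The truncation error is only small when |x| is modest; for large |x| you would reduce the argument first.

# precompute 1/1!, 1/2!, ..., 1/10!
INV_FACT = []
fact = 1.0
for n in range(1, 11):
    fact *= n
    INV_FACT.append(1.0 / fact)

def exp_taylor(x):
    # exp(x) ~ 1 + x + x^2/2! + ... + x^10/10!
    result, x_pow = 1.0, 1.0
    for inv in INV_FACT:
        x_pow *= x                 # x^n, built incrementally
        result += x_pow * inv
    return result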

Reproductive answered 21/5, 2010 at 14:54 Comment(2)
@Pindatjuh: Thanks for trying to make the formatting a bit nicer. But you accidentally (I hope) managed to get the formulae entirely wrong. When one writes x^n/n! one always means (x^n)/(n!), not x^(n/n!).Reproductive
Maclaurin (and Taylor) series, one of the most useful things you can learn in Calculus lessons :-)Benner
C
5

Wolfram functions gives a wide variety of formulae for calculating powers. Some of them would be very straightforward to implement.

Curl answered 21/5, 2010 at 14:13 Comment(0)
R
4

For positive integer powers, look at exponentiation by squaring and addition-chain exponentiation.

Roderickroderigo answered 21/5, 2010 at 14:22 Comment(0)
O
2

Using three self-implemented functions iPow(x, n), Ln(x) and Exp(x), I'm able to compute fPow(x, a), with x and a being doubles. None of the functions below uses library functions, just iteration.

Some explanation about functions implemented:

(1) iPow(x, n): x is double, n is int. This is a simple iteration, as n is an integer.

(2) Ln(x): This function uses the Taylor Series iteration. The series used in the iteration is Σ (from i = 0 to n) {(1 / (2 * i + 1)) * ((x - 1) / (x + 1)) ^ (2 * i + 1)}; the final sum is multiplied by 2, since this series converges to ln(x)/2. The symbol ^ denotes the power function iPow(x, n) implemented in the 1st function, which uses simple iteration.

(3) Exp(x): This function, again, uses the Taylor Series iteration. The series used in the iteration is Σ (from i = 0 to n) {x^i / i!}. Here, ^ denotes the power function, but it is not computed by calling the 1st iPow(x, n) function; instead it is built up within the 3rd function, concurrently with the factorial, using d *= x / i. I felt I had to use this trick because, in this function, the iteration takes somewhat more steps than in the other functions, and the factorial (i!) overflows most of the time. By iterating the power and the factorial together, I avoid the overflow.

(4) fPow(x, a): x and a are both doubles. This function does nothing but call the other three functions implemented above. The main idea in this function depends on some calculus: fPow(x, a) = Exp(a * Ln(x)). And now, I have all the functions iPow, Ln and Exp with iteration already.

n.b. I used a constant MAX_DELTA_DOUBLE in order to decide at which step to stop the iteration. I've set it to 1.0E-15, which seems reasonable for doubles. So, the iteration stops when (delta < MAX_DELTA_DOUBLE). If you need more precision, you can use long double and decrease the constant value of MAX_DELTA_DOUBLE, to 1.0E-18 for example (1.0E-18 would be the minimum).

Here is the code, which works for me.

#include <stdio.h>
#include <stdlib.h>

#define MAX_DELTA_DOUBLE 1.0E-15
#define EULERS_NUMBER 2.718281828459045

double MathAbs_Double (double x) {
    return ((x >= 0) ? x : -x);
}

int MathAbs_Int (int x) {
    return ((x >= 0) ? x : -x);
}

double MathPow_Double_Int(double x, int n) {
    double ret;
    if ((x == 1.0) || (n == 1)) {
        ret = x;
    } else if (n < 0) {
        ret = 1.0 / MathPow_Double_Int(x, -n);
    } else {
        ret = 1.0;
        while (n--) {
            ret *= x;
        }
    }
    return (ret);
}

double MathLn_Double(double x) {
    double ret = 0.0, d;
    if (x > 0) {
        int n = 0;
        do {
            int a = 2 * n + 1;
            d = (1.0 / a) * MathPow_Double_Int((x - 1) / (x + 1), a);
            ret += d;
            n++;
        } while (MathAbs_Double(d) > MAX_DELTA_DOUBLE);
    } else {
        printf("\nerror: x < 0 in ln(x)\n");
        exit(-1);
    }
    return (ret * 2);
}

double MathExp_Double(double x) {
    double ret;
    if (x == 1.0) {
        ret = EULERS_NUMBER;
    } else if (x < 0) {
        ret = 1.0 / MathExp_Double(-x);
    } else {
        int n = 2;
        double d;
        ret = 1.0 + x;
        do {
            d = x;
            for (int i = 2; i <= n; i++) {
                d *= x / i;
            }
            ret += d;
            n++;
        } while (d > MAX_DELTA_DOUBLE);
    }
    return (ret);
}

double MathPow_Double_Double(double x, double a) {
    double ret;
    if ((x == 1.0) || (a == 1.0)) {
        ret = x;
    } else if (a < 0) {
        ret = 1.0 / MathPow_Double_Double(x, -a);
    } else {
        ret = MathExp_Double(a * MathLn_Double(x));
    }
    return (ret);
}
Overcompensation answered 3/11, 2014 at 8:49 Comment(2)
Don't use Taylor series to approximate a function over an interval: lolengine.net/blog/2011/12/21/better-function-approximationsJoanne
I've read the document in the link. Though the idea in the document depends on some scientific results, the iteration in my code above takes at most 15 to 20 steps (till delta < 1.0E-15). I've not tested thoroughly, but as the number of significant figures (after the decimal point) one can handle for double types in C is 15 at most, I'm a little convinced that the code above does its job as assumed.Overcompensation
P
1

It's an interesting exercise. Here are some suggestions, which you should try in this order:

  1. Use a loop.
  2. Use recursion (not better, but interesting nonetheless)
  3. Optimize your recursion vastly by using divide-and-conquer techniques
  4. Use logarithms
Pilocarpine answered 21/5, 2010 at 14:3 Comment(1)
Do you know any algorithms that exist? ThanksSetter
B
1

You can write the pow function like this:

static double pows (double p_nombre, double p_puissance)
{
    double nombre = 1.0;
    double i;
    for (i = 0; i < p_puissance; i++) {
        nombre = nombre * p_nombre;
    }
    return (nombre);
}

You can write the floor function like this:

static double floors(double p_nomber)
{
    double x = p_nomber;
    long partent = (long) x;

    /* for negative non-integers, truncation rounded toward zero: step down */
    if (x < 0 && x != partent)
    {
        return (partent - 1);
    }
    else
    {
        return (partent);
    }
}

Best regards

Berkley answered 16/12, 2015 at 7:51 Comment(1)
the pows function as implemented here only handles integer exponents.Canner
D
0

A better algorithm for efficiently calculating positive integer powers is to repeatedly square the base, while keeping track of the extra remainder multiplicands. Here is a sample solution in Python that should be relatively easy to understand and translate into your preferred language:

def power(base, exponent):
  if exponent == 0:
    return 1          # anything to the 0th power is 1

  remaining_multiplicand = 1
  result = base

  while exponent > 1:
    remainder = exponent % 2
    if remainder > 0:
      remaining_multiplicand = remaining_multiplicand * result
    exponent = (exponent - remainder) // 2  # integer division keeps exponent an int
    result = result * result

  return result * remaining_multiplicand

To make it handle negative exponents, all you have to do is calculate the positive version and divide 1 by the result, so that should be a simple modification to the above code. Fractional exponents are considerably more difficult, since they essentially mean calculating an nth root of the base, where n = 1/abs(exponent % 1), and multiplying the result by the result of the integer-portion power calculation:

power(base, exponent - (exponent % 1))

You can calculate roots to a desired level of accuracy using Newton's method. Check out the Wikipedia article on the algorithm.
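Here is a sketch of that root step in Python (my own illustration), reusing the power() function above inside Newton's iteration for the n-th root, x_new = ((n-1)*x + a / x^(n-1)) / n:

def nth_root(a, n, iterations=60):
    # Newton's method for the n-th root of a (assumes a > 0, integer n >= 1)
    x = a if a >= 1 else 1.0       # crude but safe starting guess
    for _ in range(iterations):
        x = ((n - 1) * x + a / power(x, n - 1)) / n
    return x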

Dolor answered 21/5, 2010 at 14:45 Comment(3)
I meant to mention that the reason that repeated squaring is better is because it is far fewer steps for large powers than a simple iterating loop. If you are familiar with big O notation, the naive solution is O(n), while the repeated squaring approach is O(log(n))Dolor
this power function only handles integer exponentsCanner
Fractional exponents are usually calculated using exp(log(x)*y), not by the weird roots method you mention.Refuge
B
0

I am using fixed-point long arithmetic, and my pow is log2/exp2-based. Numbers consist of:

  • int sig = { -1; +1 } signum
  • DWORD a[A+B] number
  • A is number of DWORDs for integer part of number
  • B is number of DWORDs for fractional part

My simplified solution is this:

//---------------------------------------------------------------------------
longnum exp2 (const longnum &x)
{
    int i,j;
    longnum c,d;
    c.one();
    if (x.iszero()) return c;
    i=x.bits()-1;
    for (d=2,j=_longnum_bits_b;j<=i;j++,d*=d)
        if (x.bitget(j))
            c*=d;
    for (i=0,j=_longnum_bits_b-1;i<_longnum_bits_b;j--,i++)
        if (x.bitget(j))
            c*=_longnum_log2[i];
    if (x.sig<0) {d.one(); c=d/c;}
    return c;
}
//---------------------------------------------------------------------------
longnum log2 (const longnum &x)
{
    int i,j;
    longnum c,d,dd,e,xx;
    c.zero(); d.one(); e.zero(); xx=x;
    if (xx.iszero()) return c; //**** error: log2(0) = infinity
    if (xx.sig<0) return c; //**** error: log2(negative x) ... no result possible
    if (d.geq(x,d)==0) {xx=d/xx; xx.sig=-1;}
    i=xx.bits()-1;
    e.bitset(i); i-=_longnum_bits_b;
    for (;i>0;i--,e>>=1) // integer part
    {
        dd=d*e;
        j=dd.geq(dd,xx);
        if (j==1) continue; // dd> xx
        c+=i; d=dd;
        if (j==2) break; // dd==xx
    }
    for (i=0;i<_longnum_bits_b;i++) // fractional part
    {
        dd=d*_longnum_log2[i];
        j=dd.geq(dd,xx);
        if (j==1) continue; // dd> xx
        c.bitset(_longnum_bits_b-i-1); d=dd;
        if (j==2) break; // dd==xx
    }
    c.sig=xx.sig;
    c.iszero();
    return c;
}
//---------------------------------------------------------------------------
longnum pow (const longnum &x,const longnum &y)
{
    //x^y = exp2(y*log2(x))
    int ssig=+1; longnum c; c=x;
    if (y.iszero()) {c.one(); return c;} // ?^0=1
    if (c.iszero()) return c; // 0^?=0
    if (c.sig<0)
    {
        c.overflow(); c.sig=+1;
        if (y.isreal()) {c.zero(); return c;} //**** error: negative x ^ noninteger y
        if (y.bitget(_longnum_bits_b)) ssig=-1;
    }
    c=exp2(log2(c)*y); c.sig=ssig; c.iszero();
    return c;
}
//---------------------------------------------------------------------------

where:

_longnum_bits_a = A*32
_longnum_bits_b = B*32
_longnum_log2[i] = 2 ^ (1/(2^(i+1)))  ... precomputed sqrt table:
_longnum_log2[0] = sqrt(2)
_longnum_log2[1] = sqrt(_longnum_log2[0])
_longnum_log2[i] = sqrt(_longnum_log2[i-1])
longnum::zero() sets *this=0
longnum::one() sets *this=+1
bool longnum::iszero() returns (*this==0)
bool longnum::isnonzero() returns (*this!=0)
bool longnum::isreal() returns (true if fractional part !=0)
bool longnum::isinteger() returns (true if fractional part ==0)
int longnum::bits() return num of used bits in number counted from LSB
longnum::bitget()/bitset()/bitres()/bitxor() are bit access
longnum::overflow() rounds the number if there was an overflow X.FFFFFFFFFF...FFFFFFFFF??h -> (X+1).0000000000000...000000000h
int longnum::geq(x,y) compares |x|,|y|; returns 0,1,2 for (<,>,==)

All you need to understand this code is that a number in binary form consists of a sum of powers of 2, so when you need to compute 2^num, it can be rewritten as:

  • 2^(b(-n)*2^(-n) + ... + b(+m)*2^(+m))

where n counts the fractional bits and m the integer bits. Multiplication/division by 2 in binary form is a simple bit shift, so if you put it all together you get code for exp2 similar to mine. log2 is based on binary search, changing the result bits from MSB to LSB until the value matches the searched one (a very similar algorithm to the one used for fast sqrt computation); a plain sketch of that idea follows below. Hope this helps clarify things.
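To make that log2 step concrete, here is a plain-float Python sketch of the same bit-at-a-time idea (my illustration, not the author's longnum code; names are mine). Normalizing x into [1, 2) yields the integer bits; after that, squaring x doubles its remaining logarithm, so each squaring decides one fractional bit from MSB to LSB:

def log2_bits(x, frac_bits=40):
    # assumes x > 0
    n = 0
    while x >= 2.0:            # shift x down into [1, 2) ...
        x /= 2.0
        n += 1
    while x < 1.0:             # ... or up, tracking the integer part in n
        x *= 2.0
        n -= 1
    result = float(n)
    bit = 0.5
    for _ in range(frac_bits):
        x *= x                 # squaring doubles log2(x)
        if x >= 2.0:           # the next fractional bit is 1
            result += bit
            x /= 2.0
        bit /= 2.0
    return result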

Botti answered 11/8, 2013 at 7:26 Comment(1)
P.S. I am using base 2 for log/exp because of the simplicity of computation (bases 10 and e are not suitable for fast computation with binary numbers; if I had used BCD-coded numbers, then base 10 would be the base used)Botti
G
0

A lot of approaches are given in other answers. Here is something that I thought may be useful in the case of integral powers.

In the case of an integer power x of n, i.e. n^x, the straightforward approach would take x-1 multiplications. In order to optimize this, we can use dynamic programming and reuse an earlier multiplication result to avoid all x multiplications. For example, in 5^9, we can, say, make batches of 3, i.e. calculate 5^3 once, get 125, and then cube 125 using the same logic, taking only 4 multiplications in the process, instead of 8 multiplications with the straightforward way.

The question is what is the ideal size of the batch b so that the number of multiplications is minimum. So let's write the equation for this. If f(x,b) is the function representing the number of multiplications entailed in calculating nx using the above method, then

> f(x,b) = (x/b - 1) + (b-1)

Explanation: A product of a batch of p numbers takes p-1 multiplications. If we divide the x multiplicands into b batches, (x/b)-1 multiplications are required inside each batch, and b-1 multiplications are required to combine the b batches.

Now we can calculate the first derivative of this function with respect to b and equate it to 0 to get the b for the least number of multiplications.

f'(x,b) = -x/b^2 + 1 = 0

Solving for b gives b = sqrt(x).

Now put back this value of b into the function f(x,b) to get the least number of multiplications:

f(x, sqrt(x)) = (x/sqrt(x) - 1) + (sqrt(x) - 1) = 2*sqrt(x) - 2

For all positive x, this value is less than the number of multiplications required by the straightforward way.
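As a sanity check, here is a small Python sketch of the batching idea (my illustration; it assumes b divides x evenly):

def batched_pow(n, x, b):
    # compute n^b first: b-1 multiplications
    batch = n
    for _ in range(b - 1):
        batch *= n
    # then raise that to the x/b power: x/b - 1 more multiplications
    result = batch
    for _ in range(x // b - 1):
        result *= batch
    return result

# the example from the text: 5^9 with b = 3 takes (3-1) + (9/3-1) = 4 multiplications
assert batched_pow(5, 9, 3) == 5 ** 9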

Ginn answered 6/8, 2016 at 8:46 Comment(0)
H
0

Maybe you can use Taylor series expansion. The Taylor series of a function is an infinite sum of terms that are expressed in terms of the function's derivatives at a single point. For most common functions, the function and the sum of its Taylor series are equal near this point. Taylor series are named after Brook Taylor, who introduced them in 1715.

Hockenberry answered 5/10, 2020 at 18:8 Comment(0)
