Best algorithm for series expansion of a rational function
I need to code a function in C++ that efficiently finds the coefficients of the Taylor series of a given rational function P(x) / Q(x).

The function's parameters will be the degree of the polynomials (equal in numerator and denominator), two arrays with the coefficients of the polynomials, and the number of terms in the expansion.

My idea was the following. Consider the identity

P(x) / Q(x) = R(x) + ...

where R(x) is a polynomial whose number of terms equals the number of coefficients I need to find. Then I can multiply both sides by Q(x), discard terms of order x^n and higher, and get

P(x) = R(x) * Q(x)

R(x) * Q(x) - P(x) = 0

Therefore, all coefficients should be zero. This is a system of equations that can be solved with an O(n^3) algorithm, which is not as fast as I wanted.

Is there any faster algorithm?

I know that the coefficients of the series satisfy a linear recurrence relation. This makes me think an O(n) algorithm is possible.

Maypole answered 15/4, 2014 at 19:19 Comment(6)
@DavidEisenstat Thanks, that simplifies the task. But how can I find 1 / Q(x) with long division? I thought division only gives a quotient and a remainder.Maypole
@DavidEisenstat Thanks! Avoiding the separation will also speed things up, because multiplication will be O(nm), where n is the number of terms and m the degree. By the way, I don't understand why my question is marked as too broad; it requests one specific algorithm.Maypole
@DavidEisenstat There you go :) Maybe we should start a topic about the state of [algorithm] on meta. There's definitely some general decision making to be done about whether to move these questions to CSBanal
@NiklasB. Feel free. I might participate, but (i) I think MSO is seriously broken and (ii) I don't appreciate answers from people whose SO rep/MSO rep ratio is in the single digits (or worse, less than 1).Irreformable
@DavidEisenstat And there you have it, your second argument is no longer valid :)Banal
@NiklasB. Feel free to link the question when it happens =PIrreformable
The algorithm that I'm about to describe is justified mathematically by formal power series. Every function with a Taylor series has a formal power series. The converse is not true, but if we do arithmetic on functions with Taylor series and get a function with a Taylor series, then we can do the same arithmetic with formal power series and get the same answer.

The long division algorithm for formal power series is like the long division algorithm that you may have learned in school. I'll demonstrate it on the example (1 + 2 x)/(1 - x - x^2), which has coefficients equal to the Lucas numbers.

The denominator must have a nonzero constant term. We start by writing the numerator, which is the first residual.

             --------
1 - x - x^2 ) 1 + 2 x

We divide the residual's lowest-order term (1) by the denominator's constant term (1) and put the quotient up top.

              1
             --------
1 - x - x^2 ) 1 + 2 x

Now we multiply 1 - x - x^2 by 1 and subtract it from the current residual.

              1
             --------
1 - x - x^2 ) 1 + 2 x
              1 -   x - x^2
              -------------
                  3 x + x^2

Do it again.

              1 + 3 x
             --------
1 - x - x^2 ) 1 + 2 x
              1 -   x -   x^2
              ---------------
                  3 x +   x^2
                  3 x - 3 x^2 - 3 x^3
                  -------------------
                        4 x^2 + 3 x^3

And again.

              1 + 3 x + 4 x^2
             ----------------
1 - x - x^2 ) 1 + 2 x
              1 -   x -   x^2
              ---------------
                  3 x +   x^2
                  3 x - 3 x^2 - 3 x^3
                  -------------------
                        4 x^2 + 3 x^3
                        4 x^2 - 4 x^3 - 4 x^4
                        ---------------------
                                7 x^3 + 4 x^4

And again.

              1 + 3 x + 4 x^2 + 7 x^3
             ------------------------
1 - x - x^2 ) 1 + 2 x
              1 -   x -   x^2
              ---------------
                  3 x +   x^2
                  3 x - 3 x^2 - 3 x^3
                  -------------------
                        4 x^2 + 3 x^3
                        4 x^2 - 4 x^3 - 4 x^4
                        ---------------------
                                7 x^3 + 4 x^4
                                7 x^3 - 7 x^4 - 7 x^5
                                ---------------------
                                       11 x^4 + 7 x^5

The individual divisions were kind of boring because I used a divisor with a leading 1, but if I had used, say, 2 - 2 x - 2 x^2, then all of the coefficients in the quotient would be divided by 2.
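For reference, the whole procedure fits in a few lines of C++. The sketch below (function and variable names are my own, not from any particular library) repeats the steps above: divide the residual's lowest-order term by the denominator's constant term, record the quotient coefficient, then subtract the scaled, shifted divisor from the residual.

```cpp
#include <cstddef>
#include <vector>

// Formal power series long division: returns the first n coefficients of
// num / den. Coefficients are stored in increasing order of degree, and
// den[0] must be nonzero (each step divides by it, as noted above).
std::vector<double> seriesDivide(std::vector<double> num,
                                 const std::vector<double>& den, int n) {
    num.resize(n + den.size(), 0.0);   // residual, padded with zeros
    std::vector<double> quotient(n);
    for (int k = 0; k < n; ++k) {
        double q = num[k] / den[0];    // lowest residual term / constant term
        quotient[k] = q;
        for (std::size_t j = 0; j < den.size(); ++j)
            num[k + j] -= q * den[j];  // subtract q * x^k * den from residual
    }
    return quotient;
}
```

Running it on the example above, `seriesDivide({1, 2}, {1, -1, -1}, 6)` produces the Lucas numbers 1, 3, 4, 7, 11, 18. Each of the n steps touches deg(Q) + 1 residual entries, so the cost is O(n * deg(Q)).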

Irreformable answered 15/4, 2014 at 20:24 Comment(1)
Thanks, this helps a lot! All the expansions I want to find have integer coefficients, which likely guarantees that the leading coefficient will be 1.Maypole
This can be done in O(n log n) time for arbitrary P and Q of degree n. More precisely, it can be done in O(M(n)), where M(n) is the complexity of polynomial multiplication, which itself is O(n log n) with FFT-based methods.

First off, the first n terms of a series expansion can be viewed simply as a polynomial of degree n-1.

Assume you are interested in the first n terms of the series expansion of P(x)/Q(x). There exists an algorithm that will compute the inverse of Q in M(n) time as defined above.

The inverse T(x) of Q(x) satisfies T(x) * Q(x) = 1 + O(x^n). That is, T(x) * Q(x) is precisely 1 plus an error term whose coefficients all come after the first n terms we are interested in, so we can just drop them.

Now P(x) / Q(x) is simply P(x) * T(x), which is just another polynomial multiplication.

You can find an implementation that computes the aforementioned inverse in my open-source library Altruct; see the series.h file. Assuming you already have a method that computes the product of two polynomials, the code that calculates the inverse is about 10 lines long (a variant of divide-and-conquer).

The actual algorithm is as follows. Assume Q(x) = 1 + a1*x + a2*x^2 + ...; if the constant term a0 is not 1, you can simply divide Q(x) by a0 first and divide its inverse T(x) by a0 afterwards.

Assume that at each step you have L terms of the inverse, so that Q(x) * T_L(x) = 1 + x^L * E_L(x) for some error term E_L(x). Initially T_1(x) = 1; plugging this in gives Q(x) * T_1(x) = Q(x) = 1 + x^1 * E_1(x) for some E_1(x), so the invariant holds for L = 1.

Now double L at each step. You can get E_L(x) from the previous step as E_L(x) = (Q(x) * T_L(x) - 1) / x^L, or, implementation-wise, by just dropping the first L coefficients of the product. You can then compute T_2L(x) = T_L(x) - x^L * E_L(x) * T_L(x), and the new error will be E_2L(x) = -E_L(x)^2. Let's now check that the induction step holds.

Q(x) * T_2L(x)
= Q(x) * (T_L(x) - x^L * E_L(x) * T_L(x))
= Q(x) * T_L(x) * (1 - x^L * E_L(x))
= (1 + x^L * E_L(x)) * (1 - x^L * E_L(x))
= 1^2 - (x^L * E_L(x))^2
= 1 + x^2L * E_2L(x)

Q.E.D.
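As an illustration only (this is my own sketch, not the Altruct code), the doubling step can be written in C++ as follows. A naive O(n^2) truncated multiplication is used for brevity; substituting an FFT-based multiply is what gives the stated O(n log n) bound.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Truncated polynomial product: first n coefficients of a * b.
// (Replace with an FFT-based multiply for the O(n log n) bound.)
std::vector<double> mulTrunc(const std::vector<double>& a,
                             const std::vector<double>& b, int n) {
    std::vector<double> c(n, 0.0);
    for (int i = 0; i < (int)a.size() && i < n; ++i)
        for (int j = 0; j < (int)b.size() && i + j < n; ++j)
            c[i + j] += a[i] * b[j];
    return c;
}

// First n coefficients of 1/Q; requires Q[0] == 1 (divide Q through first).
std::vector<double> seriesInverse(const std::vector<double>& Q, int n) {
    std::vector<double> T{1.0};                          // T_1 = 1
    for (int L = 1; L < n; L *= 2) {
        int L2 = std::min(2 * L, n);
        std::vector<double> prod = mulTrunc(Q, T, L2);   // = 1 + x^L * E_L
        std::vector<double> E(prod.begin() + L, prod.end()); // drop first L
        std::vector<double> corr = mulTrunc(E, T, L2 - L);   // E_L * T_L
        T.resize(L2, 0.0);
        for (std::size_t i = 0; i < corr.size(); ++i)
            T[L + i] -= corr[i];            // T_2L = T_L - x^L * E_L * T_L
    }
    T.resize(n);
    return T;
}
```

For example, `seriesInverse({1, -1, -1}, 5)` yields 1, 1, 2, 3, 5 — the Fibonacci numbers, as expected for 1/(1 - x - x^2).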

I am pretty sure polynomial division cannot be computed more efficiently than multiplication, and as you can see in the following table, this algorithm is only about 3 times slower than a single multiplication:

   n          mul         inv    factor
10^4        24 ms       80 ms     3.33x
10^5       318 ms      950 ms     2.99x
10^6     4,162 ms   12,258 ms     2.95x
10^7   101,119 ms  294,894 ms     2.92x
Romanist answered 24/10, 2016 at 23:43 Comment(0)
If you look closely at the system you'd get with your plan, you can see that it is already triangular (each equation introduces exactly one new unknown), and doesn't require O(n^3) to solve. It simply degenerates into a linear recurrence (P[], Q[] and R[] being the coefficients of the corresponding polynomials):

R[0] = P[0]/Q[0]
R[n] = (P[n] - sum{0..n-1}(R[i] * Q[n-i]))/Q[0]

Since Q is a polynomial, the sum has no more than deg(Q) terms (thus taking constant time to calculate per coefficient), making the overall complexity asymptotically linear in the number of terms. You may also look at the matrix representation of the recurrence for a (possibly) better asymptotic when only a single distant term is needed.
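A minimal C++ sketch of this recurrence (function and variable names are my own) might look like:

```cpp
#include <vector>

// Direct implementation of the recurrence above:
//   R[k] = (P[k] - sum_{i<k} R[i] * Q[k-i]) / Q[0],
// where the inner sum reaches back at most deg(Q) terms,
// so the total cost is O(n * deg(Q)). Requires Q[0] != 0.
std::vector<double> expandSeries(const std::vector<double>& P,
                                 const std::vector<double>& Q, int n) {
    std::vector<double> R(n, 0.0);
    for (int k = 0; k < n; ++k) {
        double s = (k < (int)P.size()) ? P[k] : 0.0;  // P[k], zero past deg(P)
        for (int j = 1; j < (int)Q.size() && j <= k; ++j)
            s -= R[k - j] * Q[j];                     // sum of R[i] * Q[k-i]
        R[k] = s / Q[0];
    }
    return R;
}
```

On the (1 + 2x)/(1 - x - x^2) example from the accepted answer, `expandSeries({1, 2}, {1, -1, -1}, 6)` produces the same Lucas numbers 1, 3, 4, 7, 11, 18, since this recurrence is exactly the long-division step written out coefficient by coefficient.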

Publia answered 15/4, 2014 at 22:21 Comment(3)
It is linear for one R(i) value; however, for all values R(0), ..., R(n) it becomes O(n^2).Maypole
No. It would become O(n^2) if Q had an infinite number of coefficients. The sum has no more terms than deg(Q), so it takes constant time to calculate. I will edit my answer - it is unclear. Thanks.Publia
Then it becomes O(n*q), where q is the degree of Q. Seems the same as in the accepted post.Maypole