Optimize conversion between list of integer coefficients and its long integer representation

I'm trying to optimize a polynomial implementation of mine. In particular I'm dealing with polynomials whose coefficients are taken modulo n (which might be > 2^64) and which are reduced modulo a polynomial of the form x^r - 1 (with r < 2^64). At the moment I represent the coefficients as a list of integers (*) and I've implemented all the basic operations in the most straightforward way.

I'd like exponentiation and multiplication to be as fast as possible, and to achieve that I've already tried different approaches. My current approach is to pack the lists of coefficients into huge integers, multiply the integers, and unpack the result back into coefficients.

The problem is that packing and unpacking takes a lot of time.

So, is there a way of improving my "pack/unpack" functions?

def _coefs_to_long(coefs, window):
    '''Given a sequence of coefficients *coefs* and the *window* size return a
    long-integer representation of these coefficients.
    '''

    res = 0
    adder = 0
    for k in coefs:
        res += k << adder
        adder += window
    return res
    #for k in reversed(coefs): res = (res << window) + k is slower


def _long_to_coefs(long_repr, window, n):
    '''Given a long-integer representing coefficients of size *window*, return
    the list of coefficients modulo *n*.
    '''

    mask = 2**window - 1
    coefs = [0] * (long_repr.bit_length() // window + 1)
    for i in xrange(len(coefs)):
        coefs[i] = (long_repr & mask) % n
        long_repr >>= window

    # ensure that the returned list is never empty and has no extra trailing 0.
    if not coefs:
        coefs.append(0)
    elif not coefs[-1] and len(coefs) > 1:
        coefs.pop()

    return coefs
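
For context, here is a minimal sketch (not part of my actual implementation) of how these two helpers fit together to multiply two polynomials through the packed representation. The helper name and the window value are only illustrative, and the window must be wide enough that no coefficient of the product overflows it:

def _poly_mul_mod(a, b, window, n):
    # Pack both operands, multiply them as plain Python integers, then unpack
    # the product and reduce the coefficients modulo n.
    packed = _coefs_to_long(a, window) * _coefs_to_long(b, window)
    return _long_to_coefs(packed, window, n)

# (1 + 2x) * (3 + 4x) = 3 + 10x + 8x^2, with coefficients reduced modulo 17
print(_poly_mul_mod([1, 2], [3, 4], window=16, n=17))   # -> [3, 10, 8]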

Note that I do not choose n: it is given as input by the user, and my program wants to prove its primality (using the AKS test), so I can't factor it.


(*) I've tried several approaches:

  1. Using a numpy array instead of a list and multiplying with numpy.convolve. It's fast for n < 2^64 but terribly slow for n > 2^64 [also, I'd like to avoid using external libraries].
  2. Using scipy.fftconvolve. It doesn't work at all for n > 2^64.
  3. Representing the coefficients as a single integer from the start (without converting back and forth every time). The problem is that I don't know an easy way to do the mod x^r - 1 operation without converting the integer back into a list of coefficients (which defeats the purpose of that representation); a sketch of the list-based reduction is shown right after this list.
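
For clarity, here is the list-based reduction mentioned in point 3; it's just index folding, since x^i ≡ x^(i mod r) (mod x^r - 1). This is only a sketch and the helper name is mine:

def _mod_xr_minus_1(coefs, r, n):
    # Fold coefficient i onto position i % r, reducing each sum modulo n.
    res = [0] * r
    for i, c in enumerate(coefs):
        res[i % r] = (res[i % r] + c) % n
    return res

# 1 + x^3 + x^5 modulo (x^3 - 1), with coefficients modulo 7  ->  2 + x^2
print(_mod_xr_minus_1([1, 0, 0, 1, 0, 1], 3, 7))   # -> [2, 0, 1]
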
Tanana answered 12/9, 2012 at 21:1 Comment(17)
Probably you should narrow down the scope of the question to some reasonable, answerable scale of problem.Pfosi
Yes, I was thinking this too. I'll edit my question when I have some time and point out exactly what I'd like to optimize.Tanana
I don't know whether this solves the entire problem, but if you look for "Implicit Manipulation of Polynomials Using Zero-Suppressed BDDs", you will find a technique to efficiently manipulate polynomials, including testing for equality.Paletot
Can you use numpy or other numerical library?Palazzo
@przemo_li: I've actually tried to use numpy and numpy.convolve for multiplication (and it's written in my question), but it's actually slower than this implementation [take into account that I need to work with big integers for the coefficients too; with word-size integers numpy is a lot faster]. @harold: I'll now try to see.Tanana
@Tanana how did that go? Is the ZDD solution applicable to this problem?Paletot
@harold I didn't try it. Did not have much time lately. I searched for that article and found it only on sale. Maybe you know if there is some free-published version on-line? Eventually I'll buy it. Maybe before I'll look into BDDs myself.Tanana
@Tanana here you go: cecs.uci.edu/~papers/compendium94-03/papers/1995/edt95/pdffiles/… It's a short paper that doesn't explain the nuts and bolts of a 0-suppressed BDD, but you can find that elsewherePaletot
Wow, thanks. I'll look into it when I have time(and unfortunately this means next week 'cause I'm busy this week-end).Tanana
@harold I've tried to play around with ZBDDs but I do not think that they are much applicable to my use case. The problem is that it's not easy to implement modulo operations on that representation. Anyway, I think they are really interesting and probably really useful in other contexts.Tanana
@Tanana oh, too bad, I thought they were promisingPaletot
related: AKS Primes algorithm in PythonMolluscoid
@J.F.Sebastian Probably that link could be useful for other people that want to implement AKS, but it's of little help to me since there isn't anything python-related in the answer and that's what I'm interested in. I already have a decent flint implementation, I just want to push the pure-python approach to its limits.Tanana
@Bakuriu: optimization advice that is also applicable to pure Python: do less. In particular, you could use algorithm/data representation that doesn't require frequent conversion back-and-forth i.e., if you read the links (whether they use pseudo-code, C++ or some other language) you might find something that can help you to eliminate the above two function from your question. Execution time of the code that is not there is zero. Even if you find nothing; a better understanding and familiarity with different approaches to the same problem could give you other optimization ideas.Molluscoid
@J.F.Sebastian The fact is that I've already read them and tried those approaches, and I've found out that my approach is faster.Tanana
I suppose recommending PyPy won't help you much. This is exactly the type of code that CPython is exceedingly bad at and PyPy is very good at. You should get C ballpark performance if you manage your memory allocations well.Avis
@AntsAasma I actually tried the code on PyPy and found that it was significantly slower than CPython. But I'm not a PyPy user, so probably I wrote the code in a way that PyPy does not handle well. Anyway, my question is much more about CPython and "algorithm optimization" than about using a specific implementation.Tanana

I found a way to optimize the conversions, even though I still hope that someone can help me improve them further, and hopefully come up with some other clever idea.

Basically, what's wrong with those functions is that they exhibit a kind of quadratic memory-allocation behaviour, both when packing the integer and when unpacking it. (See this post by Guido van Rossum for another example of this kind of behaviour.)

After I realized this I decided to give divide and conquer a try, and I've obtained some results. I simply split the array into two halves, convert them separately, and finally join the results (later I'll try an iterative version similar to f5 in van Rossum's post [edit: it doesn't seem to be much faster]).

The modified functions:

def _coefs_to_long(coefs, window):
    """Given a sequence of coefficients *coefs* and the *window* size return a
    long-integer representation of these coefficients.
    """

    length = len(coefs)
    if length < 100:
        res = 0
        adder = 0
        for k in coefs:
            res += k << adder
            adder += window
        return res
    else:
        half_index = length // 2
        big_window = window * half_index
        low = _coefs_to_long(coefs[:half_index], window)
        high = _coefs_to_long(coefs[half_index:], window)
        return low + (high << big_window)


def _long_to_coefs(long_repr, window, n):
    """Given a long-integer representing coefficients of size *window*, return
    the list of coefficients modulo *n*.
    """

    win_length = long_repr.bit_length() // window
    if win_length < 256:
        mask = 2**window - 1
        coefs = [0] * (long_repr.bit_length() // window + 1)
        for i in xrange(len(coefs)):
            coefs[i] = (long_repr & mask) % n
            long_repr >>= window

        # ensure that the returned list is never empty and has no extra trailing 0.
        if not coefs:
            coefs.append(0)
        elif not coefs[-1] and len(coefs) > 1:
            coefs.pop()

        return coefs
    else:
        half_len = win_length // 2
        low = long_repr & (((2**window) ** half_len) - 1)
        high = long_repr >> (window * half_len)
        return _long_to_coefs(low, window, n) + _long_to_coefs(high, window, n) 

And the results:

>>> import timeit
>>> def coefs_to_long2(coefs, window):
...     if len(coefs) < 100:
...         return coefs_to_long(coefs, window)
...     else:
...         half_index = len(coefs) // 2
...         big_window = window * half_index
...         least = coefs_to_long2(coefs[:half_index], window) 
...         up = coefs_to_long2(coefs[half_index:], window)
...         return least + (up << big_window)
... 
>>> coefs = [1, 2, 3, 1024, 256] * 567
>>> # original function
>>> timeit.timeit('coefs_to_long(coefs, 11)', 'from __main__ import coefs_to_long, coefs',
...               number=1000)/1000
0.003283214092254639
>>> timeit.timeit('coefs_to_long2(coefs, 11)', 'from __main__ import coefs_to_long2, coefs',
...               number=1000)/1000
0.0007998988628387451
>>> 0.003283214092254639 / _
4.104536516782767
>>> coefs = [2**64, 2**31, 10, 107] * 567
>>> timeit.timeit('coefs_to_long(coefs, 66)', 'from __main__ import coefs_to_long, coefs',
...               number=1000)/1000
0.009775240898132325
>>> 
>>> timeit.timeit('coefs_to_long2(coefs, 66)', 'from __main__ import coefs_to_long2, coefs',
...               number=1000)/1000
0.0012255229949951173
>>> 
>>> 0.009775240898132325 / _
7.97638309362875

As you can see, this version gives quite a speedup to the conversion, from 4 to 8 times faster (and the bigger the input, the bigger the speedup). A similar result is obtained with the second function:

>>> import timeit
>>> def long_to_coefs2(long_repr, window, n):
...     win_length = long_repr.bit_length() // window
...     if win_length < 256:
...         return long_to_coefs(long_repr, window, n)
...     else:
...         half_len = win_length // 2
...         least = long_repr & (((2**window) ** half_len) - 1)
...         up = long_repr >> (window * half_len)
...         return long_to_coefs2(least, window, n) + long_to_coefs2(up, window, n)
... 
>>> long_repr = coefs_to_long([1,2,3,1024,512, 0, 3] * 456, 13)
>>> # original function
>>> timeit.timeit('long_to_coefs(long_repr, 13, 1025)', 'from __main__ import long_to_coefs, long_repr', number=1000)/1000
0.005114212036132813
>>> timeit.timeit('long_to_coefs2(long_repr, 13, 1025)', 'from __main__ import long_to_coefs2, long_repr', number=1000)/1000
0.001701267957687378
>>> 0.005114212036132813 / _
3.006117885794327
>>> long_repr = coefs_to_long([1,2**33,3**17,1024,512, 0, 3] * 456, 40)
>>> timeit.timeit('long_to_coefs(long_repr, 13, 1025)', 'from __main__ import long_to_coefs, long_repr', number=1000)/1000
0.04037192392349243
>>> timeit.timeit('long_to_coefs2(long_repr, 13, 1025)', 'from __main__ import long_to_coefs2, long_repr', number=1000)/1000
0.005722791910171509
>>> 0.04037192392349243 / _
7.0545853417694

I've tried to avoid further memory reallocation in the first function by passing around start and end indexes instead of slicing, but it turns out that this slows the function down quite a lot for small inputs and is still a bit slower for real-case inputs. Maybe I could try to mix the two approaches, even though I don't think I'd obtain much better results.


I've edited my question over time, so some people gave me advice aimed at an earlier version of it. I think it's important to summarize the findings pointed out by the different sources in the comments and the answers, so that they can be useful to other people looking to implement fast polynomials and/or the AKS test.

  • As J.F. Sebastian pointed out, the AKS algorithm has received many improvements, so implementing an old version of the algorithm will always result in a very slow program. This does not change the fact that, if you already have a good implementation of AKS, you can speed it up by improving the polynomial arithmetic.
  • If you are interested in coefficients modulo a small n (read: a word-size number) and you don't mind external dependencies, then go for numpy and use numpy.convolve or scipy.fftconvolve for the multiplication. It will be much faster than anything you can write yourself. Unfortunately, if n is not word-size you can't use scipy.fftconvolve at all, and numpy.convolve also becomes slow as hell.
  • If you don't have to do modulo operations (on the coefficients and on the polynomial), then using ZBDDs is probably a good idea (as pointed out by harold), even though I can't promise spectacular results [I do think they are really interesting and you ought to read Minato's paper].
  • If you don't have to do modulo operations on the coefficients, then an RNS representation, as suggested by Origin, is probably a good idea. You can then combine multiple numpy arrays to operate efficiently.
  • If you want a pure-Python implementation of polynomials with coefficients modulo a big n, then my solution seems to be the fastest, even though I did not try to implement FFT multiplication of coefficient arrays in Python (which might be faster).
Tanana answered 21/10, 2012 at 12:2 Comment(0)

Unless you're doing this to learn, why reinvent the wheel? A different approach would be to write a Python wrapper for some other polynomial library or program, if such a wrapper doesn't already exist.

Try PARI/GP. It's surprisingly fast. I recently wrote some custom C code, which took me two days to write and turned out to be only 3 times faster than a two-line PARI/GP script. I would bet that Python code calling PARI would end up being faster than whatever you implement in Python alone. There's even a module for calling PARI from Python: https://code.google.com/p/pari-python/
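
If you'd rather not depend on the wrapper, even shelling out to the gp binary from Python is enough for a quick experiment. This is only a rough sketch: it assumes the gp executable is installed and on your PATH, and the helper name is mine:

import subprocess

def gp_eval(expr):
    # Pipe a single GP expression to the gp interpreter and return what it prints.
    proc = subprocess.Popen(['gp', '-q'], stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE)
    out, _ = proc.communicate((expr + '\n').encode())
    return out.decode().strip()

# (x + 1)^5 reduced modulo (x^3 - 1), with coefficients modulo 7;
# should print 4*x^2 + 3*x + 4 (up to formatting).
print(gp_eval('lift(lift(Mod(Mod(1, 7)*(x + 1), x^3 - 1)^5))'))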

Essary answered 20/9, 2012 at 14:27 Comment(2)
I'm looking for such optimizations because I also want to learn. I've already written a C extension that uses the flint library to do the computations and it's from 10 to 80 times faster depending on the task (and I already know some ways of optimizing this C implementation; in particular, since I know the maximum size of the polynomials, I could use the _-prefixed functions and avoid a lot of reallocation). Thank you for pointing out PARI/GP, I didn't know it and I'll look into it. Anyway, I'd appreciate a pure-Python efficient implementation of polynomials.Tanana
Fair enough! Probably what I wrote should have been a comment and not an answer -- I'm still getting used to Stack Overflow.Essary

You could try using a residue number system (RNS) to represent the coefficients of your polynomial. You would still split your coefficients into smaller integers as you do now, but you wouldn't need to convert them back to a huge integer to do multiplications or other operations. This should not require much reprogramming effort.

The basic principle of a residue number system is the unique representation of numbers through modular arithmetic. The theory surrounding RNS allows you to do your operations directly on the small residues.

edit: a quick example:

Suppose you represent your large coefficients in an RNS with moduli 11 and 13. Your coefficients would then each consist of 2 small integers (< 11 and < 13) that can be combined back into the original (large) integer.

Suppose your polynomial is originally 33x² + 18x + 44. In RNS, the coefficients would respectively be (33 mod 11, 33 mod 13), (18 mod 11, 18 mod 13) and (44 mod 11, 44 mod 13) => (0,7), (7,5) and (0,5).

Multiplying your polynomial by a constant can then be done by multiplying each small residue by that constant and reducing it modulo the corresponding modulus.

Say you multiply by 3: your coefficients become (0, 21 mod 13) = (0,8), (21 mod 11, 15 mod 13) = (10,2) and (0 mod 11, 15 mod 13) = (0,2). There is no need to convert the coefficients back to their large-integer form.

To check that our multiplication has worked, we can convert the new coefficients back to their large representation. This requires 'solving' each set of residues as a modular system. For the first coefficient (0,8) we would need to solve x mod 11 = 0 and x mod 13 = 8. This should not be too hard to implement. In this example you can see that x = 99 is a valid solution (modulo 11*13).

We then get 99x² + 54x + 132, the correctly multiplied polynomial. Multiplying by other polynomials is similar (but requires you to multiply the coefficients with each other pairwise). The same goes for addition.

For your use case, you could choose your moduli based on the number of residues you want per coefficient or on their size.
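
To make the example concrete, here is a minimal sketch in Python with the same moduli (the helper names are mine, and the reconstruction is a naive search, only suitable for toy sizes):

MODULI = (11, 13)   # pairwise coprime; their product bounds the representable range

def to_rns(x):
    # A number is represented by its residues modulo each modulus.
    return tuple(x % m for m in MODULI)

def rns_mul(a, b):
    # Multiply component-wise; no large-integer arithmetic is involved.
    return tuple((x * y) % m for x, y, m in zip(a, b, MODULI))

def from_rns(residues):
    # Naive CRT reconstruction by exhaustive search (fine for a toy example).
    x = 0
    while not all(x % m == r for m, r in zip(MODULI, residues)):
        x += 1
    return x

# 33x^2 + 18x + 44 times the constant 3, done coefficient by coefficient in RNS
coefs = [to_rns(c) for c in (33, 18, 44)]
product = [rns_mul(c, to_rns(3)) for c in coefs]
print([from_rns(c) for c in product])   # -> [99, 54, 132]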

Astra answered 18/10, 2012 at 21:28 Comment(6)
Can you point me to some article/book that explains these RNS a bit more? Anyway, you mean that I should represent each coefficient by a certain number of smaller integers and then operate on an array of these smaller numbers (which would probably allow me to use numpy)?Tanana
I don't have any specific reference but there are numerous tutorials out there like this, this and this. Any of them should get you started. I do think numpy should be feasible.Astra
The basis of the algorithm is the Chinese Remainder Theorem, which guarantees the unique representation of a number in RNS. Its proof is also quite interesting if you want to learn more about it.Astra
Could you explain how to perform the modulo n operation in an efficient way? To me it seems that I have to convert each coefficient to decimal, take the modulo and then re-convert to RNS. But this kind of conversion is not efficient.Tanana
I added an example. You should do all your operations on the (..,..) coefficients and only convert them back to their large size when you need themAstra
Sorry, but I doubt this is a suitable solution for me. n is given as input by the user and my program wants to prove its primality, so I cannot just factorize it to get the small moduli for the RNS representation. I would have to choose a set of moduli that can represent all numbers up to n, and then all the operations must still be done modulo n, which I don't think is efficient with this representation.Tanana

How about directly implementing arbitrary precision integer polynomials as a list of numpy arrays?

Let me explain: say your polynomial is Σ_p A_p X^p. If the large integer A_p can be represented as A_p = Σ_k A_{p,k} 2^(64k), then the k-th numpy array will contain the 64-bit int A_{p,k} at position p.

You could choose dense or sparse arrays according to the structure of your problem.

Implementing addition and scalar operations is just a matter of vectorizing the bignum implementation of the same operations.

Multiplication could be handled as follows: AB = Σ_{p,k,p',k'} A_{p,k} B_{p',k'} 2^(64(k+k')) X^(p+p'). So a naive implementation with dense arrays leads to about log_64(n)^2 calls to numpy.convolve or scipy.fftconvolve.

The modulo operation should be easy to implement, since it is a linear function of its left-hand term and the right-hand term (X^r - 1) has small coefficients.

EDIT here are some more explanations

Instead of representing the polynomial as a list of arbitrary precision numbers (themselves represented as lists of 64-bit "digits"), transpose the representation so that:

  • your polynomial is represented as a list of arrays
  • the k-th array contains the k-th "digit" of each coefficient

If only a few of your coefficients are very large then the arrays will have mostly 0s in them so it may be worthwhile using sparse arrays.

Call A_{p,k} the k-th digit of the p-th coefficient.

Note the analogy with large integer representations: where a large integer would be represented as

x = Σ_k x_k 2^(64k)

your polynomial A is represented in the same way as

A = Σ_k A_k 2^(64k)   with   A_k = Σ_p A_{p,k} X^p

To implement addition, you simply pretend your list of arrays is a list of simple digits and implement addition as usual for large integers (watch out to replace if then conditionals by numpy.where).

To implement multiplication, you will find you need to make about log_64(n)^2 polynomial multiplications.

Implementing the modulo operation on the coefficients is again a simple matter of translating the modulo operation on a large integer.

To take the modulo by a polynomial with small coefficients, use the linearity of this operation:

A mod (X^r - 1) = (Σ_k A_k 2^(64k)) mod (X^r - 1)

                = Σ_k 2^(64k) (A_k mod (X^r - 1))
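
To make the transposed representation concrete, here is a rough, unoptimized sketch: it stores base-2^32 digits in int64 arrays so that sums of a few digits cannot overflow, and every name in it is mine:

import numpy as np

BASE_BITS = 32   # base-2**32 digits kept in int64 arrays leave headroom for carries

def to_digit_arrays(coefs, ndigits):
    # Transposed representation: arrays[k][p] is the k-th digit of coefficient p.
    mask = (1 << BASE_BITS) - 1
    return [np.array([(c >> (BASE_BITS * k)) & mask for c in coefs], dtype=np.int64)
            for k in range(ndigits)]

def from_digit_arrays(arrays):
    # Recombine the digit arrays into ordinary Python integers.
    coefs = [0] * len(arrays[0])
    for k, arr in enumerate(arrays):
        for p, d in enumerate(arr.tolist()):
            coefs[p] += int(d) << (BASE_BITS * k)
    return coefs

def mod_xr_minus_1(arrays, r):
    # A mod (X^r - 1) is linear, so each digit array can be folded independently:
    # coefficient p is added onto coefficient p % r.
    out = []
    for arr in arrays:
        folded = np.zeros(r, dtype=np.int64)
        for p in range(len(arr)):
            folded[p % r] += arr[p]
        out.append(folded)
    return out

coefs = [2**70 + 5, 3, 2**40, 7, 1]   # a degree-4 polynomial with mixed-size coefficients
arrays = to_digit_arrays(coefs, ndigits=3)
print(from_digit_arrays(mod_xr_minus_1(arrays, 3)))   # i.e. [2**70 + 12, 4, 2**40]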

Emmalynn answered 23/10, 2012 at 23:33 Comment(6)
Could you explain this idea a bit more? Also, you wrote 2^64.k; is the dot a multiplication? Anyway, as I said I'd like to do this without using third-party libraries; nonetheless, yours is an interesting solution.Tanana
Yep, dot was multiplication, I removed it though. I added some details, not sure which part needs more clarification though ..Emmalynn
Okay, I can see the big picture clearly now, but, alas, until next Tuesday I'm really busy and I wont have time to implement and study your solution.Tanana
Yeah you have your work cut out for you if you go down this road, though if your goal was learning you will learn a lot! However since I found this question really interesting I wrote a partial implementation, maybe it will get you started. I'll post some of it here ..Emmalynn
I've been trying to implement this now (sorry for being so late!), but there is something I do not understand. From what I understand, whenever I do R_{p,k} = A_{p,k} + B_{p,k} and the result is bigger than 2^64, I should add the carry to R_{p,k+1}, but numpy will not complain in these cases, and thus I'd be stuck checking every coefficient in the result, figuring out by hand whether there was an overflow, and manually adding the carry. Is there a smarter way to do this with numpy?Tanana
Yes, the general idea is that you will need to handle overflow. But that can be made much simpler by using a half-word digit size and storing each digit in a full word. I've implemented an example here: github.com/gsidier/bigpoly - look at the classes bigpoly.HalfInt and HalfPoly. There are still bugs with polynomial multiplication, also due to overflow, so hang in there .. The Poly64 class shows the other (more complicated) way of doing it, by explicitly coding the carry logic.Emmalynn