I found a way to optimize the conversions, though I still hope someone can help me improve them further, ideally with some other clever idea.
Basically, what's wrong with those functions is that they have a kind of quadratic memory-allocation behaviour, both when packing the integer and when unpacking it.
(See this post by Guido van Rossum for another example of this kind of behaviour.)
After realizing this I decided to try a divide-and-conquer approach, and obtained some results: I simply split the array in two parts, convert them separately, and finally join the results (later I'll try an iterative version similar to the f5 in van Rossum's post [edit: it doesn't seem to be much faster]).
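For reference, here is a minimal sketch (my reconstruction, not the exact original code) of the naive packing loop and why it is quadratic: every `res += k << adder` rebuilds the whole accumulated integer, so packing m coefficients touches O(m^2) bits in total.

```python
# Naive packer: each addition re-allocates the entire growing integer,
# so the total work grows quadratically with the number of coefficients.
def naive_coefs_to_long(coefs, window):
    res = 0
    adder = 0
    for k in coefs:
        res += k << adder  # O(current size of res) big-integer work
        adder += window
    return res

print(naive_coefs_to_long([1, 2, 3], 4))  # 1 + (2 << 4) + (3 << 8) = 801
```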
The modified functions:
def _coefs_to_long(coefs, window):
    """Given a sequence of coefficients *coefs* and the *window* size, return a
    long-integer representation of these coefficients.
    """
    length = len(coefs)
    if length < 100:
        res = 0
        adder = 0
        for k in coefs:
            res += k << adder
            adder += window
        return res
    else:
        half_index = length // 2
        big_window = window * half_index
        low = _coefs_to_long(coefs[:half_index], window)
        high = _coefs_to_long(coefs[half_index:], window)
        return low + (high << big_window)
def _long_to_coefs(long_repr, window, n):
    """Given a long integer representing coefficients of size *window*, return
    the list of coefficients modulo *n*.
    """
    win_length = long_repr.bit_length() // window
    if win_length < 256:
        mask = 2**window - 1
        coefs = [0] * (long_repr.bit_length() // window + 1)
        for i in xrange(len(coefs)):
            coefs[i] = (long_repr & mask) % n
            long_repr >>= window
        # ensure that the returned list is never empty and has no extra 0.
        if not coefs:
            coefs.append(0)
        elif not coefs[-1] and len(coefs) > 1:
            coefs.pop()
        return coefs
    else:
        half_len = win_length // 2
        low = long_repr & (((2**window) ** half_len) - 1)
        high = long_repr >> (window * half_len)
        low_coefs = _long_to_coefs(low, window, n)
        # Pad so the low half contributes exactly half_len coefficients:
        # when its topmost coefficients are zero it comes back short, and
        # the two halves would otherwise be joined misaligned.
        low_coefs += [0] * (half_len - len(low_coefs))
        return low_coefs + _long_to_coefs(high, window, n)
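A quick round-trip sanity check of the scheme (a Python 3 re-implementation of the two functions, using `range` instead of `xrange`; note that the low half is padded to exactly `half_len` entries before joining, which is needed whenever its topmost coefficients are zero):

```python
def coefs_to_long(coefs, window):
    # Divide and conquer: pack the halves separately, then join with one shift.
    length = len(coefs)
    if length < 100:
        res = 0
        adder = 0
        for k in coefs:
            res += k << adder
            adder += window
        return res
    half = length // 2
    low = coefs_to_long(coefs[:half], window)
    high = coefs_to_long(coefs[half:], window)
    return low + (high << (window * half))

def long_to_coefs(long_repr, window, n):
    # Inverse: split the integer in two and unpack each half recursively.
    win_length = long_repr.bit_length() // window
    if win_length < 256:
        mask = 2 ** window - 1
        coefs = [0] * (long_repr.bit_length() // window + 1)
        for i in range(len(coefs)):
            coefs[i] = (long_repr & mask) % n
            long_repr >>= window
        if not coefs[-1] and len(coefs) > 1:  # drop the single extra 0, if any
            coefs.pop()
        return coefs
    half_len = win_length // 2
    low = long_repr & ((1 << (window * half_len)) - 1)
    high = long_repr >> (window * half_len)
    low_coefs = long_to_coefs(low, window, n)
    # Pad: the low half must contribute exactly half_len coefficients.
    low_coefs += [0] * (half_len - len(low_coefs))
    return low_coefs + long_to_coefs(high, window, n)

coefs = [5, 0, 0, 1] * 200          # interior zeros exercise the padding
packed = coefs_to_long(coefs, 4)
assert long_to_coefs(packed, 4, 17) == coefs                  # n > max(coefs)
assert long_to_coefs(packed, 4, 3) == [c % 3 for c in coefs]  # reduction mod n
```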
And the results:
>>> import timeit
>>> def coefs_to_long2(coefs, window):
... if len(coefs) < 100:
... return coefs_to_long(coefs, window)
... else:
... half_index = len(coefs) // 2
... big_window = window * half_index
... least = coefs_to_long2(coefs[:half_index], window)
... up = coefs_to_long2(coefs[half_index:], window)
... return least + (up << big_window)
...
>>> coefs = [1, 2, 3, 1024, 256] * 567
>>> # original function
>>> timeit.timeit('coefs_to_long(coefs, 11)', 'from __main__ import coefs_to_long, coefs',
... number=1000)/1000
0.003283214092254639
>>> timeit.timeit('coefs_to_long2(coefs, 11)', 'from __main__ import coefs_to_long2, coefs',
... number=1000)/1000
0.0007998988628387451
>>> 0.003283214092254639 / _
4.104536516782767
>>> coefs = [2**64, 2**31, 10, 107] * 567
>>> timeit.timeit('coefs_to_long(coefs, 66)', 'from __main__ import coefs_to_long, coefs',
... number=1000)/1000
0.009775240898132325
>>>
>>> timeit.timeit('coefs_to_long2(coefs, 66)', 'from __main__ import coefs_to_long2, coefs',
... number=1000)/1000
0.0012255229949951173
>>>
>>> 0.009775240898132325 / _
7.97638309362875
As you can see, this version speeds up the conversion by a factor of 4 to 8 (and the bigger the input, the bigger the speed-up).
A similar result is obtained with the second function:
>>> import timeit
>>> def long_to_coefs2(long_repr, window, n):
... win_length = long_repr.bit_length() // window
... if win_length < 256:
... return long_to_coefs(long_repr, window, n)
... else:
... half_len = win_length // 2
... least = long_repr & (((2**window) ** half_len) - 1)
... up = long_repr >> (window * half_len)
... return long_to_coefs2(least, window, n) + long_to_coefs2(up, window, n)
...
>>> long_repr = coefs_to_long([1,2,3,1024,512, 0, 3] * 456, 13)
>>> # original function
>>> timeit.timeit('long_to_coefs(long_repr, 13, 1025)', 'from __main__ import long_to_coefs, long_repr', number=1000)/1000
0.005114212036132813
>>> timeit.timeit('long_to_coefs2(long_repr, 13, 1025)', 'from __main__ import long_to_coefs2, long_repr', number=1000)/1000
0.001701267957687378
>>> 0.005114212036132813 / _
3.006117885794327
>>> long_repr = coefs_to_long([1,2**33,3**17,1024,512, 0, 3] * 456, 40)
>>> timeit.timeit('long_to_coefs(long_repr, 13, 1025)', 'from __main__ import long_to_coefs, long_repr', number=1000)/1000
0.04037192392349243
>>> timeit.timeit('long_to_coefs2(long_repr, 13, 1025)', 'from __main__ import long_to_coefs2, long_repr', number=1000)/1000
0.005722791910171509
>>> 0.04037192392349243 / _
7.0545853417694
I tried to avoid further memory reallocation in the first function by passing around start and end indexes instead of slicing, but it turns out this slows the function down quite a lot for small inputs and is still a bit slower for real-case inputs.
Maybe I could try to mix the two approaches, though I don't think I'd obtain much better results.
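For completeness, a sketch of the slice-free variant described above (my reconstruction, assuming the start/end indexes are threaded through the recursion; it computes the same result as the slicing version):

```python
def coefs_to_long_noslice(coefs, window, start=0, end=None):
    # Same divide-and-conquer packing, but on index ranges instead of
    # materializing sub-lists with coefs[a:b].
    if end is None:
        end = len(coefs)
    length = end - start
    if length < 100:
        res = 0
        adder = 0
        for i in range(start, end):
            res += coefs[i] << adder
            adder += window
        return res
    mid = start + length // 2
    low = coefs_to_long_noslice(coefs, window, start, mid)
    high = coefs_to_long_noslice(coefs, window, mid, end)
    return low + (high << (window * (mid - start)))
```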
I've edited my question over time, so some people gave advice aimed at something different from what I'm asking now. I think it's important to summarize the results pointed out in the comments and answers from the different sources, so that they can be useful for other people looking to implement fast polynomials and/or the AKS test.
- As J.F. Sebastian pointed out, the AKS algorithm has received many improvements, so implementing an old version of the algorithm will always result in a very slow program. This doesn't change the fact that, if you already have a good implementation of AKS, you can speed it up by improving the polynomials.
- If you are interested in coefficients modulo a small n (read: a word-size number) and you don't mind external dependencies, then go for numpy and use numpy.convolve or scipy.signal.fftconvolve for the multiplication. It will be much faster than anything you can write yourself. Unfortunately, if n is not word-size you can't use fftconvolve at all, and numpy.convolve becomes slow as hell too.
- If you don't have to do modulo operations (on the coefficients or on the polynomial), then ZBDDs are probably a good idea (as pointed out by harold), though I can't promise spectacular results [I do think it's really interesting, and you ought to read Minato's paper].
- If you don't have to do modulo operations on the coefficients, then an RNS representation, as suggested by Origin, is probably a good idea; you can then combine multiple numpy arrays to operate efficiently.
- If you want a pure-Python implementation of polynomials with coefficients modulo a big n, then my solution seems to be the fastest (though I didn't try to implement FFT multiplication between coefficient arrays in Python, which may be faster). I did try numpy and numpy.convolve for the multiplication (as written in my question), but it's actually slower than this implementation; keep in mind that I need to work with big integers for the coefficients too, and with word-size integers numpy is a lot faster.
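To make the numpy route concrete, here is a minimal sketch of polynomial multiplication modulo a small n with `numpy.convolve` (exact only while every intermediate sum of products fits in the dtype, here int64):

```python
import numpy as np

n = 1025  # small, word-size modulus
a = np.array([1, 2, 3, 4], dtype=np.int64)  # coefficients, lowest degree first
b = np.array([5, 6, 7], dtype=np.int64)

# The polynomial product is the discrete convolution of the coefficient arrays.
product = np.convolve(a, b) % n
print(product.tolist())  # [5, 16, 34, 52, 45, 28]
```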