Why is numpy 'slow' by itself?

Given the thread here, it seems that numpy is not ideal for ultra-fast calculation. Does anyone know what overhead we must be aware of when using numpy for numerical calculations?

Femi answered 28/4, 2010 at 16:3 Comment(0)

Well, it depends on what you want to do. XOR, for instance, is hardly relevant for someone interested in doing numerical linear algebra (for which numpy is pretty fast, by virtue of using optimized BLAS/LAPACK libraries underneath).

Generally, the big idea behind getting good performance from numpy is to amortize the cost of the interpreter over many elements at a time. In other words, move the loops from Python code (slow) into C/Fortran loops somewhere in the numpy/BLAS/LAPACK/etc. internals (fast). If you succeed in that operation (called vectorization), performance will usually be quite good.
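
As a minimal sketch of what that vectorization looks like in practice (the array size and variable names here are just illustrative), compare a pure-Python loop with the equivalent single numpy call:

    import numpy as np

    x = np.random.rand(1_000_000)

    # Pure-Python loop: the interpreter executes bytecode for every element (slow).
    total = 0.0
    for value in x:
        total += value * value

    # Vectorized: one Python-level call; the loop runs inside compiled code (fast).
    total_vectorized = float(np.dot(x, x))

Both compute the sum of squares; the second version pays the interpreter cost once for the whole array rather than once per element.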

Of course, you can get even better performance by dumping the Python interpreter entirely and using, say, C++ instead. Whether that approach actually succeeds depends on how good you are at high-performance programming in C++ versus numpy, and on exactly what operation you're trying to do.

Glyptodont answered 28/4, 2010 at 16:3 Comment(1)
I agree that once the data is passed to the Fortran side, it is fast. I am more interested in the Python/compiled-code interface overhead. Take the line a = sin(x): the data goes through a round trip from Python to C. I want to know how many layers of overhead it passes through, and whether porting this to Cython would do a much better job. – Femi

Any time you have an expression like x = a * b + c / d + e, you end up with one temporary array for a * b, one for c / d, one for their sum, and finally one allocation for the result. This is a limitation of Python types and operator overloading. You can, however, do things in-place explicitly using the augmented assignment operators (*=, +=, etc.) and be assured that copies aren't made.
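
A small sketch of the difference (the array size and names are just illustrative), using the usual numpy ufuncs:

    import numpy as np

    n = 1_000_000
    a, b, c, d, e = (np.random.rand(n) for _ in range(5))

    # Naive expression: temporaries for a * b, c / d and their sum,
    # plus the allocation for the final result.
    x = a * b + c / d + e

    # Mostly in-place version: reuses buffers instead of allocating new ones.
    y = np.multiply(a, b)    # allocation for the result buffer
    tmp = np.divide(c, d)    # one scratch buffer (this division still allocates)
    y += tmp                 # in-place add, no new array
    y += e                   # in-place add, no new array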

As for the specific reason NumPy performs more slowly in that benchmark, it's hard to tell, but it probably has to do with the constant overhead of checking sizes, type marshaling, and so on, which Cython and the like don't have to worry about. On larger problems the gap would probably narrow.
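
One way to see that constant per-call overhead (a rough, machine-dependent sketch; the sizes are arbitrary) is to time the same operation on a tiny and a large array:

    import numpy as np
    from timeit import timeit

    small = np.arange(10)
    large = np.arange(1_000_000)

    # For a tiny array the fixed per-call cost (argument parsing, dtype/shape
    # checks, ufunc dispatch) dominates; for a large array it is amortized away.
    t_small = timeit(lambda: small + small, number=100_000) / 100_000
    t_large = timeit(lambda: large + large, number=100) / 100

    print("per call, 10 elements:  %.2e s" % t_small)
    print("per call, 1e6 elements: %.2e s" % t_large)

The per-call time for the tiny array is nowhere near 100,000 times smaller than for the large one, which is the fixed overhead showing through.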

Dallasdalli answered 28/4, 2010 at 16:3 Comment(0)

Regarding your sub-question: for a = sin(x), how many round trips are there?

The trick is to pass a numpy array as x; then there is only one 'round trip' for the whole array, since numpy returns an array of sine values. No Python for loop is involved in that operation.
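
To make that concrete (a minimal sketch; the array size is arbitrary):

    import math
    import numpy as np

    x = np.linspace(0.0, 2.0 * math.pi, 1_000_000)

    # One call: crosses the Python/C boundary once, all sines computed in C.
    a = np.sin(x)

    # Element-by-element alternative: crosses the boundary a million times.
    a_slow = np.array([math.sin(v) for v in x])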

Bichromate answered 28/4, 2010 at 16:3 Comment(0)

I can't really tell, but I'd guess there are two factors:

  1. Perhaps numpy is copying more stuff? weave is often faster when you avoid allocating big temporary arrays, but this shouldn't matter here.

  2. numpy has a bit of overhead in iterating over (possibly) multidimensional arrays. That overhead would normally be dwarfed by the number crunching, but an XOR is extremely fast, so the overhead is essentially all that matters (see the timing sketch below).
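
A rough, machine-dependent way to see point 2 (the sizes and operations here are just illustrative): compare an expensive per-element operation with a cheap one on the same number of elements.

    import numpy as np
    from timeit import timeit

    x = np.random.rand(1_000_000)
    i = np.random.randint(0, 2**31, size=1_000_000, dtype=np.int64)

    # Expensive per-element op: the math dominates, overhead is a rounding error.
    t_sin = timeit(lambda: np.sin(x), number=100) / 100

    # Cheap per-element op: the arithmetic is nearly free, so memory traffic and
    # numpy's iteration/dispatch machinery account for most of the time.
    t_xor = timeit(lambda: i ^ i, number=100) / 100

    print("sin: %.2e s per call, xor: %.2e s per call" % (t_sin, t_xor))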

Valiant answered 28/4, 2010 at 16:3 Comment(0)
