What are the advantages of NumPy over regular Python lists?

I have approximately 100 financial markets series, and I am going to create a cube array of 100x100x100 = 1 million cells. I will be regressing (3-variable) each x with each y and z, to fill the array with standard errors.

I have heard that for "large matrices" I should use NumPy as opposed to Python lists, for performance and scalability reasons. Thing is, I know Python lists and they seem to work for me.

What will the benefits be if I move to NumPy?

What if I had 1000 series (that is, 1 billion floating point cells in the cube)?

Crossbred answered 14/6, 2009 at 23:02

NumPy's arrays are more compact than Python lists -- a list of lists as you describe, in Python, would take at least 20 MB or so, while a NumPy 3D array with single-precision floats in the cells would fit in 4 MB. Access in reading and writing items is also faster with NumPy.

Maybe you don't care that much for just a million cells, but you definitely would for a billion cells -- neither approach would fit in a 32-bit architecture, but with 64-bit builds NumPy would get away with 4 GB or so, Python alone would need at least about 12 GB (lots of pointers which double in size) -- a much costlier piece of hardware!

The difference is mostly due to "indirectness" -- a Python list is an array of pointers to Python objects: at least 4 bytes per pointer, plus 16 bytes for even the smallest Python object (4 for the type pointer, 4 for the reference count, 4 for the value -- and the memory allocator rounds up to 16). A NumPy array is an array of uniform values -- single-precision numbers take 4 bytes each, double-precision ones 8 bytes. Less flexible, but you pay substantially for the flexibility of standard Python lists!
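A quick way to check these sizes yourself -- a rough sketch, since per-object sizes vary by Python version and platform, and `sys.getsizeof` must be applied to every object in the nested structure by hand:

```python
import sys
import numpy as np

# 100 x 100 x 100 cube of single-precision floats
cube = np.zeros((100, 100, 100), dtype=np.float32)
print(cube.nbytes)  # 4_000_000 bytes, i.e. ~4 MB

# Equivalent nested Python lists: sys.getsizeof measures only one object,
# so walk the whole structure and add up every list and every float.
nested = [[[0.0] * 100 for _ in range(100)] for _ in range(100)]
total = sys.getsizeof(nested)
for plane in nested:
    total += sys.getsizeof(plane)
    for row in plane:
        total += sys.getsizeof(row) + sum(sys.getsizeof(v) for v in row)
print(total)  # tens of MB on a typical 64-bit CPython build
```

This is only an estimate (it ignores allocator rounding and counts repeated references at full size), but it makes the order-of-magnitude gap visible.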

Elburr answered 14/6, 2009 at 23:16
I've been trying to use sys.getsizeof() to compare the size of Python lists and NumPy arrays with the same number of elements, and it doesn't seem to indicate that the NumPy arrays were much smaller. Is this the case, or is sys.getsizeof() having trouble figuring out how big a NumPy array is? – Harmful

@JackSimpson: getsizeof isn't reliable. The documentation clearly states: "Only the memory consumption directly attributed to the object is accounted for, not the memory consumption of objects it refers to." This means that if you have nested Python lists, the size of the elements isn't taken into account. – Whimsical

getsizeof on a list only tells you how much RAM the list object itself consumes plus the RAM consumed by the pointers in its data array; it doesn't tell you how much RAM is consumed by the objects those pointers refer to. – Honewort

@AlexMartelli, could you please let me know where you are getting these numbers? – Preindicate

Just a heads up: your estimate of the size of the equivalent Python list of lists of lists is off. The 4 GB NumPy array of C floats (4 bytes each) would translate to something closer to 32 GB worth of lists and Python floats (which are actually C doubles), not 12 GB; each float on 64-bit Python occupies ~24 bytes (assuming no alignment losses in the allocator), plus another 8 bytes in the list to hold the reference (and that ignores the overallocation and object headers for the lists themselves, which might add another GB depending on exactly how much overallocation occurs). – Depreciatory

You could get the Python list of lists of lists down as low as 8 GB if all of the stored floats were references to the same float object, but since Python has no float caching, anything other than the same value over and over (not useful) would require you to manually implement interning for your floats to achieve that memory reduction, and it seems rather unlikely you'd have so few unique floats that interning would help (the intern cache itself would end up consuming a ton of memory eventually). – Depreciatory

I will also note that if memory usage is the only concern, Python's built-in array module can store floats compactly already (though slicing won't make cheap views without explicit use of memoryview, whereas NumPy slicing defaults to views). NumPy's advantages lie in using the data efficiently and conveniently, not just storing it efficiently. – Depreciatory

Is indexing in NumPy faster than indexing a nested list? For example, is array[0, 104, 52] faster than list[0][104][52]? – Melanism

NumPy is not just more efficient; it is also more convenient. You get a lot of vector and matrix operations for free, which sometimes allow one to avoid unnecessary work. And they are also efficiently implemented.

For example, you could read your cube directly from a file into an array:

import numpy
x = numpy.fromfile("data", dtype=float).reshape((100, 100, 100))

Sum along the second dimension:

s = x.sum(axis=1)

Find which cells are above a threshold:

(x > 0.5).nonzero()

Keep every even-indexed slice along the third dimension (dropping the odd-indexed ones):

x[:, :, ::2]

Also, many useful libraries work with NumPy arrays -- for example, statistical analysis and visualization libraries such as SciPy and Matplotlib.

Even if you don't have performance problems, learning NumPy is worth the effort.

Shockheaded answered 14/6, 2009 at 23:38

Alex mentioned memory efficiency, and Roberto mentioned convenience; these are both good points. To add a few more, I'll mention speed and functionality.

Functionality: NumPy gives you a lot built in -- FFTs, convolutions, fast searching, basic statistics, linear algebra, histograms, and so on. And really, who can live without FFTs?

Speed: Here's a test that sums a Python list and a NumPy array, showing that the sum over the NumPy array is roughly an order of magnitude faster (in this test -- mileage may vary).

from numpy import arange
from timeit import Timer

Nelements = 10000
Ntimeits = 10000

x = arange(Nelements)
y = list(range(Nelements))  # a plain Python list for comparison

t_numpy = Timer("x.sum()", "from __main__ import x")
t_list = Timer("sum(y)", "from __main__ import y")
print("numpy: %.3e" % (t_numpy.timeit(Ntimeits)/Ntimeits,))
print("list:  %.3e" % (t_list.timeit(Ntimeits)/Ntimeits,))

which on my system (while I'm running a backup) gives:

numpy: 3.004e-05
list:  5.363e-04
Nifty answered 15/6, 2009 at 4:59

Here's a nice answer from the FAQ on the scipy.org website:

What advantages do NumPy arrays offer over (nested) Python lists?

Python’s lists are efficient general-purpose containers. They support (fairly) efficient insertion, deletion, appending, and concatenation, and Python’s list comprehensions make them easy to construct and manipulate. However, they have certain limitations: they don’t support “vectorized” operations like elementwise addition and multiplication, and the fact that they can contain objects of differing types means that Python must store type information for every element and must execute type-dispatching code when operating on each element. This also means that very few list operations can be carried out by efficient C loops – each iteration would require type checks and other Python API bookkeeping.
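The "vectorized operations" point can be seen in a minimal example -- with lists, elementwise addition needs an explicit Python-level loop, while NumPy does it in one expression:

```python
import numpy as np

a = [1.0, 2.0, 3.0]
b = [4.0, 5.0, 6.0]

# Lists: elementwise addition requires an explicit Python-level loop
list_sum = [x + y for x, y in zip(a, b)]

# NumPy: one vectorized expression, executed by a compiled loop
arr_sum = np.array(a) + np.array(b)

print(list_sum)  # [5.0, 7.0, 9.0]
print(arr_sum)   # [5. 7. 9.]
```

Note that `a + b` on the lists would concatenate them rather than add elementwise -- another consequence of lists being general-purpose containers.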

Withy answered 11/9, 2014 at 2:35

Others have highlighted almost all the major differences between NumPy arrays and Python lists; I will just summarize them here:

  1. NumPy arrays have a fixed size at creation, unlike Python lists (which can grow dynamically). Changing the size of an ndarray creates a new array and deletes the original.

  2. The elements of a NumPy array are all required to be of the same data type (heterogeneous elements are possible via an object dtype, but that rules out most mathematical operations) and are therefore the same size in memory.

  3. NumPy arrays support advanced mathematical and other operations on large amounts of data. Typically, such operations are executed more efficiently and with less code than is possible with Python's built-in sequences.
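A short sketch of the first two points, using only core NumPy (exact integer dtype names vary by platform):

```python
import numpy as np

x = np.array([1, 2, 3])   # dtype inferred as an integer type
print(x.dtype)

# One dtype for all elements: a single float promotes the whole array
y = np.array([1, 2, 3.5])
print(y.dtype)            # float64

# "Resizing" produces a new array; the original is left unchanged
z = np.append(x, 4)       # copies x into a new, larger array
print(x.shape, z.shape)   # (3,) (4,)

# A heterogeneous array falls back to dtype=object: it stores
# Python objects again, losing the compact fixed-type layout
obj = np.array([1, "two", 3.0], dtype=object)
print(obj.dtype)          # object
```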

Hagberry answered 5/2, 2019 at 12:46

The standard mutable multi-element container in Python is the list. Because of Python's dynamic typing, we can even create heterogeneous lists. To allow these flexible types, each item in the list must contain its own type information, reference count, and other bookkeeping; that is, each item is a complete Python object. In the special case where all elements are of the same type, much of this information is redundant, so it can be much more efficient to store the data in a fixed-type array, NumPy-style. Fixed-type arrays lack the flexibility of lists, but are much more efficient for storing and manipulating data.

Cervantez answered 25/2, 2022 at 2:40
  • NumPy is not another programming language but a Python extension module. It provides fast and efficient operations on arrays of homogeneous data, and NumPy arrays have a fixed size at creation.
  • In Python, lists are written with square brackets and may be homogeneous or heterogeneous.
  • The main advantages of NumPy arrays over Python lists:
    1. They consume less memory.
    2. They are fast compared to Python lists.
    3. They are convenient to use.
Lanugo answered 22/10, 2021 at 17:55
