In Python, which one is faster?
numpy.max(), numpy.min()
or
max(), min()
My list/array length varies from 2 to 600. Which one should I use to save some run time?
Well, from my timings it follows that if you already have a numpy array a, you should use a.max() (the source tells it's the same as np.max() when a.max() is available). But if you have a built-in list, then most of the time goes into converting it into an np.ndarray, and that's why max() is better in your timings.

In essence: if you have an np.ndarray, use a.max(); if you have a list and no need for all the machinery of np.ndarray, use the standard max().
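A minimal sketch of that rule (fast_max is a hypothetical helper for illustration, not part of NumPy):

import numpy as np

def fast_max(values):
    # Hypothetical helper following the rule above: call the ndarray method
    # when the data is already an ndarray, otherwise use the built-in max().
    if isinstance(values, np.ndarray):
        return values.max()
    return max(values)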
I was also interested in this and tested the three variants with perfplot (a little project of mine). Result: You're not going wrong with a.max().
Code to reproduce the plot:
import numpy as np
import perfplot

b = perfplot.bench(
    setup=np.random.rand,                      # generate a random array of length n
    kernels=[max, np.max, lambda a: a.max()],  # the three variants under test
    labels=["max(a)", "np.max(a)", "a.max()"],
    n_range=[2 ** k for k in range(25)],
    xlabel="len(a)",
)
b.show()
It's probably best if you use something like the Python timeit module to test it for yourself. That way you can test your own data in your own environment, rather than relying on third parties with various test data and environments which aren't necessarily representative of yours.
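For instance, a minimal timeit sketch (the 600-element list here is just a stand-in for your own data; absolute numbers will vary across machines and NumPy versions):

import timeit
import numpy as np

lst = list(range(600))   # stand-in for your own data
arr = np.array(lst)      # the same data, pre-converted to an ndarray

# Time each variant; note that np.max(lst) has to convert the list on every call.
print("max(lst)   :", timeit.timeit(lambda: max(lst), number=100_000))
print("np.max(lst):", timeit.timeit(lambda: np.max(lst), number=100_000))
print("arr.max()  :", timeit.timeit(lambda: arr.max(), number=100_000))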
numpy.min and numpy.max have slightly different semantics (and call signatures) from the builtins, so the choice shouldn't come down to speed alone. Use the numpy versions if you need to handle multidimensional data sanely. If you're just using Python lists or other things that don't know about dimensionality, use the builtins.
If they are in a list, I'd use vanilla max. If they are in a numpy array, I'd use numpy.max. Converting a list to a numpy array is a pretty expensive operation. – Communicant