I have a (large) length-N array of k distinct functions, and a length-N array of abscissas. I want to evaluate the functions at the abscissas to return a length-N array of ordinates, and critically, I need to do it very fast.
I have tried the following, which loops over calls to np.where, but it is too slow:
Create some fake data to illustrate the problem:
import numpy as np

def trivial_functional(i): return lambda x: i * x

k = 250
func_table = [trivial_functional(j) for j in range(k)]
func_table = np.array(func_table) # possibly unnecessary
We have a table of 250 distinct functions. Now I create a large array with many repeated entries of those functions, and a set of points of the same length at which these functions should be evaluated.
Npts = int(1e6)
abcissa_array = np.random.random(Npts)
function_indices = np.random.randint(0, len(func_table), Npts)  # indices in [0, k-1]
func_array = func_table[function_indices]
Finally, loop over every function used by the data and evaluate it on the set of relevant points:
desired_output = np.zeros(Npts)
for func_index in set(function_indices):
    idx = np.where(function_indices == func_index)[0]
    desired_output[idx] = func_table[func_index](abcissa_array[idx])
This loop takes ~0.35 seconds on my laptop and is the biggest bottleneck in my code by an order of magnitude.
Does anyone see how to avoid the blind lookup call to np.where? Is there a clever use of numba that can speed this loop up?
You can drop np.where and use boolean indexing, i.e. idx = function_indices == func_index, and everything else stays the same. – Fancied
It's the where that is killing you. You need some sort of sort or groupby that can organize the indices once, and then give quick access in the loop. – Merwyn
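A minimal sketch of the sort-and-group idea from Merwyn's comment, assuming the func_table, function_indices, and abcissa_array defined above; evaluate_grouped is a hypothetical helper name, not part of the original code. The point is to pay for a single np.argsort up front instead of one full-array scan per distinct function:

import numpy as np

def evaluate_grouped(func_table, function_indices, abcissa_array):
    # Sort once so equal function indices become contiguous runs.
    order = np.argsort(function_indices)
    sorted_indices = function_indices[order]
    sorted_abcissa = abcissa_array[order]

    # Find where each run of identical indices starts and stops.
    boundaries = np.flatnonzero(np.diff(sorted_indices)) + 1
    starts = np.concatenate(([0], boundaries))
    stops = np.concatenate((boundaries, [len(sorted_indices)]))

    output = np.empty_like(abcissa_array)
    for start, stop in zip(starts, stops):
        func = func_table[sorted_indices[start]]
        # Evaluate on a contiguous slice, then scatter the results
        # back to the original ordering.
        output[order[start:stop]] = func(sorted_abcissa[start:stop])
    return output

The loop still runs once per distinct function, but each iteration only touches that function's own points rather than scanning all N indices, which is where the per-function np.where (or an equivalent boolean mask) spends its time.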