I have an array of doubles, roughly 200,000 rows by 100 columns, and I'm looking for a fast algorithm to find the rows that contain sequences most similar to a given pattern (the pattern can be anywhere from 10 to 100 elements long). I'm using Python, so the brute-force method (code below: looping over each row and each starting column index, and computing the Euclidean distance at each position) takes around three minutes.
The numpy.correlate function promises to solve this problem much faster (it runs over the same dataset in under 20 seconds). However, it only computes a sliding dot product of the pattern over the full row, so to compare similarity I'd have to normalize the results first. Normalizing the cross-correlation requires computing the standard deviation of each slice of the data, which negates the speed improvement of using numpy.correlate in the first place (a sketch of this version is included after the brute-force code below).
Is it possible to compute normalized cross-correlation quickly in Python? Or will I have to resort to coding the brute-force method in C?
import numpy as np

def norm_corr(x, y, mode='valid'):
    # Brute force: Euclidean distance between the pattern and every
    # length-len(y) slice of the row (smaller distance = more similar).
    ya = np.array(y)
    slices = [x[pos:pos + len(y)] for pos in range(len(x) - len(y) + 1)]
    return [np.linalg.norm(np.array(z) - ya) for z in slices]

similarities = [norm_corr(arr, pointarray) for arr in arraytable]
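For comparison, here is roughly what the numpy.correlate-based version looks like (a sketch only; ncc_row, row and pattern are placeholder names, and both row and pattern are assumed to be 1-D NumPy arrays). The sliding dot products come from a single vectorized call, but the per-slice means and standard deviations needed for the normalization still end up in a Python-level loop, which is where the speed advantage disappears:

import numpy as np

def ncc_row(row, pattern):
    # Sliding normalized cross-correlation of `pattern` over one row.
    m = len(pattern)
    y_mean, y_std = pattern.mean(), pattern.std()
    # fast part: all sliding dot products in one vectorized call
    dots = np.correlate(row, pattern, mode='valid')
    # slow part: per-slice mean and std, still computed one slice at a time
    x_means = np.array([row[i:i + m].mean() for i in range(len(dots))])
    x_stds = np.array([row[i:i + m].std() for i in range(len(dots))])
    # NCC = (dot - m*mean_x*mean_y) / (m*std_x*std_y); values lie in [-1, 1]
    return (dots - m * x_means * y_mean) / (m * x_stds * y_std)

scores = [ncc_row(np.asarray(arr), np.asarray(pointarray)) for arr in arraytable]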