Numpy: Replace every value in the array with the mean of its adjacent elements

I have an ndarray, and I want to replace every value in the array with the mean of its adjacent elements. The code below does the job, but it is super slow when I have 700 arrays, all with shape (7000, 7000), so I wonder if there is a better way to do it. Thanks!

import numpy as np

a = np.array([[1,2,3,4,5,6,7,8,9],
              [4,5,6,7,8,9,10,11,12],
              [3,4,5,6,7,8,9,10,11]])
row, col = a.shape
new_arr = np.empty(a.shape)
for x in range(row):
    for y in range(col):
        # Clip the 3x3 window at the top/left borders; slicing
        # already clips the upper bounds automatically.
        min_x = max(0, x-1)
        min_y = max(0, y-1)
        new_arr[x, y] = a[min_x:x+2, min_y:y+2].mean()
print(new_arr)
Genova answered 6/7, 2016 at 19:44 Comment(4)
It's not running slowly for me at all, nor does it look like it should run slowly.Cyperaceous
@EliSadoff it will if I have 700 arrays all with shape (7000,7000) ...Genova
I'm not sure how to do it in python but have you considered multi-threading or parallel processing? I know in C you can do this to speed up large data processing.Claiborn
You are not setting max_x and max_y correctly...Cage

Well, that's a smoothing operation in image processing, which can be achieved with 2D convolution. Your code handles the near-boundary elements a bit differently (the window shrinks at the edges). So, if the boundary elements are let off for precision, you can use scipy's convolve2d like so -

from scipy.signal import convolve2d as conv2

out = conv2(a,np.ones((3,3)),'same')/9.0
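
If you do need the exact boundary behavior of the original loop (a mean over only the in-bounds neighbors), a minimal sketch is to divide the windowed sum by the windowed count instead of a constant 9 -

counts = conv2(np.ones(a.shape),np.ones((3,3)),'same')  # in-bounds neighbors per cell
out_exact = conv2(a,np.ones((3,3)),'same')/counts

This matches the shrinking-window mean computed by the question's nested loops.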

This specific operation is built into the OpenCV module as cv2.blur and is very efficient at it. The name basically describes its operation of blurring the input arrays representing images. I believe the efficiency comes from the fact that internally it's implemented entirely in C for performance, with a thin Python wrapper to handle NumPy arrays.

So, the output could be alternatively calculated with it, like so -

import cv2 # Import OpenCV module

out = cv2.blur(a.astype(float),(3,3))

Here's a quick showdown of timings on a decently big image/array -

In [93]: a = np.random.randint(0,255,(5000,5000)) # Input array

In [94]: %timeit conv2(a,np.ones((3,3)),'same')/9.0
1 loops, best of 3: 2.74 s per loop

In [95]: %timeit cv2.blur(a.astype(float),(3,3))
1 loops, best of 3: 627 ms per loop
Philosopher answered 6/7, 2016 at 20:0 Comment(10)
Nicely done. I was about to write something like this. +1.Ullund
@Ullund Good to see you with NumPy tag! ;)Philosopher
@Philosopher It's on my feed :) I haven't been answering questions lately though... lots of stuff going on on my end.Ullund
uniform_filter is a short-cut for this... but since the signal module operates in the Fourier domain, I guess this would be faster?Traject
@ImanolLuengo Ah uniform_filter seems much faster! Post that as a solution!Philosopher
@Philosopher I guess.. if uniform_filter is faster it must be because it does two 1D convolutions (by separating the kernel). I will check something before :PTraject
@ImanolLuengo Yeah, I am not sure about the internals of it, could be that.Philosopher
@Philosopher just checked, internally it calls N times (one for dimension) to uniform_filter1d, which is probably why it gets the speedup over the signal module. For all other non-separable filters, your conv2d should always be faster.Traject
Thanks @ImanolLuengo and Divakar for the answers! One more question: if I want to change mean to max, do you know what commands can do this? There doesn't seem to be a max option in convolve2d...Genova
@Genova Well there is max filter too in scipy : docs.scipy.org/doc/scipy-0.16.0/reference/generated/…Philosopher
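
For reference, a minimal sketch of the max variant using the scipy filter pointed to above (assuming a 3x3 neighborhood; the mode argument controls the boundary handling):

from scipy import ndimage
# Replace each value with the max of its 3x3 neighborhood;
# mode='nearest' replicates edge values at the borders.
out_max = ndimage.maximum_filter(a.astype(float), size=3, mode='nearest')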

Following the discussion with @Divakar, find below a comparison of the different convolution methods available in scipy:

import numpy as np
from scipy import signal, ndimage

def conv2(A, size):
    return signal.convolve2d(A, np.ones((size, size)), mode='same') / float(size**2)

def fftconv(A, size):
    return signal.fftconvolve(A, np.ones((size, size)), mode='same') / float(size**2)

def uniform(A, size):
    return ndimage.uniform_filter(A, size, mode='constant')

All three methods return exactly the same values (verified below). However, note that uniform_filter has a parameter mode='constant', which indicates the boundary conditions of the filter, and constant == 0 is the same zero-padding boundary condition that the other two methods enforce. For different use cases you can change the boundary conditions.

Now some test matrices:

A = np.random.randn(1000, 1000)
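
As a quick sanity check (a minimal sketch using the functions defined above), the three can be compared numerically:

# All three agree up to floating-point tolerance.
assert np.allclose(conv2(A, 3), uniform(A, 3))
assert np.allclose(fftconv(A, 3), uniform(A, 3))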

And some timings:

%timeit conv2(A, 3)     # 33.8 ms per loop
%timeit fftconv(A, 3)   # 84.1 ms per loop
%timeit uniform(A, 3)   # 17.1 ms per loop

%timeit conv2(A, 5)     # 68.7 ms per loop
%timeit fftconv(A, 5)   # 92.8 ms per loop
%timeit uniform(A, 5)   # 17.1 ms per loop

%timeit conv2(A, 10)     # 210 ms per loop
%timeit fftconv(A, 10)   # 86 ms per loop
%timeit uniform(A, 10)   # 16.4 ms per loop

%timeit conv2(A, 30)     # 1.75 s per loop
%timeit fftconv(A, 30)   # 102 ms per loop
%timeit uniform(A, 30)   # 16.5 ms per loop

So in short, uniform_filter seems fastest, and that is because the convolution is separable into two 1D convolutions (similar to gaussian_filter, which is also separable).
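
To illustrate the separability, a minimal sketch: two 1D passes of uniform_filter1d reproduce the 2D filter (uniform_filter does the equivalent internally):

# Filter along each axis in turn; the result matches the 2D filter.
sep = ndimage.uniform_filter1d(A, 3, axis=0, mode='constant')
sep = ndimage.uniform_filter1d(sep, 3, axis=1, mode='constant')
assert np.allclose(sep, uniform(A, 3))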

Other, non-separable kernels are more likely to be faster using the signal module (the one in @Divakar's solution).

The speed of both fftconvolve and uniform_filter remains roughly constant across kernel sizes, while convolve2d gets considerably slower as the kernel grows.

Traject answered 6/7, 2016 at 20:30 Comment(4)
Good findings! Gotta digest all that stuff now.Philosopher
@Philosopher just found something quite interesting, for different small kernel sizes, the last 2 methods maintain constant execution time.Traject
I think I understood the uniform_filter's implementation with two passes to process this. That fftconvolve's internal implementation looks messy though. Thanks again for coming up with those useful findings!Philosopher
@Philosopher wikipedia actually explains pretty well what uniform_filter is doing (essentially same example)!Traject

I had a similar problem recently and had to find a different solution since I can't use scipy.

import numpy as np

a = np.random.randint(100, size=(7000, 7000))  # Array of 7000 x 7000
row, col = a.shape

column_totals = a.sum(axis=0)  # Sum of each column as a 1D array

new_array = np.zeros([row, col])  # Create a receiving array

for i in range(row):
    # Resulting row = the column totals minus the original row,
    # divided by the number of rows minus one, i.e. the mean of
    # all the other rows.
    new_array[i] = (column_totals - a[i]) / (row - 1)

print(new_array)
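
Since scipy is off the table here, for completeness a numpy-only sketch of the original question's neighborhood mean (the helper name neighbor_mean is illustrative; it replicates the shrinking-window behavior of the question's loops):

def neighbor_mean(a):
    # Numpy-only 3x3 neighborhood mean; a sketch, not a drop-in API.
    a = a.astype(float)
    vals = np.pad(a, 1, mode='constant')                 # zero-pad the values
    cnts = np.pad(np.ones_like(a), 1, mode='constant')   # 1 where a real cell exists
    h, w = a.shape
    wsum = lambda p: sum(p[i:i+h, j:j+w] for i in range(3) for j in range(3))
    return wsum(vals) / wsum(cnts)                       # windowed sum / in-bounds count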
Fullmouthed answered 17/8, 2017 at 20:58 Comment(0)
