Speeding up vectorized eye-tracking algorithm in numpy
I'm trying to implement Fabian Timm's eye-tracking algorithm [http://www.inb.uni-luebeck.de/publikationen/pdfs/TiBa11b.pdf] (found here: [http://thume.ca/projects/2012/11/04/simple-accurate-eye-center-tracking-in-opencv/]) in numpy and OpenCV and I've hit a snag. I think I've vectorized my implementation decently enough, but it's still not fast enough to run in real time and it doesn't detect pupils with as much accuracy as I had hoped. This is my first time using numpy, so I'm not sure what I've done wrong.

import cv2
import numpy as np

def find_pupil(eye):
    eye_len = np.arange(eye.shape[0])
    xx,yy = np.meshgrid(eye_len,eye_len) #coordinates
    XX,YY = np.meshgrid(xx.ravel(),yy.ravel()) #all distance vectors
    Dx,Dy = [YY-XX, YY-XX] #y2-y1, x2-x1 -- simpler this way because YY = XX.T
    Dlen = np.sqrt(Dx**2+Dy**2)
    Dx,Dy = [Dx/Dlen, Dy/Dlen] #normalized

    Gx,Gy = np.gradient(eye)
    Gmagn = np.sqrt(Gx**2+Gy**2)

    Gx,Gy = [Gx/Gmagn,Gy/Gmagn] #normalized
    GX,GY = np.meshgrid(Gx.ravel(),Gy.ravel())

    X = (GX*Dx+GY*Dy)**2
    eye = cv2.bitwise_not(cv2.GaussianBlur(eye,(5,5),0.005*eye.shape[1])) #inverting and blurring eye for use as w
    eyem = np.repeat(eye.ravel()[np.newaxis,:],eye.size,0)
    C = (np.nansum(eyem*X, axis=0)/eye.size).reshape(eye.shape)

    return np.unravel_index(C.argmax(), C.shape)

and the rest of the code:

from math import floor

def find_eyes(face):
    left_x, left_y = [int(floor(0.5 * face.shape[0])), int(floor(0.2 * face.shape[1]))]
    right_x, right_y = [int(floor(0.1 * face.shape[0])), int(floor(0.2 * face.shape[1]))]
    area = int(floor(0.2 * face.shape[0]))
    left_eye = (left_x, left_y, area, area)
    right_eye = (right_x, right_y, area, area)

    return [left_eye,right_eye]



faceCascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
video_capture = cv2.VideoCapture(0)

while True:
    # Capture frame-by-frame
    ret, frame = video_capture.read()

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    faces = faceCascade.detectMultiScale(
        gray,
        scaleFactor=1.1,
        minNeighbors=5,
        minSize=(30, 30),
        flags=cv2.CASCADE_SCALE_IMAGE
    )

    # Draw a rectangle around the faces
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 2)
        roi_gray = gray[y:y+h, x:x+w]
        roi_color = frame[y:y+h, x:x+w]
        eyes = find_eyes(roi_gray)
        for (ex,ey,ew,eh) in eyes:
            eye_gray = roi_gray[ey:ey+eh,ex:ex+ew]
            eye_color = roi_color[ey:ey+eh,ex:ex+ew]
            cv2.rectangle(roi_color,(ex,ey),(ex+ew,ey+eh),(255,0,0),2)
            px,py = find_pupil(eye_gray)
            cv2.rectangle(eye_color,(px,py),(px+1,py+1),(255,0,0),2)

    # Display the resulting frame
    cv2.imshow('Video', frame)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
video_capture.release()
cv2.destroyAllWindows()
Solange answered 14/3, 2016 at 19:23 Comment(5)
The first step is always to profile your code (e.g. using line_profiler). Find out which lines it spends most of its time on, then focus on optimizing those. It would be much easier to offer help if you turned your code into an MCVE – we can't really assess accuracy or performance if we don't have the input data necessary to run your code. – Spleen
We also don't have any context for that function. If it's being called repeatedly (e.g. in a loop) then you might be unnecessarily recomputing many local variables that don't change from call to call. However, we can't easily tell what changes and what doesn't without access to some example input data. – Spleen
Thanks, Ali. I'll profile my code. It's difficult for me to make my code self-contained, however, without copying the rest of it. All I can say is that face is a square image of a face, captured on webcam, and x,y,w,h are the top-left corner and dimensions of a square surrounding one of the eyes. Can I link a GitHub repo? – Solange
It's not ideal, but a link to a GitHub repo is better than nothing. Random input data, as in Divakar's answer, can give some indication of performance, but of course it tells us nothing about accuracy. – Spleen
I added the rest of my code instead. It wasn't much anyway. – Solange
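
The profiling step suggested in the comments can be sketched with the standard-library cProfile (line_profiler gives per-line detail but is a third-party install). This is a minimal sketch: hot_loop here is a stand-in function, not find_pupil itself; the real function would be profiled the same way.

```python
import cProfile
import pstats
import io

def hot_loop(n):
    # stand-in for find_pupil(eye); profile the real call the same way
    return sum(i * i for i in range(n))

profiler = cProfile.Profile()
profiler.enable()
hot_loop(50000)
profiler.disable()

# Print the five most expensive calls by cumulative time.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```

The report shows which calls dominate the runtime, which is where vectorization effort pays off.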
You can replace many of those operations that replicate elements before doing the math by performing the mathematical operations directly after creating singleton dimensions, which lets NumPy broadcasting take over. That gives two benefits: on-the-fly operations that save workspace memory, and a performance boost. Also, at the end, we can replace the nansum calculation with a simplified version. With all of that philosophy in mind, here's one modified approach -

def find_pupil_v2(face, x, y, w, h):    
    eye = face[x:x+w,y:y+h]
    eye_len = np.arange(eye.shape[0])

    N = eye_len.size**2
    eye_len_diff = eye_len[:,None] - eye_len
    Dlen = np.sqrt(2*((eye_len_diff)**2))
    Dxy0 = eye_len_diff/Dlen 

    Gx0,Gy0 = np.gradient(eye)
    Gmagn = np.sqrt(Gx0**2+Gy0**2)
    Gx,Gy = [Gx0/Gmagn,Gy0/Gmagn] #normalized

    B0 = Gy[:,:,None]*Dxy0[:,None,:]
    C0 = Gx[:,None,:]*Dxy0
    X = ((C0.transpose(1,0,2)[:,None,:,:]+B0[:,:,None,:]).reshape(N,N))**2

    eye1 = cv2.bitwise_not(cv2.GaussianBlur(eye,(5,5),0.005*eye.shape[1]))
    C = (np.nansum(X,0)*eye1.ravel()/eye1.size).reshape(eye1.shape)

    return np.unravel_index(C.argmax(), C.shape)
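
The nansum simplification mentioned above can be checked on toy data: replicating the weights row-wise and then doing a weighted sum is the same as summing first and weighting once. A small sketch with random arrays (stand-ins, not the eye data):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((6, 6))
w = rng.random(6)  # stand-in for eye.ravel()

# original formulation: replicate w into a full matrix, then weighted sum
W = np.repeat(w[None, :], X.shape[0], axis=0)
orig = np.nansum(W * X, axis=0)

# simplified formulation used in find_pupil_v2: sum first, weight once
simplified = np.nansum(X, axis=0) * w

assert np.allclose(orig, simplified)
```

Dropping the np.repeat avoids materializing an N-by-N weight matrix entirely.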

There's one replication still left in it, at Dxy0. It might be possible to avoid that step and feed Dxy0 directly into the step that produces X, but I haven't worked through it. Everything else has been converted to broadcasting!
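
As a minimal illustration of the meshgrid-to-broadcasting rewrite used above (toy arrays, not the eye data):

```python
import numpy as np

a = np.arange(4)  # stand-in for eye_len

# meshgrid-based pairwise differences, as in the original find_pupil
xx, yy = np.meshgrid(a, a)
diff_meshgrid = yy - xx

# broadcasting equivalent: column vector minus row vector,
# as in eye_len[:, None] - eye_len above
diff_broadcast = a[:, None] - a

assert np.array_equal(diff_meshgrid, diff_broadcast)
```

Both produce the same pairwise-difference matrix, but the broadcasting form never builds the intermediate coordinate grids.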

Runtime test and output verification -

In [539]: # Inputs with random elements
     ...: face = np.random.randint(0,10,(256,256)).astype('uint8')
     ...: x = 40
     ...: y = 60
     ...: w = 64
     ...: h = 64
     ...: 

In [540]: find_pupil(face,x,y,w,h)
Out[540]: (32, 63)

In [541]: find_pupil_v2(face,x,y,w,h)
Out[541]: (32, 63)

In [542]: %timeit find_pupil(face,x,y,w,h)
1 loops, best of 3: 4.15 s per loop

In [543]: %timeit find_pupil_v2(face,x,y,w,h)
1 loops, best of 3: 529 ms per loop

It seems we are getting close to 8x speedup!

Myology answered 14/3, 2016 at 21:31 Comment(5)
Thank you so much, Divakar! This sped up my code significantly. Could you shed some light on what's going on under the hood here? – Solange
@Solange Glad to help! Still curious, though: how much of a speedup are you getting with your actual use case? Also, if this solved your case, please consider accepting the solution by clicking on the hollow checkmark next to it. Read more about it here - meta.stackexchange.com/questions/5234/… – Myology
I'm noticing about double to triple the speed I was getting before. It's much closer to real time. The funny thing is that timeit doesn't notice the speedup. I'm still having issues with accuracy, however. If this is as fast as my implementation can get, I'll be happy, but I'd still prefer faster. – Solange
@Solange Could you elaborate on the accuracy issue? Like how much deviation, etc.? – Myology
For one of my eyes, it lands on the pupil about 70% of the time, and for the other, it's always low and to the left. I edited my original post with all of my code if you'd like to try for yourself. – Solange