Vehicle segmentation and tracking

I've been working for some time on a project to detect and track (moving) vehicles in video captured from UAVs. Currently I am using an SVM trained on bag-of-features representations of local features extracted from vehicle and background images. I am then using a sliding-window detection approach to try to localise vehicles in the images, which I would then like to track. The problem is that this approach is far too slow, and my detector isn't as reliable as I would like, so I'm getting quite a few false positives.

So I have been considering attempting to segment the cars from the background to find their approximate positions and reduce the search space before applying my classifier, but I am not sure how to go about this and was hoping someone could help.

Additionally, I have been reading about motion segmentation with layers, using optical flow to segment the frame by flow model. Does anyone have experience with this method? If so, could you offer some input as to whether you think it would be applicable to my problem?

Below are two frames from a sample video:

frame 0: [image]

frame 5: [image]

Knuth answered 13/3, 2013 at 17:25

Assuming your cars are moving, you could try to estimate the ground plane (road).

You may get a decent ground-plane estimate by extracting features (SURF rather than SIFT, for speed), matching them over frame pairs, and solving for a homography using RANSAC, since a plane in 3D moves according to a homography between two camera frames.
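A minimal sketch of that pipeline with OpenCV, assuming two grayscale frames on disk (the file names are placeholders). ORB stands in for SURF here, since SURF lives in the non-free opencv-contrib build; the matching and RANSAC steps are the same either way.

```python
# Sketch: estimate the ground-plane homography between two frames.
# ORB stands in for SURF (SURF needs the non-free opencv-contrib build);
# the file names are placeholders.
import cv2
import numpy as np

frame_a = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)  # frame I
frame_b = cv2.imread("frame5.png", cv2.IMREAD_GRAYSCALE)  # frame K

orb = cv2.ORB_create(nfeatures=2000)
kp_a, des_a = orb.detectAndCompute(frame_a, None)
kp_b, des_b = orb.detectAndCompute(frame_b, None)

# Brute-force Hamming matching with Lowe's ratio test to drop ambiguous matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
good = [m for m, n in matcher.knnMatch(des_a, des_b, k=2)
        if m.distance < 0.75 * n.distance]

pts_a = np.float32([kp_a[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
pts_b = np.float32([kp_b[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

# RANSAC keeps the dominant plane (the road); moving cars end up as outliers.
H, inlier_mask = cv2.findHomography(pts_a, pts_b, cv2.RANSAC, 3.0)
```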

Once you have your ground plane you can identify the cars by looking at clusters of pixels that don't move according to the estimated homography.

A more sophisticated approach would be to do Structure from Motion on the terrain. This only presupposes that it is rigid, not that it is planar.


Update

"I was wondering if you could expand on how you would go about looking for clusters of pixels that don't move according to the estimated homography?"

Sure. Say I and K are two video frames and H is the homography mapping features in I to features in K. First you warp I onto K according to H, i.e. you compute the warped image Iw as Iw([x y 1]') = I(inv(H) [x y 1]') in homogeneous coordinates (roughly Matlab notation). Then you look at the squared or absolute difference image Diff = (Iw - K).*(Iw - K) (elementwise). Image content that moves according to the homography H should give small differences (assuming constant illumination and exposure between the images). Image content that violates H, such as moving cars, should stand out.
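A short sketch of this warp-and-difference step in OpenCV, reusing H and the two frames from the snippet above; without extra flags, cv2.warpPerspective performs exactly the Iw(x) = I(inv(H) x) resampling described here.

```python
# Sketch: warp I onto K with H, then difference them.
# Reuses frame_a (I), frame_b (K), and H from the snippet above.
h, w = frame_b.shape[:2]

# warpPerspective computes Iw(x) = I(inv(H) x), aligning the ground plane.
warped = cv2.warpPerspective(frame_a, H, (w, h))

# Absolute difference; content that violates H (moving cars) stands out.
diff = cv2.absdiff(warped, frame_b)

# Zero out the border where the warp had no source pixels.
valid = cv2.warpPerspective(np.full_like(frame_a, 255), H, (w, h))
diff[valid < 255] = 0
```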

For clustering high-error pixel groups in Diff I would start with simple thresholding ("every pixel difference in Diff larger than X is relevant", maybe using an adaptive threshold). The thresholded image can be cleaned up with morphological operations (dilation, erosion) and clustered with connected components. This may be too simplistic, but it's easy to implement for a first try, and it should be fast. For something fancier, look at Clustering on Wikipedia. A 2D Gaussian mixture model may be interesting; when you initialize it with the detection result from the previous frame, it should be pretty fast.
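A possible sketch of this clean-up, continuing from diff above; the threshold, kernel size, and minimum blob area are guesses to tune (Otsu or cv2.adaptiveThreshold could replace the fixed threshold).

```python
# Sketch: threshold the difference image and cluster the blobs.
# The threshold, kernel size, and minimum area below are guesses to tune.
_, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)

# Morphological clean-up: opening removes speckle, dilation merges fragments.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
mask = cv2.dilate(mask, kernel, iterations=2)

# Connected components give one labelled blob (with bounding box) per car.
n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
boxes = [tuple(stats[i, :4]) for i in range(1, n)        # label 0 = background
         if stats[i, cv2.CC_STAT_AREA] > 50]             # drop tiny blobs
```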

I did a little experiment with the two frames you provided, and I have to say I am somewhat surprised myself at how well it works. :-) Left image: the difference (color coded) between the two frames you posted. Right image: the difference between the frames after matching them with a homography. The remaining differences clearly are the moving cars, and they are strong enough for simple thresholding.

Frame differences before and after image alignment

Thinking of the approach you currently use, it may be interesting to combine it with my proposal:

  • You could try to learn and classify the cars in the difference image Diff instead of the original image. This would amount to learning what a car's motion pattern looks like rather than what a car looks like, which could be more reliable.
  • You could get rid of the expensive window search and run the classifier only on regions of Diff with sufficiently high values; see the sketch below.
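As a sketch of the second point, the sliding window could be replaced by classifying only the candidate boxes found in the difference image. Here classify_window is a hypothetical wrapper around the asker's bag-of-features + SVM pipeline, and boxes comes from the connected-components sketch above.

```python
# Sketch: classify only the candidate regions instead of sliding a window.
# classify_window is a hypothetical wrapper around the asker's BoF + SVM;
# boxes comes from the connected-components sketch above.
detections = []
for (x, y, bw, bh) in boxes:
    patch = frame_b[y:y + bh, x:x + bw]
    if classify_window(patch):       # hypothetical: True if the patch is a car
        detections.append((x, y, bw, bh))
```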

Some additional remarks:

  • In theory, the cars should even stand out if they are not moving, since they stick out of the ground plane, but given your distance to the scene and the camera resolution this effect may be too subtle.
  • You can replace the feature extraction / matching part of my proposal with optical flow, if you like. This amounts to identifying flow vectors that "stick out" from a consistent frame-to-frame motion of the ground. It may be prone to outliers in the optical flow, however. You can also try to get the homography from the flow vectors.
  • This is important: regardless of which method you use, once you have found cars in one frame you should use this information to strengthen your search for these cars in consecutive frames, giving a higher likelihood to detections close to the old ones (Kalman filter, etc.); a minimal sketch follows this list. That's what tracking is all about!
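For the tracking step, one possible minimal realisation: a constant-velocity Kalman filter per car (via OpenCV's cv2.KalmanFilter), used to gate new detections around the predicted position. The noise covariances and the gate radius are placeholder values to be tuned.

```python
# Sketch: one constant-velocity Kalman filter per car, used to gate new
# detections around the prediction. State is (x, y, vx, vy); measurements
# are blob centroids. All tuning values below are placeholders.
def make_track(cx, cy):
    kf = cv2.KalmanFilter(4, 2)
    kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32)
    kf.statePost = np.array([[cx], [cy], [0], [0]], np.float32)
    return kf

def update_track(kf, centroids, gate=30.0):
    """Predict, then accept the nearest detection inside the gate radius."""
    pred = kf.predict()[:2].ravel()
    if centroids:
        d = [np.hypot(cx - pred[0], cy - pred[1]) for cx, cy in centroids]
        i = int(np.argmin(d))
        if d[i] < gate:                      # ignore far-away blobs (outliers)
            kf.correct(np.float32([[centroids[i][0]], [centroids[i][1]]]))
    return kf.statePost[:2].ravel()
```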
Legionary answered 13/3, 2013 at 18:02
Hi DCS, thanks for your answer. I was wondering if you could expand on how you would go about looking for clusters of pixels that don't move according to the estimated homography? – Knuth
@JonoBrogan: I did an experiment with your frames, and it works nicely; see above. – Legionary
Thank you so much for your detailed reply, and your results look great, exactly what I was trying to achieve. I'll give this a try now :) – Knuth
Maybe you could match the two images without a homography (projective transformation), but with a simpler transformation, even a pure translation. After all, the differences between two consecutive frames should not be too large, given only very slight camera movements. – Unmistakable
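The last comment's idea could look roughly like this, reusing the matched points pts_a / pts_b from the first snippet; cv2.estimateAffinePartial2D fits a rotation + scale + translation robustly, and a median displacement gives the pure-translation variant.

```python
# Sketch of the comment's idea: fit a simpler motion model than a full
# homography, reusing the matched points pts_a / pts_b from above.
# estimateAffinePartial2D fits rotation + scale + translation with RANSAC.
M, inliers = cv2.estimateAffinePartial2D(pts_a, pts_b, method=cv2.RANSAC)
aligned = cv2.warpAffine(frame_a, M, (frame_b.shape[1], frame_b.shape[0]))

# Pure-translation variant: a robust median of the point displacements.
t = np.median((pts_b - pts_a).reshape(-1, 2), axis=0)
```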
  1. If the number of cars in your field of view always stays the same but they move around, you can use optical flow; it will give you good results against a still background. If the number of cars changes, you need to call the goodFeaturesToTrack function in OpenCV after a certain number of frames and track the cars again using optical flow.
  2. You can use background modelling to model the background, so the cars are always your foreground. The simplest example is frame differencing: subtract the previous frame from the current frame, diff(x,y,k) = I(x,y,k) - I(x,y,k-1). As your cars are moving in each frame, you will get their positions.
  3. Both approaches should work fine, since you have a still background, I presume; check this link to see what optical flow can do. A sketch combining points 1 and 2 follows below.
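A rough sketch combining points 1 and 2 above with OpenCV: frame differencing exposes the moving cars, and goodFeaturesToTrack plus pyramidal Lucas-Kanade optical flow follows them. The video file name and all parameter values are placeholders.

```python
# Rough sketch combining points 1 and 2: frame differencing to expose the
# moving cars, plus goodFeaturesToTrack + pyramidal Lucas-Kanade optical
# flow to follow them. "uav.mp4" and all parameters are placeholders.
import cv2
import numpy as np

cap = cv2.VideoCapture("uav.mp4")
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
points = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                 qualityLevel=0.01, minDistance=10)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Point 2: frame differencing, diff(x,y,k) = |I(x,y,k) - I(x,y,k-1)|.
    moving = cv2.absdiff(gray, prev_gray)

    # Point 1: track the corners with pyramidal Lucas-Kanade optical flow.
    new_pts, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray,
                                                     points, None)
    points = new_pts[status.ravel() == 1].reshape(-1, 1, 2)

    # Re-detect corners when too few survive (e.g. cars entering/leaving).
    if len(points) < 50:
        points = cv2.goodFeaturesToTrack(gray, 200, 0.01, 10)

    prev_gray = gray
```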
Juarez answered 14/3, 2013 at 1:49
