Differences between MOG, MOG2, and GMG in OpenCV CV2?

What's the difference between these 3 methods of background subtraction in OpenCV: MOG, MOG2, and GMG?

Euromarket answered 21/10, 2015 at 18:3

You can refer to the OpenCV-Python tutorial on background subtraction; the summary below is adapted from it.

Several algorithms have been introduced for this purpose. OpenCV implements three of them, and they are very easy to use. We will look at them one by one.

BackgroundSubtractorMOG

It is a Gaussian Mixture-based Background/Foreground Segmentation Algorithm. It was introduced in the paper "An improved adaptive background mixture model for real-time tracking with shadow detection" by P. KaewTraKulPong and R. Bowden in 2001. It models each background pixel by a mixture of K Gaussian distributions (K = 3 to 5). The weights of the mixture represent the proportions of time that those colours stay in the scene. The probable background colours are the ones which stay longer and are more static.
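
In symbols (following the standard mixture-of-Gaussians formulation this paper builds on; the notation is mine, not from the answer), the recent history of each pixel X_t is modelled as

P(X_t) = \sum_{i=1}^{K} \omega_{i,t}\, \mathcal{N}\!\left(X_t;\ \mu_{i,t},\ \Sigma_{i,t}\right)

where the weights \omega_{i,t} sum to one and reflect how much of the time each colour mode has been observed; components with large weight and low variance are taken as the probable background.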

While coding, we need to create a background subtractor object using the function cv2.createBackgroundSubtractorMOG(). It has some optional parameters, such as the length of the history, the number of Gaussian mixtures, and the threshold, all of which are set to default values. Then, inside the video loop, use the backgroundsubtractor.apply() method to get the foreground mask.
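
As a rough illustration (not from the original answer), the same constructor can also be called with its optional parameters spelled out. The call below assumes an OpenCV 3.x contrib build, where the function lives in the bgsegm sub-module (see the note at the end of this answer); the values shown are the library defaults.

import cv2

# Assumed: opencv-contrib-python 3.x, where MOG lives in cv2.bgsegm.
fgbg = cv2.bgsegm.createBackgroundSubtractorMOG(
    history=200,          # number of frames that influence the background model
    nmixtures=5,          # number of Gaussian mixtures per pixel (K)
    backgroundRatio=0.7,  # fraction of the mixture weight treated as background
    noiseSigma=0          # noise strength (0 means use the built-in default)
)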

See a simple example below:

import numpy as np
import cv2

cap = cv2.VideoCapture('vtest.avi')

# In OpenCV 3.x this constructor lives in the contrib bgsegm sub-module
# (cv2.bgsegm.createBackgroundSubtractorMOG(), see the note at the end).
fgbg = cv2.createBackgroundSubtractorMOG()

while True:
    ret, frame = cap.read()
    if not ret:                      # stop when the video ends
        break

    fgmask = fgbg.apply(frame)       # foreground mask for the current frame

    cv2.imshow('frame', fgmask)
    k = cv2.waitKey(30) & 0xff
    if k == 27:                      # Esc key exits
        break

cap.release()
cv2.destroyAllWindows()

(All the results are shown at the end for comparison.)

BackgroundSubtractorMOG2

It is also a Gaussian Mixture-based Background/Foreground Segmentation Algorithm. It is based on two papers by Z. Zivkovic, "Improved adaptive Gaussian mixture model for background subtraction" (2004) and "Efficient adaptive density estimation per image pixel for the task of background subtraction" (2006). One important feature of this algorithm is that it selects the appropriate number of Gaussian distributions for each pixel. (Remember, in the last case, we took K Gaussian distributions throughout the algorithm.) This provides better adaptability to varying scenes due to illumination changes, etc.

As in the previous case, we have to create a background subtractor object. Here, you have the option of selecting whether shadows should be detected or not. If detectShadows = True (which it is by default), the algorithm detects and marks shadows, at the cost of some speed. Shadows are marked in gray.
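
As a hedged sketch (not part of the original answer), the constructor parameters and the shadow handling look like this in OpenCV 3.x/4.x; the values shown are the defaults, and 127 is the default label for shadow pixels. The full loop example from the answer follows after this sketch.

import cv2

cap = cv2.VideoCapture('vtest.avi')
fgbg = cv2.createBackgroundSubtractorMOG2(
    history=500,         # frames that influence the background model
    varThreshold=16,     # squared Mahalanobis distance threshold for foreground
    detectShadows=True   # mark shadows instead of treating them as foreground
)

ret, frame = cap.read()
if ret:
    fgmask = fgbg.apply(frame)
    # Shadow pixels are labelled 127 (foreground is 255), so a simple threshold
    # keeps only the "hard" foreground if shadows are not wanted:
    _, hard_fg = cv2.threshold(fgmask, 200, 255, cv2.THRESH_BINARY)
cap.release()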

import numpy as np
import cv2

cap = cv2.VideoCapture('vtest.avi')

fgbg = cv2.createBackgroundSubtractorMOG2()

while True:
    ret, frame = cap.read()
    if not ret:                      # stop when the video ends
        break

    fgmask = fgbg.apply(frame)       # foreground mask (shadows in gray)

    cv2.imshow('frame', fgmask)
    k = cv2.waitKey(30) & 0xff
    if k == 27:                      # Esc key exits
        break

cap.release()
cv2.destroyAllWindows()

(Results given at the end)

BackgroundSubtractorGMG

This algorithm combines statistical background image estimation and per-pixel Bayesian segmentation. It was introduced by Andrew B. Godbehere, Akihiro Matsukawa, and Ken Goldberg in their paper "Visual Tracking of Human Visitors under Variable-Lighting Conditions for a Responsive Audio Art Installation" in 2012. As per the paper, the system ran a successful interactive audio art installation called "Are We There Yet?" from March 31 to July 31, 2011 at the Contemporary Jewish Museum in San Francisco, California.

It uses the first few (120 by default) frames for background modelling. It employs a probabilistic foreground segmentation algorithm that identifies possible foreground objects using Bayesian inference. The estimates are adaptive; newer observations are weighted more heavily than older ones to accommodate variable illumination. Several morphological filtering operations, such as closing and opening, are applied to remove unwanted noise. You will get a black window during the first few frames.

It is a good idea to apply morphological opening to the result to remove noise.
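
For reference (not in the original answer), the GMG constructor exposes two parameters; the names and defaults below are as documented for the OpenCV 3.x contrib bgsegm module. The full example from the answer, including the morphological opening, follows after this sketch.

import cv2

# Assumed: opencv-contrib-python 3.x, where GMG lives in cv2.bgsegm.
fgbg = cv2.bgsegm.createBackgroundSubtractorGMG(
    initializationFrames=120,  # frames used to build the initial background model
    decisionThreshold=0.8      # posterior probability above which a pixel is foreground
)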

import numpy as np
import cv2

cap = cv2.VideoCapture('vtest.avi')

kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
# In OpenCV 3.x this constructor lives in the contrib bgsegm sub-module
# (cv2.bgsegm.createBackgroundSubtractorGMG(), see the note at the end).
fgbg = cv2.createBackgroundSubtractorGMG()

while True:
    ret, frame = cap.read()
    if not ret:                      # stop when the video ends
        break

    fgmask = fgbg.apply(frame)
    # GMG output is noisy, so clean it up with morphological opening
    fgmask = cv2.morphologyEx(fgmask, cv2.MORPH_OPEN, kernel)

    cv2.imshow('frame', fgmask)
    k = cv2.waitKey(30) & 0xff
    if k == 27:                      # Esc key exits
        break

cap.release()
cv2.destroyAllWindows()

In newer versions of OpenCV, GMG and MOG are available through the contrib package (opencv-contrib-python==3.4.2.16) in the bgsegm sub-module:

cv2.bgsegm.createBackgroundSubtractorGMG()
cv2.bgsegm.createBackgroundSubtractorMOG()
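
As a final, hedged sketch (assuming an opencv-contrib-python build so that the bgsegm sub-module is available, and a local video file named vtest.avi), the following runs all three subtractors on the same video so their masks can be compared side by side:

import cv2

cap = cv2.VideoCapture('vtest.avi')

subtractors = {
    'MOG':  cv2.bgsegm.createBackgroundSubtractorMOG(),
    'MOG2': cv2.createBackgroundSubtractorMOG2(),
    'GMG':  cv2.bgsegm.createBackgroundSubtractorGMG(),
}
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))

while True:
    ret, frame = cap.read()
    if not ret:
        break

    for name, fgbg in subtractors.items():
        fgmask = fgbg.apply(frame)
        if name == 'GMG':
            # GMG output is noisy, so clean it up with morphological opening
            fgmask = cv2.morphologyEx(fgmask, cv2.MORPH_OPEN, kernel)
        cv2.imshow(name, fgmask)      # one window per subtractor

    if cv2.waitKey(30) & 0xff == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
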
Etching answered 12/3, 2016 at 4:3
Hi @Etching, is there any way to show an image representing the background subtractor I created? I mean an image that is white for background and black for non-background, perhaps by applying it to an all-zeros or all-ones frame? – Ass
