OpenCV detect movement in Python

My goal is to detect movement in a specific region of an IP camera stream. I managed to write working code, but it's based only on my personal understanding of how this works.

import cv2
import numpy as np
import os
import time
import datetime
import urllib
import pynotify
import gio  # provides gio.Error, caught below when showing the notification

stream=urllib.urlopen('http://user:[email protected]/video.mjpg')
bytes=''
fgbg = cv2.createBackgroundSubtractorMOG2()

while True:
    bytes+=stream.read(16384)
    a = bytes.find('\xff\xd8')  # JPEG start-of-image marker
    b = bytes.find('\xff\xd9')  # JPEG end-of-image marker
    if a!=-1 and b!=-1:
        jpg = bytes[a:b+2]
        bytes= bytes[b+2:]
        img = cv2.imdecode(np.fromstring(jpg, dtype=np.uint8),cv2.IMREAD_COLOR)
        rows,cols,c = img.shape
        mask = np.zeros(img.shape, dtype=np.uint8)
        # polygon describing the watched region on the full frame
        roi_corners = np.array([[(940,220),(1080,240), (1080,310), (940,290)]], dtype=np.int32)
        channel_count = img.shape[2]
        ignore_mask_color = (255,)*channel_count
        cv2.fillPoly(mask, roi_corners, ignore_mask_color)
        masked_image = cv2.bitwise_and(img, mask)  # everything outside the ROI becomes black

        fgmask = fgbg.apply(masked_image)
        iii = fgmask[220:310,940:1080]  # crop the foreground mask to the ROI bounding box

        hist,bins = np.histogram(iii.ravel(),256,[0,256])

        black, white, cnt1, cnt2 = 0,0,0,0


        # average bin count of the dark half of the histogram (background)
        for i in range(0,128):
            black += hist[i]
            cnt1+=1
        bl = black / float(cnt1)

        # average bin count of the bright half (foreground / movement)
        for i in range(128,256):
            white += hist[i]
            cnt2+=1
        wh = white / float(cnt2)

        # drops towards zero when the ROI contains many bright (foreground) pixels
        finalResult = ((bl+1) / (wh+1))/10

        # notify only when movement was detected in this decoded frame
        if finalResult < 1.0:
            pynotify.init("cv2alert")
            notice = pynotify.Notification('Alert', 'Alert text')
            try:
                notice.show()
            except gio.Error:
                print "Error"

This code works, but since I don't understand histograms very well, I couldn't get the values directly and had to use some "hacks": the left side of the histogram is black, the right side is white, and black divided by white gives the result I want. I know this isn't quite right, but it gives me values of 4-9 when no one is in the ROI and 0.5-2.0 when someone enters it.
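
For reference, the two averaging loops above boil down to the following (a sketch over the same hist array, splitting the bins at 128 just like the loops do):

bl = hist[:128].sum() / 128.0    # average count of the dark bins (the "black" side)
wh = hist[128:].sum() / 128.0    # average count of the bright bins (the "white" side)
finalResult = ((bl + 1) / (wh + 1)) / 10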

My question here is: is there some other way to read the histogram and compare the data, or some other method altogether? Reading the documentation does not help me.

Scowl answered 9/11, 2016 at 19:24 Comment(3)
Is the region of interest predefined and stable? If I understand correctly, you are working with a greyscale picture/video and you want to detect motion in the sense of a "rapid change of pixel values"?Triage
Yes, this region is predefined. I tried to extract values from the histogram, because there are only two colors (black and white), but with no luck, so I went with the first half of the histogram for black and the second half for white. My question is whether another method of achieving this exists. My main goal is simply to detect movement in a certain region, as simply as possible, and notify the user.Scowl
I think there is a nearly endless number of different methods. For example, you could keep the last x frames and calculate a "mean" picture, which you then use to compute a difference picture. If the difference in any region, such as an n*m patch, is higher than a threshold, you can call it movement and report it. This could also be used to identify the region of movement without having to explicitly define a ROI (a quick sketch of this idea follows below).Triage
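
A minimal sketch of that rolling-mean idea (the camera source, window length, patch size and threshold below are purely illustrative):

import cv2
import numpy as np
from collections import deque

cam = cv2.VideoCapture(0)      # illustrative source; any frame grabber works
history = deque(maxlen=10)     # the last x frames

while True:
    ok, frame = cam.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    if len(history) == history.maxlen:
        mean = np.mean(history, axis=0).astype(np.float32)   # "mean" picture of the last x frames
        diff = cv2.absdiff(gray, mean)                        # difference picture
        patch_mean = cv2.boxFilter(diff, -1, (16, 16))        # average difference per 16x16 patch
        if patch_mean.max() > 25:                             # illustrative threshold
            print("movement detected")
    history.append(gray)

cam.release()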

A differential image is the result of subtracting two images from each other.

So a differential image shows the difference between two images, and with those differences you can make movement visible.

In the following script we use a differential image calculated from three consecutive frames. The advantage of this is that the uninteresting background is removed from the result.

OpenCV offers the possibility to subtract two images from each other using absdiff(). Logical operations on two images are also already implemented; we use bitwise_and() to obtain the final differential image. In Python it looks like this:

def diffImg(t0, t1, t2):
  d1 = cv2.absdiff(t2, t1)
  d2 = cv2.absdiff(t1, t0)
  return cv2.bitwise_and(d1, d2)

The last thing we have to do is bring the differential image function into our previous script. Before the loop starts we read the first three images t_minus, t and t_plus and convert them into greyscale images, as we don't need the colour information. With those images it is possible to start calculating differential images. After showing the differential image, we just get rid of the oldest image and read the next one. The final script looks like this:

import cv2

def diffImg(t0, t1, t2):
  d1 = cv2.absdiff(t2, t1)
  d2 = cv2.absdiff(t1, t0)
  return cv2.bitwise_and(d1, d2)

cam = cv2.VideoCapture(0)

winName = "Movement Indicator"
cv2.namedWindow(winName, cv2.WINDOW_AUTOSIZE)

# Read three images first (VideoCapture returns BGR frames, so convert with COLOR_BGR2GRAY):
t_minus = cv2.cvtColor(cam.read()[1], cv2.COLOR_BGR2GRAY)
t = cv2.cvtColor(cam.read()[1], cv2.COLOR_BGR2GRAY)
t_plus = cv2.cvtColor(cam.read()[1], cv2.COLOR_BGR2GRAY)

while True:
  cv2.imshow( winName, diffImg(t_minus, t, t_plus) )

  # Read next image
  t_minus = t
  t = t_plus
  t_plus = cv2.cvtColor(cam.read()[1], cv2.COLOR_BGR2GRAY)

  key = cv2.waitKey(10)
  if key == 27:          # Esc quits
    cam.release()
    cv2.destroyWindow(winName)
    break

print("Goodbye")

Here you will find a more elaborate answer to what you are looking for.

Bookkeeping answered 21/11, 2016 at 10:35 Comment(1)
After modifying Adrian's code a bit and removing the unnecessary parts, I managed to make this work. Simple and effective, with a low impact on resources (1% CPU and 30 MB of memory). Instead of showing the images on screen, notifying works like a charm at line 69 of his script. Thanks for this. I think the code can be optimized a bit more for my purpose, but it works for now.Scowl

One way to detect movement is to keep a running average of your scene using cv2.accumulateWeighted. Then, compare every new frame to the average using cv2.absdiff to get the image that indicates changes in the scene.

I did exactly this in a video processing project of mine. Check out the main loop in file diffavg1.py where I run the accumulator and perform the diff.

(The research goal of the project was to achieve real-time video processing on a multi-core CPU architecture, so the later versions diffavg2.py, diffavg3.py and diffavg4.py are progressively higher-performance implementations, but the underlying accumulate-diff algorithm is the same.)
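
A minimal sketch of the accumulate-and-diff idea (this is only an illustration, not the code from diffavg1.py; the capture source and the alpha value are placeholders):

import cv2
import numpy as np

cam = cv2.VideoCapture(0)                      # placeholder source
ok, frame = cam.read()
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
avg = np.float32(gray)                         # running average of the scene

while ok:
    cv2.accumulateWeighted(gray, avg, 0.05)    # update the running average (alpha = 0.05 is illustrative)
    diff = cv2.absdiff(gray, cv2.convertScaleAbs(avg))   # changes against the average scene
    cv2.imshow("diff", diff)
    if cv2.waitKey(10) == 27:                  # Esc quits
        break
    ok, frame = cam.read()
    if ok:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

cam.release()
cv2.destroyAllWindows()

Thresholding diff (for example with cv2.threshold plus cv2.countNonZero over the ROI) then gives a single number to compare against a movement threshold.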

Euchromosome answered 16/11, 2016 at 17:43 Comment(1)
Nice one. I managed to get some values, but it needs a little work for my problem. I noticed that the CPU and memory usage are lower than with my custom script.Scowl

It can be done with ecapture.

Installation

pip install ecapture

Code

from ecapture import motion as md

md.motion_detect(0,"x")
print("detected")

This code will print

detected

once there is movement in the camera's field of view.

Unrestrained answered 8/5, 2019 at 16:22 Comment(0)
