I've integrated an IP camera with OpenCV in Python to process the live stream frame by frame. I've configured the camera for 1 FPS, so I get one frame per second into the buffer, but my algorithm takes 4 seconds to process each frame. Unprocessed frames therefore pile up in the buffer, and the delay keeps growing over time. To sort this out, I created a second thread that calls VideoCapture.grab() in a loop to drain the buffer; each call advances the pointer towards the latest frame. In the main thread I call retrieve(), which gives me the last frame grabbed by the other thread. This design fixes the frame stagnation and removes the growing delay, but a constant delay of 12-13 seconds remains. I suspect that when retrieve() is called it does not return the latest frame, but the 4th or 5th frame behind it. Is there an API in OpenCV, or another design pattern, that would let me always process the latest frame?
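For reference, here is a minimal sketch of the two-thread design I described, written against a generic frame source (the class name LatestFrameGrabber and the source interface are illustrative, not OpenCV API; with OpenCV the source would be a cv2.VideoCapture, whose read() has the same (ok, frame) shape):

```python
import threading

class LatestFrameGrabber:
    """Background thread keeps overwriting one slot with the newest frame.

    `source` is anything with read() -> (ok, frame); e.g. a cv2.VideoCapture.
    """

    def __init__(self, source):
        self.source = source
        self.lock = threading.Lock()
        self.latest = None
        self.running = True
        self.thread = threading.Thread(target=self._drain, daemon=True)
        self.thread.start()

    def _drain(self):
        # Consume frames as fast as the source produces them, so no
        # driver-side buffer can build up; only the newest frame is kept.
        while self.running:
            ok, frame = self.source.read()
            if not ok:
                break
            with self.lock:
                self.latest = frame

    def read_latest(self):
        # The slow processing loop calls this; it always sees the newest frame.
        with self.lock:
            return self.latest

    def stop(self):
        self.running = False
        self.thread.join()
```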
OpenCV-Python: How to get latest frame from the live video stream or skip old ones
Why do you want a big buffer when your algorithm consumes frames at a slower rate than they are produced? My suggestion would be to use a buffer with only two image slots: one for writing from the camera (write buffer, one image only) and one for reading during processing (read buffer, one image only). Overwrite the write buffer on each new image from the camera. –
Maness
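A minimal sketch of the two-slot idea from the comment above (all names here are illustrative; any object can stand in for an image):

```python
import threading

class TwoSlotBuffer:
    """One write slot the camera keeps overwriting, one read slot for processing."""

    def __init__(self):
        self.lock = threading.Lock()
        self.write_slot = None

    def write(self, frame):
        # Camera thread: always overwrite, so stale frames never accumulate.
        with self.lock:
            self.write_slot = frame

    def swap_for_read(self):
        # Processing thread: take whatever is newest and clear the slot.
        with self.lock:
            frame, self.write_slot = self.write_slot, None
        return frame
```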
@Maness Can you please tell me how to reduce the buffer size? I tried video.set(cv2.CAP_PROP_BUFFERSIZE, 1) on my Raspberry Pi with Ubuntu 16.04. It resulted in the message "VIDEOIO ERROR: V4L2: setting property #38 is not supported True" –
Nonsense
There are some good answers with detailed explanations (and workarounds) in "c++ - OpenCV VideoCapture lag due to the capture buffer - Stack Overflow"; however, those answers are in C++ and you would have to port them to Python. –
Christianna
If you don't mind compromising on speed, you can create a Python generator which opens the camera and yields frames.
    import cv2

    def ReadCamera(Camera):
        while True:
            cap = cv2.VideoCapture(Camera)
            (grabbed, frame) = cap.read()
            cap.release()  # release before reopening on the next iteration
            if grabbed:
                yield frame
Now, when you want to process a frame:

    for frame in ReadCamera(Camera):
        .....
This works perfectly fine, except that opening and closing the camera on every frame adds time.
The best way to achieve this is by using a thread; here is my code to do that.
"""
This module contains the Streamer class, which is responsible for streaming the video from the RTSP camera.
Capture the video from the RTSP camera and store it in the queue.
NOTE:
You can preprocess the data before flow from here
"""
import cv2
from queue import Queue
import time
from env import RESOLUTION_X, RESOLUTION_Y,FPS
from threading import Thread
class Streamer:
def __init__(self,rtsp):
"""
Initialize the Streamer object, which is responsible for streaming the video from the RTSP camera.
stream (cv2.VideoCapture): The VideoCapture object.
rtsp (str): The RTSP url.
Q (Queue): The queue to store the frame.
running (bool): The flag to indicate whether the Streamer is running or not.
Args:
rtsp (str): The RTSP url.
"""
print("Creating Streamer object for",rtsp)
self.stream = cv2.VideoCapture(rtsp)
self.rtsp = rtsp
#bufferless VideoCapture
# self.stream.set(cv2.CAP_PROP_BUFFERSIZE, 1)
# self.stream.set(cv2.CAP_PROP_FPS, 10)
self.stream.set(cv2.CAP_PROP_FRAME_WIDTH, RESOLUTION_X)
self.stream.set(cv2.CAP_PROP_FRAME_HEIGHT, RESOLUTION_Y)
self.Q = Queue(maxsize=2)
self.running = True
print("Streamer object created for",rtsp)
def info(self):
"""
Print the information of the Streamer.
"""
print("==============================Stream Info==============================")
print("| Stream:",self.rtsp,"|")
print("| Queue Size:",self.Q.qsize(),"|")
print("| Running:",self.running,"|")
print("======================================================================")
def get_processed_frame(self):
"""
Get the processed frame from the Streamer.
Returns:
dict: The dictionary containing the frame and the time.
"""
if self.Q.empty():
return None
return self.Q.queue[0]
def release(self):
"""
Release the Streamer.
"""
self.stream.release()
def stop(self):
"""
Stop the Streamer.
"""
print("Stopping",self.stream,"Status",self.rtsp)
self.running = False
def start(self):
"""
Start the Streamer.
"""
print("Starting streamer",self.stream, "Status",self.running)
while self.running:
# FOR VIDEO CAPTURE and TESTING FRAME BY FRAME REMOVE THIS COMMENT
# while self.Q.full():
# time.sleep(0.00001)
ret, frame = self.stream.read()
# print(frame,ret)
if not ret:
print("NO Frame for",self.rtsp)
continue
frame =cv2.resize(frame,(RESOLUTION_X,RESOLUTION_Y))
# exit()
if not self.Q.full():
print("Streamer PUT",self.Q.qsize())
self.Q.put({"frame":frame,"time":time.time()})
print("Streamer PUT END",self.Q.qsize())
# exit()
# time.sleep(1/FPS)
self.release()
if __name__ == "__main__":
streamer = Streamer("rtsp://localhost:8554/105")
thread = Thread(target=streamer.start)
thread.start()
while streamer.running:
data = streamer.get_processed_frame()
if data is None:
continue
frame = data["frame"]
cv2.imshow("frame",frame)
cv2.waitKey(1)
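If you would rather pop frames from the queue than peek at it, a small stdlib-only helper (illustrative, not part of OpenCV) can drain a queue.Queue down to its newest entry, so a slow processing loop never reads a stale frame:

```python
import queue

def drain_to_latest(q):
    """Remove everything currently in q and return the newest item (None if empty)."""
    latest = None
    while True:
        try:
            latest = q.get_nowait()
        except queue.Empty:
            return latest
```

Calling drain_to_latest(streamer.Q) in the main loop would discard any older queued frame and hand back only the most recent one.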