OpenCV VideoCapture lag due to the capture buffer

I am capturing video from a webcam that serves an MJPEG stream. I do the capture in a worker thread, and I start it like this:

const std::string videoStreamAddress = "http://192.168.1.173:80/live/0/mjpeg.jpg?x.mjpeg";
qDebug() << "start";
cap.open(videoStreamAddress);
qDebug() << "really started";
cap.set(CV_CAP_PROP_FRAME_WIDTH, 720);
cap.set(CV_CAP_PROP_FRAME_HEIGHT, 576);

The camera feeds the stream at 20 fps. But if I read at 20 fps like this:

if (!cap.isOpened()) return;

Mat frame;
cap >> frame; // get a new frame from camera

mutex.lock();
m_imageFrame = frame;
mutex.unlock();

Then there is a 3+ second lag. The reason is that captured frames are first stored in a buffer. When I start the camera, the buffer fills up, but I do not read the frames out, so reading from it always gives me old frames. The only solution I have now is to read at 30 fps so the buffer is drained quickly and the serious lag disappears.

Is there any other possible solution so that I could clean/flush the buffer manually each time I start the camera?

Radii answered 4/5, 2015 at 13:59 Comment(6)
Why do you want to limit to 20fps? Are you waiting in the worker thread?Account
is that buffer your own one or something within cv::VideoCapture?Rosales
@mirosval, yes, I did so because I don't want too much CPU...Radii
video_capture.set(cv2.CAP_PROP_POS_FRAMES, 0) before every video_capture.read() call helps me to get the latest frames from a USB camera with Python 3, OpenCV 4.2 and GStreamer. Whereas CAP_PROP_BUFFERSIZE gives a GStreamer unhandled property warningSubjoinder
Setting video_capture.set(cv2.CAP_PROP_POS_FRAMES,0) before every video_capture.read() actually made my video stream lag even more...Anklebone
RTSP-specific question (with some workarounds): python - IP Camera Capture RTSP stream big latency OPENCV - Stack OverflowAmory
A
49

OpenCV Solution

According to this source, you can set the buffer size of a cv::VideoCapture object.

cv::VideoCapture cap;
cap.set(CV_CAP_PROP_BUFFERSIZE, 3); // internal buffer will now store only 3 frames

// rest of your code...

There is an important limitation however:

CV_CAP_PROP_BUFFERSIZE Amount of frames stored in internal buffer memory (note: only supported by DC1394 v 2.x backend currently)

Update from comments. In newer versions of OpenCV (3.4+), the limitation seems to be gone and the code uses scoped enumerations:

cv::VideoCapture cap;
cap.set(cv::CAP_PROP_BUFFERSIZE, 3);

Hackaround 1

If the solution does not work, take a look at this post that explains how to hack around the issue.

In a nutshell: the time needed to query a frame is measured; if it is too low, it means the frame was read from the buffer and can be discarded. Continue querying frames until the time measured exceeds a certain limit. When this happens, the buffer was empty and the returned frame is up to date.

(The answer on the linked post shows: returning a frame from the buffer takes about 1/8th the time of returning an up to date frame. Your mileage may vary, of course!)
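The idea can be sketched as a small helper (the function and parameter names here are my own, not from the linked post) that times each read and discards frames that return suspiciously fast:

```python
import time

def flush_stale_frames(read_frame, fps=20.0, threshold=0.5, max_reads=100):
    """Read frames until one takes at least `threshold` frame periods.

    `read_frame` is any callable returning a frame (e.g. a wrapper around
    cap.read()). Reads that return almost instantly are assumed to come
    from the driver's buffer and are discarded.
    """
    frame_period = 1.0 / fps
    frame = None
    for _ in range(max_reads):
        start = time.monotonic()
        frame = read_frame()
        if time.monotonic() - start >= threshold * frame_period:
            break  # this read had to wait, so the buffer was empty
    return frame
```

The 0.5 threshold is only a starting point that loosely mirrors the "about 1/8th the time" observation; tune it for your camera.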


Hackaround 2

A different solution, inspired by this post, is to create a third thread that continuously grabs frames at high speed, keeping the buffer empty. This thread should use cv::VideoCapture::grab() to avoid decoding overhead.

You could use a simple spin-lock to synchronize reading frames between the real worker thread and the third thread.
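A minimal sketch of this pattern (using a plain mutex rather than a spin-lock, and a grabber function whose names are illustrative; `capture` is only assumed to expose grab()/retrieve() like cv::VideoCapture):

```python
import threading

class LatestFrame:
    """Shared slot that always holds only the most recent frame."""
    def __init__(self):
        self._lock = threading.Lock()
        self._frame = None

    def put(self, frame):
        with self._lock:
            self._frame = frame

    def get(self):
        with self._lock:
            return self._frame

def run_grabber(capture, store, stop_event):
    """Drain the capture as fast as possible, keeping only the newest frame.

    grab() is cheap; retrieve() does the (expensive) decode, so a real
    variant may retrieve only every Nth frame.
    """
    while not stop_event.is_set():
        if not capture.grab():
            break  # stream ended or camera error
        ok, frame = capture.retrieve()
        if ok:
            store.put(frame)
```

Run `run_grabber` in the extra thread; the real worker thread then calls `store.get()` whenever it needs a frame and always sees the latest one.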

Advert answered 4/5, 2015 at 14:39 Comment(11)
I actually wonder is there any way to tell me whether the buffer is empty instead of measuring the time. It is quite inconvenient...Radii
The cv::VideoCapture interface does not allow you to acquire that information. Another solution is creating a different thread that continuously grabs frames (with the cv::VideoCapture.grab() function) at a high speed. This will ensure the buffer to be empty when the real worker thread reads the next frame (and don't forget to synchronize those threads when reading frames, of course).Advert
Thanks, that's what I am doing now.Radii
Unfortunately looks like that constant is not in Python opencv: run [thing for thing in dir(cv) if thing.find("CAP_")>-1 ]Freed
What does "only supported by DC1394 v 2.x backend" mean? Is that a type of camera?Ramin
@bakalolo It is a high level API; for more information, take a look at the FAQ on the website.Advert
the second link is deadNewscast
How to setup/install DC1394 v 2.x backend for python-opvencv version 4.1.2 in Ubuntu 18.04?Grondin
CAP_PROP_BUFFERSIZE seems to be working in cv2.CAP_V4L2 backend ...Nodus
Since CAP_PROP_BUFFERSIZE is related to backend dc1394 (which is a camera capturing backend), does it only affect reading from webcams directly? If I am receiving a h264 stream over udp, this won't help at all right? Is there any other buffering property that I should look into for that use case?Anklebone
Apart from setting CAP_PROP_BUFFERSIZE to 1, I also need to call grab() 2 times before read() to get the most updated frame. It causes little to no cpu/time overhead for me.Alternately
A
6

Guys, this is a pretty stupid and nasty solution, but the accepted answer didn't help me for some reason. (The code is in Python, but the essence is pretty clear.)

import cv2
import matplotlib.pyplot as plt
import numpy as np

# vcap.set(cv2.CAP_PROP_BUFFERSIZE, 1)  # didn't work for me
data = np.zeros((1140, 2560))
image = plt.imshow(data)

while True:
    # re-opening the capture on every iteration discards the stale buffer
    vcap = cv2.VideoCapture("rtsp://admin:@192.168.3.231")
    ret, frame = vcap.read()
    image.set_data(frame)
    plt.pause(0.5) # any other consuming operation
    vcap.release()
Andean answered 6/7, 2018 at 9:38 Comment(4)
cv2.VideoCapture("rtsp://admin:@192.168.3.231") creates a new object every time, which is very slowCulminant
Also using matplotlib for showing videos is a poor design choiceOverlying
this is the only method that has worked for me so far, but in my case it dropped from 7 fps to 5 fps so definitely not idealRastus
@Culminant True, but the new object is the reason it works - it gets rid of the bad buffer with the old object. Anyways, this hack worked for me but is indeed extremely slow.Erysipelas
S
4

An implementation of Hackaround 2 from Maarten's answer, in Python. It starts a thread and keeps the latest frame from camera.read() as a class attribute. A similar strategy can be used in C++:

import threading
import cv2

# Thread that continuously pulls frames from the camera so the
# internal buffer never holds stale frames
class CameraBufferCleanerThread(threading.Thread):
    def __init__(self, camera, name='camera-buffer-cleaner-thread'):
        self.camera = camera
        self.last_frame = None
        super(CameraBufferCleanerThread, self).__init__(name=name)
        self.daemon = True  # let the program exit even while this thread runs
        self.start()

    def run(self):
        while True:
            ret, self.last_frame = self.camera.read()

# Start the camera
camera = cv2.VideoCapture(0)

# Start the cleaning thread
cam_cleaner = CameraBufferCleanerThread(camera)

# Use the frame whenever you want
while True:
    if cam_cleaner.last_frame is not None:
        cv2.imshow('The last frame', cam_cleaner.last_frame)
    cv2.waitKey(10)
Sabir answered 8/12, 2020 at 0:59 Comment(3)
Using threading.Lock() to synchronize access to last_frame would be safer.Attainder
For everyone who will try to use this code - remember that your code won't end if you press Ctr-C because there is still not finished thread so you need to implement some kind of clean up.Limn
doesn't work for me, just prints Expected boundary '--' not found, instead found a line of 21 bytesRastus
U
3

You can make sure that grabbing the frame took a measurable amount of time. It is quite simple to code, though a bit unreliable; potentially, this loop could spin forever if the timing condition is never met.

#include <chrono>

using Clock = std::chrono::high_resolution_clock;
using FloatSeconds = std::chrono::duration<float>;
// ...
while (true) {
    const Clock::time_point time_start = Clock::now();
    camera.grab();
    // If grab() had to wait more than half a frame period, the buffer is empty
    if (FloatSeconds(Clock::now() - time_start).count() * camera.get(cv::CAP_PROP_FPS) > 0.5f) {
        break;
    }
}
camera.retrieve(dst_image);

The code uses C++11.

Ulterior answered 24/4, 2016 at 10:46 Comment(1)
According to docs The primary use of the function is in multi-camera environments, especially when the cameras do not have hardware synchronization. That is, you call VideoCapture::grab() for each camera and after that call the slower method VideoCapture::retrieve() to decode and get frame from each camera. This way the overhead on demosaicing or motion jpeg decompression etc. is eliminated and the retrieved frames from different cameras will be closer in time. That is not the fix. But I upvoted it.Zechariah
S
3

There is an option to drop old buffers if you use a GStreamer pipeline: the appsink drop=true option "drops old buffers when the buffer queue is filled". In my particular case there is an occasional delay during live-stream processing, so I need the latest frame on every VideoCapture::read call.

#include <chrono>
#include <thread>

#include <opencv4/opencv2/highgui.hpp>

static constexpr const char * const WINDOW = "1";

void video_test() {
    // It doesn't work properly without `drop=true` option
    cv::VideoCapture video("v4l2src device=/dev/video0 ! videoconvert ! videoscale ! videorate ! video/x-raw,width=640 ! appsink drop=true", cv::CAP_GSTREAMER);

    if(!video.isOpened()) {
        return;
    }

    cv::namedWindow(
        WINDOW,
        cv::WINDOW_GUI_NORMAL | cv::WINDOW_NORMAL | cv::WINDOW_KEEPRATIO
    );
    cv::resizeWindow(WINDOW, 700, 700);

    cv::Mat frame;
    const std::chrono::seconds sec(1);
    while(true) {
        if(!video.read(frame)) {
            break;
        }
        std::this_thread::sleep_for(sec);
        cv::imshow(WINDOW, frame);
        cv::waitKey(1);
    }
}
Subjoinder answered 24/3, 2020 at 12:40 Comment(0)
A
-1

If you know the frame rate of your camera, you can use it (e.g. 30 frames per second) to grab frames until the grab slows down. This works because once grab() takes longer than one frame period, every frame in the buffer has been consumed and OpenCV has to wait for the next frame to arrive from the camera.

import time

while True:
    prev_time = time.time()
    ret = vid.grab()
    if (time.time() - prev_time) > 0.030:  # roughly one frame period at ~33 FPS
        break
ret, frame = vid.retrieve()
Age answered 12/7, 2019 at 20:0 Comment(1)
prev_time=time.time() should be moved outside of the while loopUnited

© 2022 - 2024 — McMap. All rights reserved.