Capturing camera image with v4l2 very slow
I've been working on using v4l2 directly to grab a camera image in OpenCV. This works very nicely: I can grab an image in YUYV format at a high resolution (understanding that the framerate will drop), which I couldn't manage with the OpenCV implementation. Functionally it's working great, but the performance could be much better. Since this is my first time using v4l2 directly, it's still a bit vague to me.

I've been timing all the relevant parts and saw that the v4l2 select call takes a bit more than a second. When I lower the time interval, select takes less time, but then the dequeueing takes much longer (also about a second). The camera is initialised in other functions, which set the right format etc. I understand that the framerate will be low with no compression and a high resolution, but this is extremely low.

Below is the capture image function. I skipped the code in which the buffer is transformed to a Mat (YUYV -> RGB), because I think that it's not relevant for now.

Does anybody know how to make v4l2 capture images much faster? Maybe there are parts that I should not perform on every frame grab?

Thank you!

Mat Camera::capture_image() {
    Mat returnframe(10, 10, CV_8UC3);

    struct v4l2_buffer buf = {0};
    buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    buf.memory = V4L2_MEMORY_MMAP;
    buf.index = 0;
    if (-1 == xioctl(fd, VIDIOC_QBUF, &buf)) {
        perror("Queue Buffer");
        return returnframe;
    }

    if (-1 == xioctl(fd, VIDIOC_STREAMON, &buf.type)) {
        perror("Start Capture");
        return returnframe;
    }

    fd_set fds;
    FD_ZERO(&fds);
    FD_SET(fd, &fds);
    struct timeval tv = {0};
    tv.tv_sec = 2;
    int r = select(fd + 1, &fds, NULL, NULL, &tv);
    if (-1 == r) {
        perror("Waiting for Frame");
        return returnframe;
    }

    if (-1 == xioctl(fd, VIDIOC_DQBUF, &buf)) {
        perror("Retrieving Frame");
        return returnframe;
    }

    // code here for converting to Mat

    if (-1 == xioctl(fd, VIDIOC_STREAMOFF, &buf.type)) {
        perror("Stop Capture");
        return returnframe;
    }

    // copy Mat and free bigbuffer, to avoid memory leak
    Mat returnImg = dispimg.clone();
    free(bigbuffer);
    return returnImg;
}
Catamite answered 26/3, 2015 at 8:0
You are starting the capture for each frame? That looks like a lot of overhead. – Barrister

It seems that for each frame you are calling VIDIOC_STREAMON and VIDIOC_STREAMOFF; this adds a lot of overhead (it's almost like restarting your application for each frame).

the proper way would be:

  • open device (called only once): at the beginning of your capture session (e.g. at program start), set up your video device and start streaming by calling VIDIOC_STREAMON

  • capture frame (called multiple times): for each frame you want to capture, request the frame by calling only DQBUF/QBUF (this is pretty fast, as the device continuously streams data into the buffer queue); you still need to call select in order to know when a new frame is available

  • close device (called only once): once you are done, stop streaming by calling VIDIOC_STREAMOFF

Awestricken answered 26/3, 2015 at 20:35
Thanks for your reply! So I do not use the select function any more? And do you mean I have to choose between DQBUF and QBUF, or do you mean I should use both? – Catamite
No, you still use select (to know when a frame is ready), and DQBUF (dequeue buffer) and QBUF (queue buffer) belong together (you use both); the point is that you must not call STREAMON in the capture-frame function, but in the open-device function. – Barrister
Ok great. This makes a big difference! Does the order matter? Thank you so much! – Catamite
Hi, I use 30 see3cam 130 cameras for an industrial solution. I get the big latency. – Russel
