How to synchronize two USB cameras to use them as Stereo Camera?

I am trying to implement object detection using stereo vision in OpenCV, with two Logitech C310 cameras. But I am not getting synchronized frames from the two cameras, and the time difference between the two cameras' frame captures is not constant either.

  • How can synchronization be done?

  • In stereo cameras like the Bumblebee, the Minoru, etc., do we need to synchronize?


Thanks for your response.

I am trying to implement person tracking on a moving robotic platform. I am using cvQueryFrame(capture) to capture frames from both cameras, one by one, in a loop. Here is the relevant part of the code:

CvCapture* capture_1 = cvCreateCameraCapture(0);
CvCapture* capture_2 = cvCreateCameraCapture(1);
IplImage* frame_1;
IplImage* frame_2;
for (int i = 1; i <= 20; i++)
{
    frame_1 = cvQueryFrame(capture_1);
    frame_2 = cvQueryFrame(capture_2);

    // processing of frames
}

Even if someone moves at moderate speed in front of the cameras, the difference between frame_1 and frame_2 is visible.


Is this delay because of cvQueryFrame(capture)?

Melisandra answered 10/2, 2014 at 7:17 Comment(2)
I am trying to solve the same problem – my question is here: #25162420. I also get a variable frame offset with an average of about 150 ms. – Coptic
The Bumblebee is synced pretty decently. – Genvieve

TL;DR

See my last code snippet "A simple workaround". That's how I did it.


Although I worked with VideoCapture rather than CvCapture, and in Python rather than C++, my solution might still apply to your problem. I also wanted to capture synchronized stereo images with OpenCV.

A naive attempt might be:

import cv2

vidStreamL = cv2.VideoCapture(0)
vidStreamR = cv2.VideoCapture(2)

_, imgL = vidStreamL.read()
_, imgR = vidStreamR.read()

vidStreamL.release()
vidStreamR.release()

Problem 1: The second camera is only triggered after the first image is captured and retrieved from the camera, which takes some time.

A better way is to grab the frames first (telling both cameras to pin down their current frames) and to retrieve them afterwards:

vidStreamL = cv2.VideoCapture(0)
vidStreamR = cv2.VideoCapture(2)

vidStreamL.grab()
vidStreamR.grab()
_, imgL = vidStreamL.retrieve()
_, imgR = vidStreamR.retrieve()

vidStreamL.release()
vidStreamR.release()

Problem 2: I still measured differences of about 200 ms (by filming a clock with millisecond resolution). The reason is an internal capture buffer, described here. Unfortunately, it cannot always be easily deactivated (at least in my OpenCV version).
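(An aside not from the original answer: depending on your OpenCV version and capture backend, you can try shrinking this buffer via the CAP_PROP_BUFFERSIZE property. Only some backends honor it, so treat this as an experiment rather than a guaranteed fix. A minimal sketch:)

import cv2

vidStreamL = cv2.VideoCapture(0)

# CAP_PROP_BUFFERSIZE is only honored by some capture backends;
# set() returns False if the backend does not support the property.
ok = vidStreamL.set(cv2.CAP_PROP_BUFFERSIZE, 1)
print("buffer size request accepted:", ok)

vidStreamL.release()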

A simple workaround is to grab frames multiple times until the capture buffer is empty before retrieving the actual images:

vidStreamL = cv2.VideoCapture(0)
vidStreamR = cv2.VideoCapture(2)

for i in range(10):
    vidStreamL.grab()
    vidStreamR.grab()
_, imgL = vidStreamL.retrieve()
_, imgR = vidStreamR.retrieve()

vidStreamL.release()
vidStreamR.release()

This solution works well for my case. I couldn't see any measurable difference (< 10 ms).
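(Another aside not from the original answer: if you don't want to film a clock, you can at least bound the host-side offset by timing the two grab() calls. This measures call latency, not the true sensor exposure times, so it is only a rough sanity check.)

import time
import cv2

vidStreamL = cv2.VideoCapture(0)
vidStreamR = cv2.VideoCapture(2)

# Drain the capture buffers as above before measuring.
for i in range(10):
    vidStreamL.grab()
    vidStreamR.grab()

t0 = time.perf_counter()
vidStreamL.grab()
t1 = time.perf_counter()
vidStreamR.grab()
t2 = time.perf_counter()

# The second camera is triggered roughly (t1 - t0) after the first.
print("grab() durations: %.1f ms / %.1f ms" % ((t1 - t0) * 1e3, (t2 - t1) * 1e3))

vidStreamL.release()
vidStreamR.release()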

(Problem 3:) Technically, the cameras are still not synchronized. Normal USB webcams won't be able to do that. But more professional cameras often have an external trigger to actually control when to start capturing a frame. This is beyond the scope of this post.

Seepage answered 1/8, 2019 at 12:47 Comment(2)
What effect does grabbing the frames multiple times have on the FPS? Let's say both cameras can do 720p @ 60 fps; does your code cut this down to 6 fps? Or have I misunderstood the setup? – Inoculation
@SamHammamy I'm not 100% sure, but your assumption sounds reasonable. I didn't need high frame rates, just single synchronised images. In your case, you might need to experiment with different numbers of grabs. I ended up using 5. – Seepage

This is not a coding solution and requires hardware changes.

Taking one camera as the reference, reading the other camera into memory, and reading it out with enough delay to stay synchronous with the first turned out to be a lot of trouble, and it crashed whenever there was too much movement.

As a result, I bought an HDMI switch with picture-in-picture (PIP) and selected the left and right frames to be displayed side by side. This produces a live stereo image and doesn't crash every 30 seconds. The single output stream can then be manipulated in OpenCV to resize, stretch or crop it.
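(A hypothetical sketch, not from the original answer: if the switch's side-by-side output shows up as a single capture device, the combined frame can be split into left and right halves in OpenCV. The device index and the left/right ordering here are assumptions.)

import cv2

cap = cv2.VideoCapture(0)  # assumed index of the HDMI capture device

ok, frame = cap.read()
if ok:
    h, w = frame.shape[:2]
    left = frame[:, :w // 2]    # assumes the left camera fills the left half
    right = frame[:, w // 2:]   # assumes the right camera fills the right half

cap.release()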

Stage 2 of this hardware-based plan is to find something that does the same job but is physically smaller. I have found this in the analogue domain with a drone 3D camera kit. Drones use analogue video to avoid latency, since latency could cause a crash when the operator is flying first-person view with goggles.

Sorry for the inconvenience of a hardware solution, but I am really fed up with trying to get a Raspberry Pi to do something that should be easy but isn't. Posting it in case it saves someone else from wasting too much time and they are able to look for alternative solutions.

Persian answered 5/11, 2022 at 22:2 Comment(0)

As far as I can remember, the two cameras of the Minoru are not synchronized. The exact communication pattern you need to implement will depend on the driver used to access the cameras, so it will be platform dependent; you need to provide more information to get accurate answers.

Now, another question is: do you really need synchronization? I understand that stereovision is obviously simpler when the cameras are perfectly in sync, but at something like 30 fps (or even 15), an object needs to move really fast before noticeable distortion occurs.
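(A back-of-envelope check with illustrative numbers, not from the original answer: at 30 fps the worst-case capture offset between unsynced cameras is one frame period, about 33 ms. This gives a feel for how speed, depth, and frame rate trade off.)

# Illustrative numbers only: speed, depth and focal length are assumptions.
fps = 30.0
dt = 1.0 / fps          # worst-case offset between cameras: ~33 ms
speed = 1.0             # lateral object speed in m/s
depth = 2.0             # object distance in m
f_px = 700.0            # focal length in pixels
shift_px = f_px * (speed * dt) / depth
print("worst-case image shift: %.1f px" % shift_px)  # ~11.7 px of disparity error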

Monochord answered 10/2, 2014 at 8:7 Comment(1)
Moving objects or camera translations are probably no big deal at 30 fps, but camera rotations can easily become a problem if the cameras are not synced. It all depends on the accuracy you need, of course. – Acerbity

I agree with the other comments that achieving near-perfect synchronization requires hardware support. But there were a few valid points in this thread that were interesting for my project, and I wanted to see how well two webcams could be synchronized through the OpenCV software interface alone.

For this experiment, I used two onn Surf webcams connected to a ThinkPad P1 with an Intel i7 processor. I then pointed both webcams at my smartphone's timer app, captured an image from each, and compared the difference between the times recorded in the two frames.

Using the following code:

VideoCapture cap1(0);
VideoCapture cap2(1);

Mat frame1;
cap1 >> frame1;
Mat frame2;
cap2 >> frame2;

imshow("Webcam1 frame", frame1);
imshow("Webcam2 frame", frame2);

I saw timing differences ranging from 0 to 130 milliseconds.

Using the following code, as suggested by @Falko but adapted to C++:

cap1.grab();
cap2.grab();
Mat frame1;
cap1.retrieve(frame1);
Mat frame2;
cap2.retrieve(frame2);

imshow("Webcam1 frame", frame1);
imshow("Webcam2 frame", frame2);

I saw timing differences ranging from 0 to 60 milliseconds.

So, depending on what level of synchronization you're trying to achieve, the grab() and retrieve() methods may be sufficient.

Vise answered 14/11, 2023 at 3:35 Comment(1)
You need threading to handle two cameras... – Burke

You need to use threading to get frames from two cameras at the same time; otherwise you will get lag in the output and also frame loss. Please check the following C++ code, which might help in this case:

#include <opencv2/opencv.hpp>
#include <iostream>
#include <string>
#include <thread>

using namespace cv;
using namespace std;

// Capture and display frames from one camera until Esc is pressed.
// Note: HighGUI calls (imshow/waitKey) are not guaranteed to be
// thread-safe; for anything beyond a quick test, move the display
// to the main thread and keep only the capture in worker threads.
void captureCamera(VideoCapture& cap, const string& windowName) {
    Mat frame;
    while (true) {
        cap >> frame;
        if (frame.empty()) {
            cerr << "Failed to capture frame for " << windowName << "\n";
            break;
        }
        // Process or display the frame as needed
        imshow(windowName, frame);
        if (waitKey(30) == 27) // press Esc to exit
            break;
    }
}

int main() {
    // Open two video captures for the cameras
    VideoCapture cap1(0); // adjust the camera index as needed
    VideoCapture cap2(1);

    // Check if the cameras opened successfully
    if (!cap1.isOpened() || !cap2.isOpened()) {
        cerr << "Error opening cameras\n";
        return -1;
    }

    // Create one capture thread per camera
    thread thread1(captureCamera, ref(cap1), string("Camera 1"));
    thread thread2(captureCamera, ref(cap2), string("Camera 2"));

    // Wait for threads to finish
    thread1.join();
    thread2.join();

    // Release video captures
    cap1.release();
    cap2.release();

    destroyAllWindows();

    return 0;
}
Burke answered 20/11, 2023 at 6:25 Comment(0)
