Experiment on displaying a Bitmap retrieved from a camera on a Picturebox
In my code I retrieve frames from a camera through a pointer to an unmanaged object, run some calculations on them, and then display the result on a PictureBox control.
Before going further with the full application, I want to be sure that the base code for this process is sound. In particular I would like to:
- keep execution time minimal and avoid unnecessary operations, such as copying more images than necessary; I want to keep only the essential operations
- understand whether a delay in the per-frame calculations could have detrimental effects on how images are shown (i.e. something other than what I expect is displayed, or some frames are skipped)
- prevent more serious errors, such as those due to memory or thread management, or to image display.
For this purpose I set up a few experimental lines of code (below), but I'm not able to explain the results I found. If you have the OpenCV binaries you can try it yourself.

using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.Windows.Forms;
using System.Runtime.InteropServices;
using System.Threading;

public partial class FormX : Form
{
private delegate void setImageCallback();
Bitmap _bmp;
Bitmap _bmp_draw;
bool _exit;
double _x;
IntPtr _ImgBuffer;

bool buffercopy;
bool copyBitmap;
bool refresh;

public FormX()
{
    InitializeComponent();
    _x = 10.1;

    // set experimental parameters
    buffercopy = false;
    copyBitmap = false;
    refresh = true;
}

private void buttonStart_Click(object sender, EventArgs e)
{
    Thread camThread = new Thread(new ThreadStart(Cycle));
    camThread.Start();
}

private void buttonStop_Click(object sender, EventArgs e)
{
    _exit = true;
}

private void Cycle()
{
    _ImgBuffer = IntPtr.Zero;
    _exit = false;

    IntPtr vcap = cvCreateCameraCapture(0);
    while (!_exit)
    {
        IntPtr frame = cvQueryFrame(vcap);

        if (buffercopy)
        {
            UnmanageCopy(frame);
            _bmp = SharedBitmap(_ImgBuffer);
        }
        else
        { _bmp = SharedBitmap(frame); }

        // make calculations
        int N = 1000000; // simulated computational load
        for (int i = 0; i < N; i++)
            _x = Math.Sin(0.999999 * _x);

        ShowFrame();
    }

    cvReleaseImage(ref _ImgBuffer);
    cvReleaseCapture(ref vcap);
}


private void ShowFrame()
{
    if (pbCam.InvokeRequired)
    {
        this.Invoke(new setImageCallback(ShowFrame));
    }
    else
    {
        Pen RectangleDtPen = new Pen(Color.Azure, 3);

        if (copyBitmap)
        {
            if (_bmp_draw != null) _bmp_draw.Dispose();
            //_bmp_draw = new Bitmap(_bmp); // deep copy
            _bmp_draw = _bmp.Clone(new Rectangle(0, 0, _bmp.Width, _bmp.Height), _bmp.PixelFormat);
        }
        else
        {
            _bmp_draw = _bmp;  // add reference to the same object
        }

        Graphics g = Graphics.FromImage(_bmp_draw);
        String drawString = _x.ToString();
        Font drawFont = new Font("Arial", 56);
        SolidBrush drawBrush = new SolidBrush(Color.Red);
        PointF drawPoint = new PointF(10.0F, 10.0F);
        g.DrawString(drawString, drawFont, drawBrush, drawPoint);
        drawPoint = new PointF(10.0F, 300.0F);
        g.DrawString(drawString, drawFont, drawBrush, drawPoint);
        g.DrawRectangle(RectangleDtPen, 12, 12, 200, 400);
        g.Dispose();

        pbCam.Image = _bmp_draw;
        if (refresh) pbCam.Refresh();
    }
}

public void UnmanageCopy(IntPtr f)
{
    if (_ImgBuffer == IntPtr.Zero)
        _ImgBuffer = cvCloneImage(f);
    else
        cvCopy(f, _ImgBuffer, IntPtr.Zero);
}

// only works with 3 channel images from camera! (to keep code minimal)
public Bitmap SharedBitmap(IntPtr ipl)
{
    // gets unmanaged data from pointer to IplImage:
    IntPtr scan0;
    int step;
    Size size;
    OpenCvCall.cvGetRawData(ipl, out scan0, out step, out size);
    return new Bitmap(size.Width, size.Height, step, PixelFormat.Format24bppRgb, scan0);
}

// based on older version of OpenCv. Change dll name if different
[DllImport( "opencv_highgui246", CallingConvention = CallingConvention.Cdecl)]
public static extern IntPtr cvCreateCameraCapture(int index);

[DllImport("opencv_highgui246", CallingConvention = CallingConvention.Cdecl)]
public static extern void cvReleaseCapture(ref IntPtr capture);

[DllImport("opencv_highgui246", CallingConvention = CallingConvention.Cdecl)]
public static extern IntPtr cvQueryFrame(IntPtr capture);

[DllImport("opencv_core246", CallingConvention = CallingConvention.Cdecl)]
public static extern void cvGetRawData(IntPtr arr, out IntPtr data, out int step, out Size roiSize);

[DllImport("opencv_core246", CallingConvention = CallingConvention.Cdecl)]
public static extern void cvCopy(IntPtr src, IntPtr dst, IntPtr mask);

[DllImport("opencv_core246", CallingConvention = CallingConvention.Cdecl)]
public static extern IntPtr cvCloneImage(IntPtr src);

[DllImport("opencv_core246", CallingConvention = CallingConvention.Cdecl)]
public static extern void cvReleaseImage(ref IntPtr image);
}

Results [dual-core Intel Core 2 Duo T6600, 2.2 GHz]:

A. buffercopy = false; copyBitmap = false; refresh = false;
This is the simplest configuration. Each frame is retrieved in turn, operations are performed (in the real application they are based on the frame itself; here they are just dummy calculations), the result of the calculations is drawn on top of the image, and finally it is displayed on a PictureBox.
The OpenCV documentation says:

OpenCV 1.x functions cvRetrieveFrame and cv.RetrieveFrame return image stored inside the video capturing structure. It is not allowed to modify or release the image! You can copy the frame using cvCloneImage() and then do whatever you want with the copy.

But this doesn’t prevent us from doing experiments.
If the calculations are not intense (a low number of iterations N), everything is fine, and the fact that we manipulate the image buffer owned by the unmanaged frame retriever does not pose a problem here.
Probably the documentation advises leaving the buffer untouched in case someone modifies its structure (not its values) or operates on it asynchronously without realizing it. Here we retrieve frames and modify their content strictly in turn.
If N is increased (N = 1000000 or more) and the frame rate is not high (for example with artificial light and low exposure), everything seems fine at first, but after a while the video lags and the graphics drawn on it blink. With a higher frame rate the blinking appears from the beginning, even while the video is still fluid.
Is this because the mechanism that displays images on the control (or refreshes it, or whatever else) is somehow asynchronous, so that while the PictureBox is reading its data buffer the camera modifies it in the meantime, wiping out the graphics?
Or is there some other reason?
Why does the image lag in that way? I would expect the calculation delay either to simply skip the frames received from the camera while the calculations are still running, de facto only reducing the frame rate; or, alternatively, that all frames are received and the delay makes the system process images captured minutes earlier, because the queue of images to process grows over time.
Instead, the observed behavior seems to be a hybrid of the two: there is a delay of a few seconds, but it does not seem to increase much as the capture goes on.

B. buffercopy = true; copyBitmap = false; refresh = false;
Here I make a deep copy of the buffer into a second buffer, following the advice of the OpenCV documentation.
Nothing changes. The second buffer does not change its address in memory during the run.

C. buffercopy = false; copyBitmap = true; refresh = false;
Now a deep copy of the bitmap is made, allocating new memory every time.
The blinking effect is gone, but the lag still appears after a certain time.

D. buffercopy = false; copyBitmap = false; refresh = true;
Same behavior as the previous case.

Please help me explain these results!

Documentary answered 26/4, 2015 at 17:45 Comment(3)
It's really not clear what question you are asking here. Please remember that SO is a Q&A forum. IIUC, one thing you appear to be asking is along the lines of "the documentation says not to do this, but if I do it sometimes it works and sometimes it doesn't". Well, don't do it! – Karachi
No. It seems that you have read only half of the text. Anyway, the last line asks for an explanation of the results. – Documentary
I am afraid you are asking a question only you can answer, because you are the one who designed the experiment. I suggest you plan another experiment so you can understand its outcome. Also, restructuring your question would be nice, as it is hard to follow your logic or questions. – Pericranium
If I may be so frank, it is a bit tedious to understand all the details of your questions, but let me make a few points to help you analyse your results.

In case A, you say you perform calculations directly on the buffer. The documentation says you shouldn't do this, so if you do, you can expect undefined results. OpenCV assumes you won't touch the buffer, so it might do things like suddenly delete that part of memory or let some other component reuse it. It might look like it works, but you can never know for sure, so don't do it *slaps your wrist*. In particular, if your processing takes a long time, the camera might overwrite the buffer while you are still in the middle of processing it.

The way you should do it is to copy the buffer before doing anything. This will give you a piece of memory that is yours to do with whatever you wish. You can create a Bitmap that refers to this memory, and manually free the memory when you no longer need it.
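For illustration, a minimal sketch of that approach, reusing the P/Invoke declarations from the question (the ordering matters: the Bitmap wrapping the copied pixels must be disposed before the copy is released):

```csharp
// Sketch only: deep-copy the frame before doing anything with it.
// Uses the cvQueryFrame/cvCloneImage/cvReleaseImage declarations
// from the question; error handling omitted for brevity.
IntPtr frame = cvQueryFrame(vcap);   // buffer owned by OpenCV: treat as read-only
IntPtr copy  = cvCloneImage(frame);  // our own deep copy of header + pixel data
try
{
    using (Bitmap bmp = SharedBitmap(copy))  // Bitmap wraps the copied pixels
    {
        // ... process and draw on bmp here ...
    }
}
finally
{
    cvReleaseImage(ref copy);  // free the copy only after the Bitmap is disposed
}
```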

If your processing rate (frames processed per second) is less than the number of frames captured per second by the camera, you have to expect some frames will be dropped. If you want to show a live view of the processed images, it will lag and there's no simple way around it. If it is vital that your application processes a fluid video (e.g. this might be necessary if you're tracking an object), then consider storing the video to disk so you don't have to process in real-time. You can also consider multithreading to process several frames at once, but the live view would have a latency.

By the way, is there any particular reason why you're not using EmguCV? It has abstractions for the camera and a system that raises an event whenever the camera has captured a new frame. This way, you don't need to continuously call cvQueryFrame on a background thread.

Wandering answered 8/5, 2015 at 11:48 Comment(6)
I've always respected the rule "the documentation says it, so I will do it", but I think it is useful to understand why it states that. For sure, if you respect it you will not have problems, but if you don't understand the exact reason you could encounter problems elsewhere. For example, I have found many implementations (even from authoritative sites) that make their own copy of the frame into a buffer image. I suspect they have simply shifted part of the problem elsewhere: what if that buffer is being accessed while another part of the program is writing to it? – Documentary
Probably this doesn't usually happen in many implementations, for the same reason it doesn't pose a problem for the original buffer: the writing and reading operations are sequential. I'm not sure about it, but I was just looking for confirmation. – Documentary
Probably you won't have another application accessing it at the same time, but it is conceivable that if you do a lot of slow processing on one image, the camera will replace the image data before you're done. If every application creates a copy, then there is certainly no fighting over it. It is also conceivable that the camera will delete or replace the data in the buffer. I say "conceivable", implying that we don't know for sure, but the documentation is explicit enough… – Wandering
…so from this perspective, we can never know exactly why the documentation says this. But we understand that it CAN be broken by OpenCV somehow, so we should respect it. I totally agree it's useful to understand why, whenever possible, though. Also, it might be the case that operations are sequential, but it is still an issue if another application does something like, say, apply a Sobel filter to your image without your knowledge. – Wandering
Thank you for your response. The discussion is focusing on whether to leave the original buffer alone. In fact, I tried doing just that in order to better understand why exactly the same behavior is observed when I make a copy into my own buffer (case B), and why everything is fine when I either make a deep copy of the image or make a shallow copy as before but refresh the PictureBox. – Documentary
As for using EmguCV, I don't think a system that raises an event every time a frame is grabbed would be helpful. The lagging problem won't disappear, thread safety has to be implemented (with its overheads and risks), and the continuous calling of cvQueryFrame still happens under the hood of the wrapper. Keep in mind that the above code is just an interop translation of the usual direct C++ OpenCV implementations of the process. – Documentary
I think you still have a problem with your UnmanageCopy method, in that you only clone the image the first time it is called and subsequently copy into it. I believe you need to call cvCloneImage(f) every time, as cvCopy performs only a shallow copy, not a deep copy as you seem to think.

Macle answered 6/5, 2015 at 7:16 Comment(1)
I wrote "The second buffer doesn't change its address in memory during the run". So in case B, the deep copy is made into a buffer. – Documentary
