How to verify the correctness of a webcam calibration?

I am totally new to camera calibration techniques. I am using the OpenCV chessboard technique, with a webcam from Quantum.

Here are my observations and steps:

  1. I have kept each chess square side = 3.5 cm. It is a 7 x 5 chessboard with 6 x 4 internal corners. I am taking a total of 10 images in different views/poses at a distance of 1 to 1.5 m from the webcam.
  2. I am following the C code in Learning OpenCV by Bradski for the calibration. My code for calibration is:

    cvCalibrateCamera2(object_points, image_points, point_counts, cvSize(640, 480), intrinsic_matrix, distortion_coeffs, NULL, NULL, CV_CALIB_FIX_ASPECT_RATIO);
    
  3. Before calling this function, I set the first and second elements along the diagonal of the intrinsic matrix to one, to keep the ratio of the focal lengths constant, and I use CV_CALIB_FIX_ASPECT_RATIO (see the sketch after this list).

  4. As I change the distance of the chessboard, fx and fy change, with fx:fy staying almost equal to 1. The cx and cy values are in the order of 200 to 400, and fx and fy are in the order of 300 to 700.

  5. Presently I have set all the distortion coefficients to zero because I did not get good results when including them. My original image looked more handsome than the undistorted one!
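
For reference, here is a rough sketch of the equivalent call in OpenCV's modern Python API. This is an illustration only, not the code used above; it assumes object_points and image_points have already been collected from the chessboard images as per-view lists of 3D template corners and detected 2D corners:

    import cv2
    import numpy as np

    # Initial guess: fx = fy = 1 on the diagonal, as in step 3 above.
    camera_matrix = np.eye(3)
    dist_coeffs = np.zeros(5)

    # CALIB_FIX_ASPECT_RATIO keeps fx/fy at the ratio given in camera_matrix.
    rms, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
        object_points, image_points, (640, 480), camera_matrix, dist_coeffs,
        flags=cv2.CALIB_FIX_ASPECT_RATIO)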

Am I doing the calibration correctly? Should I use any option other than CV_CALIB_FIX_ASPECT_RATIO? If yes, which one?

Uniformize asked 9/10, 2012 at 7:25

Hmm, are you looking for "handsome" or "accurate"?

Camera calibration is one of the very few subjects in computer vision where accuracy can be directly quantified in physical terms, and verified by a physical experiment. The usual lesson is that (a) your numbers are only as good as the effort (and money) you put into them, and (b) real accuracy (as opposed to imagined) is expensive, so you should figure out in advance what your application really requires in the way of precision.

If you look up the geometrical specs of even very cheap lens/sensor combinations (in the megapixel range and above), it becomes readily apparent that sub-sub-mm calibration accuracy is theoretically achievable within a table-top volume of space. Just work out (from the spec sheet of your camera's sensor) the solid angle spanned by one pixel - you'll be dazzled by the spatial resolution you have within reach of your wallet. However, actually achieving something near that theoretical accuracy REPEATABLY takes work.
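
For instance, a back-of-the-envelope version of that calculation (a sketch with assumed numbers; substitute your own sensor's pixel pitch and focal length from the spec sheet):

    # Assumed values for a typical cheap webcam -- not from any real spec sheet.
    pixel_pitch_m = 3.0e-6      # 3 micrometre pixel pitch
    focal_length_m = 4.0e-3     # 4 mm lens

    # Angle subtended by one pixel (small-angle approximation).
    ifov_rad = pixel_pitch_m / focal_length_m       # ~0.75 mrad

    # Footprint of one pixel on a target 1 m away.
    distance_m = 1.0
    footprint_m = ifov_rad * distance_m             # ~0.75 mm

    # With corner localization good to ~0.1 px, the theoretical resolution is:
    print(f"{footprint_m * 0.1 * 1e3:.3f} mm")      # ~0.075 mm at 1 m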

Here are some recommendations (from personal experience) for getting a good calibration experience with home-grown equipment.

  1. If your method uses a flat target ("checkerboard" or similar), manufacture a good one. Choose a very flat backing (for the size you mention, window glass 5 mm thick or more is excellent, though obviously fragile). Verify its flatness against another edge (or, better, a laser beam). Print the pattern on thick-stock paper that won't stretch too easily. After printing, lay the sheet on the backing before gluing and verify that the square sides are indeed very nearly orthogonal. Cheap ink-jet or laser printers are not designed for rigorous geometrical accuracy; do not trust them blindly. Best practice is to use a professional print shop (even a Kinko's will do a much better job than most home printers). Then attach the pattern very carefully to the backing, using spray-on glue and slowly wiping with a soft cloth to avoid bubbles and stretching. Wait a day or longer for the glue to cure and the glue-paper stress to reach its long-term steady state. Finally, measure the corner positions with a good caliper and a magnifier. You may get away with a single number for the "average" square size, but it must be an average of actual measurements, not of hopes-n-prayers. Best practice is to actually use a table of measured positions.

  2. Watch your temperature and humidity changes: paper adsorbs water from the air, and the backing dilates and contracts. It is amazing how many articles you can find that report sub-millimeter calibration accuracies without quoting the environmental conditions (or the target's response to them). Needless to say, they are mostly crap. The lower thermal expansion coefficient of glass compared to common sheet metal is another reason for preferring the former as a backing.

  3. Needless to say, you must disable the auto-focus feature of your camera, if it has one: focusing physically moves one or more pieces of glass inside your lens, thus changing (slightly) the field of view and (usually by a lot) the lens distortion and the principal point.

  4. Place the camera on a stable mount that won't vibrate easily. Focus (and f-stop the lens, if it has an iris) as is needed for the application (not the calibration - the calibration procedure and target must be designed for the app's needs, not the other way around). Do not even think of touching camera or lens afterwards. If at all possible, avoid "complex" lenses - e.g. zoom lenses or very wide angle ones. For example, anamorphic lenses require models much more complex than stock OpenCV makes available.

  5. Take lots of measurements and pictures. You want hundreds of measurements (corners) per image, and tens of images. Where data is concerned, the more the merrier. A 10x10 checkerboard is the absolute minimum I would consider. I normally worked at 20x20.

  6. Span the calibration volume when taking pictures. Ideally you want your measurements to be uniformly distributed in the volume of space you will be working with. Most importantly, make sure to angle the target significantly with respect to the focal axis in some of the pictures - to calibrate the focal length you need to "see" some real perspective foreshortening. For best results use a repeatable mechanical jig to move the target. A good one is a one-axis turntable, which will give you an excellent prior model for the motion of the target.

  7. Minimize vibrations and associated motion blur when taking photos.

  8. Use good lighting. Really. It's amazing how often I see people realize late in the game that you need a generous supply of photons to calibrate a camera :-) Use diffuse ambient lighting, and bounce it off white cards on both sides of the field of view.

  9. Watch what your corner extraction code is doing. Draw the detected corner positions on top of the images (in Matlab or Octave, for example), and judge their quality (see the sketch after this list). Removing outliers early using tight thresholds is better than trusting the robustifier in your bundle adjustment code.

  10. Constrain your model if you can. For example, don't try to estimate the principal point if you don't have a good reason to believe that your lens is significantly off-center w.r.t. the image; just fix it at the image center on your first attempt. The principal point location is usually poorly observed, because it is inherently confounded with the center of the nonlinear distortion and with the component of the target-to-camera translation parallel to the image plane. Getting it right requires a carefully designed procedure that yields three or more independent vanishing points of the scene and a very good bracketing of the nonlinear distortion. Similarly, unless you have reason to suspect that the lens focal axis is really tilted w.r.t. the sensor plane, fix the (1,2) component of the camera matrix (the skew term) at zero. Generally speaking, use the simplest model that satisfies your measurements and your application needs (that's Ockham's razor for you).

  11. When you have a calibration solution from your optimizer with a low enough RMS error (a few tenths of a pixel, typically; see also Josh's answer below), plot the XY pattern of the residual errors (predicted_xy - measured_xy for each corner in all images) and see if it is a round-ish cloud centered at (0, 0); a plotting sketch appears after this list. "Clumps" of outliers or non-roundness of the cloud of residuals are screaming alarm bells that something is very wrong - likely outliers due to bad corner detection or matching, or an inappropriate lens distortion model.

  12. Take extra images to verify the accuracy of the solution - use them to verify that the lens distortion is actually removed, and that the planar homography predicted by the calibrated model actually matches the one recovered from the measured corners.
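
A minimal sketch covering points 9 and 11, here in Python/OpenCV rather than the Matlab/Octave mentioned in point 9 (the file pattern, pattern size, and square size are placeholders for your own setup):

    import glob
    import cv2
    import numpy as np
    import matplotlib.pyplot as plt

    pattern_size = (9, 6)       # inner corners of the target (placeholder)
    square_size = 0.035         # metres per square (placeholder)

    # 3D template of the target corners, z = 0 on the plane.
    objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)
    objp *= square_size

    obj_points, img_points = [], []
    for fname in sorted(glob.glob("calib_*.png")):  # placeholder file pattern
        img = cv2.imread(fname)
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, pattern_size)
        if not found:
            continue
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        # Point 9: overlay the detected corners so you can judge their quality.
        cv2.drawChessboardCorners(img, pattern_size, corners, found)
        cv2.imwrite(fname.replace(".png", "_corners.png"), img)
        obj_points.append(objp)
        img_points.append(corners)

    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, gray.shape[::-1], None, None)

    # Point 11: scatter-plot the residuals (predicted_xy - measured_xy).
    residuals = []
    for o, m, rvec, tvec in zip(obj_points, img_points, rvecs, tvecs):
        proj, _ = cv2.projectPoints(o, rvec, tvec, K, dist)
        residuals.append((proj - m).reshape(-1, 2))
    residuals = np.vstack(residuals)

    plt.scatter(residuals[:, 0], residuals[:, 1], s=2)
    plt.axhline(0)
    plt.axvline(0)
    plt.gca().set_aspect("equal")
    plt.xlabel("x residual (px)")
    plt.ylabel("y residual (px)")
    plt.title(f"RMS = {rms:.3f} px; expect a round cloud centred at (0, 0)")
    plt.show()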

Midway answered 10/10, 2012 at 13:55. Comments:
Commissar: I've re-asked the question here: #18052837. You're most welcome to contribute.

Commissar: While your answer is good, it doesn't fully answer the original question (well, it sort of does, but it seems you have enough knowledge to be a lot more specific on this :). Given a camera and its calibration, HOW does one know that the calibration is correct?

Danforth: Well, the most basic test, which is usually just enough, is to visually compare the distorted and undistorted images. If the calibration was incorrect, applying cv::undistort() will produce an image with obvious and pretty bad distortions. Check aishack.in/2010/07/… and pay attention to the section on bad calibration with an example.

Midway: LOL - yes, that is the most basic test, and no, it is normally NOT enough. For example, with your smartphone cam looking at a tabletop scene, a half-pixel misalignment could easily map to several inches' worth of error on the table. If you can visually estimate half a pixel, your glasses are better than mine :-)

Rew: @FrancescoCallari What square size do you usually use and recommend? What backing do you recommend for big targets (90 x 80 cm)? Aluminum Dibond?

Midway: Target size depends entirely on camera FOV and the depth of the expected work area. Material: you have to work it out from the camera spec and your application needs. Example: 1/2 pixel of error in a given area around the image centre corresponds to a certain deviation in depth in mm; then you can work out whether a certain material will bend by that much (under its own weight, the stress of its mount if any, temperature change, etc.). Generally speaking, go for the most rigid setup you can cope with, and be creative.

Pelham: @FrancescoCallari Thanks for this answer, it is really helpful! Still, I have a question regarding the target. As you said, it completely depends on the camera spec and the expected work area. I am not really sure how to choose the proper target size (number of inner corners, square size). In my particular case I want to stereo calibrate two Canon EOS 7D cameras with EF-S 17-55 lenses. The cameras should capture as much as possible within a room (approx. 9 x 4 m). Could you give me a suggestion (maybe also on the camera settings)? I am unfortunately a beginner in photography and camera calibration.

Midway: The number of squares must balance (at least) two requirements: (1) you want lots of measurements in each image; (2) you must be able to correctly segment the squares regardless of target orientation and distance within the expected work area. Start with a reasonable number for the first, say, 20x20, then work out the size from FOV and distance. Pay attention to the expected depth of field, which may constrain the work area as well.

Fiume: For what you mentioned in point 9, is there a way I can set up automatic judging of the detected corners, so that I do not need to manually inspect each individual image? I have tried re-calibrating after removing images with bad RMS error from the first calibration, but it is really slow.

Midway: What you are asking is really "How can I automatically detect outliers?". There is a plethora of methods and a vast literature on the subject - cross-validation may be the most common. If your lens has moderate distortion, you could try to see how well a corner is predicted from the linear interpolation of its neighbors, or from the intersection of lines fitted to the edges of the squares it belongs to.

Aurochs: The linked page from @rbaleksandar's comment has moved here.

Fabrienne: Might be a really stupid question, but is using a display such as an iPad, iPhone, or even e-ink (though e-ink has bad PPI) an alternative to printing the checkerboard? The difference in PPI/DPI is not that great with modern OLED or mini-LED, and you won't have to worry about print quality or how flat the checkerboard is.

This is a rather late answer, but for people coming to this from Google:

The correct way to check calibration accuracy is to use the reprojection error provided by OpenCV. I'm not sure why this wasn't mentioned anywhere in the answer or comments; you don't need to calculate it by hand - it's the return value of calibrateCamera. In Python it's the first return value (followed by the camera matrix, etc.).

The reprojection error is the RMS error between where the points would be projected using the intrinsic coefficients and where they are in the real image. Typically you should expect an RMS error of less than 0.5 px - I can routinely get around 0.1 px with machine vision cameras. The reprojection error is used in many computer vision papers; there isn't a significantly easier or more accurate way to determine how good your calibration is.
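
In code, that check is just the first return value (a sketch, assuming object_points and image_points were gathered with cv2.findChessboardCorners in the usual way, and image_size is (width, height)):

    import cv2

    rms, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
        object_points, image_points, image_size, None, None)

    print(f"RMS reprojection error: {rms:.3f} px")   # aim for < 0.5 px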

Unless you have a stereo system, you can only work out where something is in 3D space up to a ray, rather than a point. However, as one can work out the pose of each planar calibration image, it's possible to work out where each chessboard corner should fall on the image sensor. The calibration process (more or less) attempts to work out where these rays fall and minimises the error over all the different calibration images. In Zhang's original paper, and subsequent evaluations, around 10-15 images seems to be sufficient; at this point the error doesn't decrease significantly with the addition of more images.
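
The per-image version of that check can be done with cv2.projectPoints: use the recovered pose of each board to project the template corners back onto the sensor and compare against the detections (a sketch, reusing the variables from the calibrateCamera call above):

    import cv2
    import numpy as np

    for i, (objp, imgp) in enumerate(zip(object_points, image_points)):
        proj, _ = cv2.projectPoints(objp, rvecs[i], tvecs[i],
                                    camera_matrix, dist_coeffs)
        # Per-view RMS over all corners in this image.
        err = np.sqrt(np.mean(np.sum((proj - imgp).reshape(-1, 2) ** 2, axis=1)))
        print(f"image {i}: per-view RMS = {err:.3f} px")

Images whose per-view error stands well above the rest are good candidates for inspection or removal.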

Other software packages like Matlab will give you error estimates for each individual intrinsic, e.g. focal length and centre of projection. I've been unable to make OpenCV spit out that information, but maybe it's in there somewhere. Camera calibration is now native in Matlab 2014a, but you can still get hold of Bouguet's camera calibration toolbox, which is extremely popular with computer vision users.

http://www.vision.caltech.edu/bouguetj/calib_doc/

Visual inspection is necessary, but not sufficient when dealing with your results. The simplest thing to look for is that straight lines in the world become straight in your undistorted images. Beyond that, it's impossible to really be sure if your cameras are calibrated well just by looking at the output images.
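
The straight-line check itself takes one call to cv2.undistort (a sketch; "test.png" is a placeholder for an image containing known straight edges, and camera_matrix/dist_coeffs come from the calibration above):

    import cv2

    img = cv2.imread("test.png")
    undistorted = cv2.undistort(img, camera_matrix, dist_coeffs)
    # Straight world edges should come out straight in the right half.
    cv2.imshow("original | undistorted", cv2.hconcat([img, undistorted]))
    cv2.waitKey(0)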

The routine provided by Francesco is good; follow it. I use a shelf board as my plane, with the pattern printed on poster paper. Make sure the images are well exposed, and avoid specular reflections! I use a standard 8x6 pattern; I've tried denser patterns, but I haven't seen enough improvement in accuracy to make a difference.

I think this answer should be sufficient for most people wanting to calibrate a camera; realistically, unless you're trying to calibrate something exotic like a fisheye lens, or you're doing it for educational reasons, OpenCV/Matlab is all you need. Zhang's method is considered good enough that virtually everyone in computer vision research uses it, and most of them use either Bouguet's toolbox or OpenCV.

Weatherman answered 9/6, 2014 at 11:33. Comments:
Midway: A low RMS error is a necessary, but not sufficient, condition for good calibration accuracy, as it can hide bias. That is why I also recommend looking at the XY plot of the residual errors. Thanks for the endorsement!
