Camera calibration with a single image? It seems to work, but am I missing something?
I have to do camera calibration. I understand the general concept and I have it working, but many guides say to use many images, or at the very least two with different orientations. Why exactly is this necessary? I seem to be getting reasonably good results with a single image of a 14x14 point grid:

Calibration image

Restored calibration image

I find the points with cv::findCirclesGrid and use cv::calibrateCamera to find the extrinsic and intrinsic parameters. The intrinsic guess is set to false. The principal point and aspect ratio are not fixed, while tangential distortion is fixed to zero.

I then use cv::getOptimalNewCameraMatrix, cv::initUndistortRectifyMap and cv::remap to restore the image.

It seems to me the result is pretty good, but am I missing something? Is it actually wrong and just waiting to cause problems for me later?

Also, before you ask why I don't just use multiple images to be sure: the software I am writing will be used with a semi-fixed camera stand to calibrate several cameras, one at a time. First, the stand would need to be modified to position the pattern at an angle or off-centre, as currently it can only be moved closer or further away. Second, the process should not be unnecessarily slowed down by having to capture more images.

Edit: To Micka, who asked "what happens if your viewing angle isn't 90° on the pattern? Can you try to rotate the pattern away from the camera?": I get a somewhat similar result, although it finds less distortion. From looking at the borders with a ruler, it seems that the calibration from 90° is better, but it is really hard to tell.

Goatfish answered 7/1, 2015 at 14:3 Comment(2)
what happens if your viewing angle isn't 90° on the pattern? Can you try to rotate the pattern away from the camera? – Wickham
@Goatfish Are you able to add your code? – Oldfangled
Having more patterns in different orientations is necessary to avoid the situation where the intrinsic parameters are very inaccurate but the pixel reprojection error of the undistortion is still low, because different errors compensate for each other.

To illustrate this point: if you only have one image taken at a 90° viewing angle, then a change in horizontal focal length can be poorly distinguished from viewing the pattern a little bit from the side. The only clue that sets the two parameters apart is the tapering of the lines, but that measurement is very noisy. Hence you need multiple views at significantly different angles to separate this aspect of the pose from the intrinsic parameters.
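A minimal numeric sketch of this kind of parameter compensation (all numbers below are made up for illustration): for a perfectly fronto-parallel view of a plane with no lens distortion, scaling the focal length and the camera-to-plane distance by the same factor leaves every projected pixel unchanged, so a single such image cannot separate the two parameters at all:

```python
# Demonstrates an exact degeneracy of a single fronto-parallel view:
# (f, Z) and (s*f, s*Z) produce identical projections of a planar target.
import numpy as np

def project_frontal(f, Z, pts_xy, u0=640.0, v0=480.0):
    """Pinhole projection of planar points at depth Z, frontal view, no distortion."""
    return f * pts_xy / Z + np.array([u0, v0])

# A 14x14 planar grid, 20 mm spacing, centred on the optical axis.
grid = (np.mgrid[0:14, 0:14].T.reshape(-1, 2) - 6.5) * 20.0

a = project_frontal(f=1000.0, Z=800.0, pts_xy=grid)
b = project_frontal(f=1200.0, Z=960.0, pts_xy=grid)  # both scaled by 1.2

print("max pixel difference:", np.abs(a - b).max())  # identical projections
```

With a tilted view the compensation is no longer exact (the tapering the answer mentions breaks it), which is precisely why views at different angles pin the parameters down.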

If you know your image is viewed at exactly 90°, you can use this to your advantage, but it requires modification of the OpenCV algorithm. If you are certain that all images will be captured from the same pose as your calibration image, then it does not really matter: the undistortion will be good even if the individual calibration parameters are inaccurate but compensating (i.e., they compensate well for this specific pose, but poorly for other poses).

Anuska answered 18/3, 2019 at 10:31 Comment(0)

As stated here, the circle pattern (in theory) gets along quite well with only a single image. The reason you would need multiple images is the noise present in the input data.

My suggestion would be to compare the results of different input images. If the error is low, you will probably be able to get away with a single sample.

Chromatograph answered 7/1, 2015 at 14:11 Comment(4)
can you cite the part where it says that the circle pattern works in theory with a single image, please? – Wickham
"To solve the equation you need at least a predetermined number of pattern snapshots to form a well-posed equation system. This number is higher for the chessboard pattern and less for the circle ones. For example, in theory the chessboard pattern requires at least two snapshots." Although it is not explicitly stated, I understand that the circle pattern needs fewer than two images. This may still be more than one, though. – Chromatograph
Thank you, I had previously read that page but missed your interpretation of it. I would love for that to be the case, and I see how the text can be interpreted that way. However, both patterns are used to find an array of co-planar points, and from there the algorithms are the same. While the square pattern may be less resistant to noise, I don't understand how that alone would explain the need for two images? – Goatfish
I'm sure there's some mathematical explanation derived from the formulas given on that page or somewhere else, but I can't really help you there. You could try to solve the equation system, or look at it from a practical point of view and analyse the error resulting from different input images. The latter would be my preferred solution. – Chromatograph

I checked the paper the OpenCV method is based on, Zhengyou Zhang's method ("A Flexible New Technique for Camera Calibration"). With n = 1, you can only get the focal length. In the paper, he says:

"If n ≥ 3, we will have in general a unique solution b defined up to a scale factor. If n = 2, we can impose the skewless constraint γ = 0, i.e., [0, 1, 0, 0, 0, 0]b = 0, which is added as an additional equation to (9). (If n = 1, we can only solve two camera intrinsic parameters, e.g., α and β, assuming u0 and v0 are known (e.g., at the image center) and γ = 0, and that is indeed what we did in [19] for head pose determination based on the fact that eyes and mouth are reasonably coplanar. In fact, Tsai [23] already mentions that focal length from one plane is possible, but incorrectly says that aspect ratio is not.)"
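The n = 1 case in that quote can be sketched numerically: with the principal point assumed known and zero skew, a single homography H ~ K[r1 r2 t] of the plane yields the focal length from the orthogonality of r1 and r2. The focal length, principal point, and pose below are synthetic illustration values:

```python
# Recover the focal length from one homography of a planar target, assuming
# a known principal point and zero skew (Zhang's n = 1 case).
import numpy as np

f_true, u0, v0 = 1000.0, 640.0, 480.0
K = np.array([[f_true, 0, u0], [0, f_true, v0], [0, 0, 1.0]])

# A generically tilted pose (a pure frontal view is the degenerate case).
ax, ay = 0.3, 0.2
Rx = np.array([[1, 0, 0],
               [0, np.cos(ax), -np.sin(ax)],
               [0, np.sin(ax),  np.cos(ax)]])
Ry = np.array([[ np.cos(ay), 0, np.sin(ay)],
               [0, 1, 0],
               [-np.sin(ay), 0, np.cos(ay)]])
R = Rx @ Ry
t = np.array([-100.0, -50.0, 800.0])

# Homography of the target plane z = 0:  H ~ K [r1 r2 t], known up to scale.
H = K @ np.column_stack([R[:, 0], R[:, 1], t])
H /= H[2, 2]  # arbitrary normalization; the constraint below is homogeneous

# Remove the known principal point, leaving H' ~ diag(f, f, 1) [r1 r2 t].
K0_inv = np.array([[1, 0, -u0], [0, 1, -v0], [0, 0, 1.0]])
Hp = K0_inv @ H
h1, h2 = Hp[:, 0], Hp[:, 1]

# Orthogonality of r1 and r2:  (h1x*h2x + h1y*h2y)/f^2 + h1z*h2z = 0
f_est = np.sqrt(-(h1[0] * h2[0] + h1[1] * h2[1]) / (h1[2] * h2[2]))
print("true f:", f_true, "estimated f:", f_est)
```

With noiseless data the recovery is exact; in practice the estimate degrades as the view approaches fronto-parallel, where the denominator above goes to zero.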

Rockhampton answered 12/2, 2022 at 9:11 Comment(0)

© 2022 - 2025 — McMap. All rights reserved.