A Four-step Camera Calibration Procedure with Implicit Image Correction
Janne Heikkilä and Olli Silvén

University of Oulu

Abstract

In geometrical camera calibration the objective is to determine a set of camera parameters that describe the mapping between 3-D reference coordinates and 2-D image coordinates. Various methods for camera calibration can be found in the literature. However, surprisingly little attention has been paid to the whole calibration procedure, i.e., control point extraction from images, model fitting, image correction, and errors originating in these stages. The main interest has been in model fitting, although the other stages are also important. In this paper we present a four-step calibration procedure that is an extension to the two-step method. There is an additional step to compensate for distortion caused by circular features, and a step for correcting the distorted image coordinates. The image correction is performed with an empirical inverse model that accurately compensates for radial and tangential distortions. Finally, a linear method for solving the parameters of the inverse model is presented.

1. Introduction

Camera calibration in the context of three-dimensional machine vision is the process of determining the internal camera geometric and optical characteristics (intrinsic parameters) and the position and orientation of the camera frame relative to a certain world coordinate system (extrinsic parameters) [8]. In many cases, the overall performance of the machine vision system strongly depends on the accuracy of the camera calibration.

Several methods for geometric camera calibration are presented in the literature. The classical approach, which originates from the field of photogrammetry, solves the problem with nonlinear minimization. Due to the slowness and computational burden of this technique, closed-form solutions have also been suggested (e.g. [8], [1], [5]). However, these methods are based on certain simplifications in the camera model, and therefore they do not provide as good results as nonlinear minimization. Two-step methods have also been proposed (e.g. [5], [10]); in these, the initial parameter values are computed linearly and the final values are obtained with nonlinear minimization. Methods where the camera model is based on physical parameters, like focal length and principal point, are called explicit methods.
In most cases, the values for these parameters are in themselves useless, because only the relationship between the 3-D reference coordinates and the 2-D image coordinates is needed. Implicit calibration methods express this relationship directly, for example by interpolating between known tie-points (e.g. [9]).

In this paper, we present a four-step calibration procedure that is an extension to the two-step procedure. Section 2.1 reviews linear parameter estimation using a direct linear transformation (DLT). Section 2.2 describes the nonlinear estimation that is needed when the distortions are larger than one pixel in size. In Section 2.3 we only consider circular features, but a similar analysis can be performed for other feature types. There are also other error sources in feature extraction, like changes in the illumination, but they are discussed in [4]. The fourth step of the procedure is presented in Section 3; it solves the back-projection problem by using a new implicit model that interpolates the correct image points from the physical camera parameters derived in the previous steps. A complete Matlab toolbox for performing this calibration procedure will be available.

2. Explicit camera calibration

Physical camera parameters are commonly divided into extrinsic and intrinsic parameters. Extrinsic parameters describe the position and orientation of the camera with respect to the object coordinate frame. The underlying pinhole model is based on the principle of collinearity, where each point in the object space is projected by a straight line through the projection center onto the image plane. The orientation is expressed with successive rotations about the x-, y-, and z-axis respectively, the last of which is performed about an axis that is twice rotated during the previous stages.

In order to express an arbitrary object point (X_i, Y_i, Z_i) in camera coordinates, we can use the following matrix equation:

    (x_i, y_i, z_i)^T = \mathbf{M} (X_i, Y_i, Z_i)^T + \mathbf{t}  \qquad (1)

where \mathbf{M} is the rotation matrix and \mathbf{t} the translation vector. The intrinsic parameters include the effective focal length f, the scale factor s_u, and the image center (u_0, v_0). The origin of the image coordinate system is in the upper left corner of the image array. The unit of the image coordinates is pixels, and therefore the coefficients D_u and D_v are needed for converting metric units to pixels. These coefficients can typically be obtained from the data sheets of the camera and the framegrabber. In fact, their precise values are not necessary, because they are linearly dependent on f and s_u.

The projection of the point (x_i, y_i, z_i) to the image plane is expressed as

    \tilde{u}_i = f x_i / z_i, \quad \tilde{v}_i = f y_i / z_i  \qquad (2)

The corresponding image coordinates (u'_i, v'_i) in pixels are obtained from the projection (\tilde{u}_i, \tilde{v}_i) by applying the following transformation:

    u'_i = D_u s_u \tilde{u}_i + u_0, \quad v'_i = D_v \tilde{v}_i + v_0  \qquad (3)

The pinhole model is only an approximation of the real camera projection, and for accurate calibration it must be augmented with a distortion model. The radial distortion is usually expressed using the following expression:

    \delta u_i^{(r)} = \tilde{u}_i (k_1 r_i^2 + k_2 r_i^4 + \cdots), \quad \delta v_i^{(r)} = \tilde{v}_i (k_1 r_i^2 + k_2 r_i^4 + \cdots)  \qquad (4)

where k_1, k_2, ... are coefficients for radial distortion, and r_i = (\tilde{u}_i^2 + \tilde{v}_i^2)^{1/2}. Typically, one or two coefficients are enough to compensate for the distortion.

Centers of curvature of lens surfaces are not always strictly collinear.
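Before turning to the tangential component, the forward model so far (pinhole projection, radial distortion, and conversion to pixels) can be sketched in Python. This is a minimal sketch: the function name and parameter values are illustrative, and the decentering (tangential) terms, discussed next in the text, are included in their standard form [7].

```python
import numpy as np

def project(points_3d, f, k=(0.0, 0.0), p=(0.0, 0.0),
            Du=1.0, Dv=1.0, su=1.0, u0=0.0, v0=0.0):
    """Map 3-D camera-frame points to distorted pixel coordinates."""
    X, Y, Z = points_3d.T
    # Pinhole projection onto the image plane (Eq. 2).
    u_t = f * X / Z
    v_t = f * Y / Z
    r2 = u_t**2 + v_t**2
    k1, k2 = k
    p1, p2 = p
    # Radial distortion (Eq. 4).
    du_r = u_t * (k1 * r2 + k2 * r2**2)
    dv_r = v_t * (k1 * r2 + k2 * r2**2)
    # Tangential (decentering) distortion, standard two-coefficient form.
    du_t = 2 * p1 * u_t * v_t + p2 * (r2 + 2 * u_t**2)
    dv_t = p1 * (r2 + 2 * v_t**2) + 2 * p2 * u_t * v_t
    # Conversion to pixel coordinates (Eq. 3, applied to the distorted projection).
    u = Du * su * (u_t + du_r + du_t) + u0
    v = Dv * (v_t + dv_r + dv_t) + v0
    return np.stack([u, v], axis=1)
```

With all distortion coefficients at zero this reduces to the plain pinhole model, which makes the effect of each coefficient easy to probe in isolation.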
This introduces another common distortion component, decentering distortion, which has both a radial and a tangential component [7]. The expression for the tangential distortion is often written in the following form:

    \delta u_i^{(t)} = 2 p_1 \tilde{u}_i \tilde{v}_i + p_2 (r_i^2 + 2 \tilde{u}_i^2), \quad \delta v_i^{(t)} = p_1 (r_i^2 + 2 \tilde{v}_i^2) + 2 p_2 \tilde{u}_i \tilde{v}_i  \qquad (5)

Other distortion types have also been proposed in the literature. For example, Melen [5] uses a correction term for linear distortion. This term is relevant if the image axes are not orthogonal; such errors can originate in sensor design and manufacturing, as well as in camera assembly.

A proper camera model for accurate calibration can be derived by combining the pinhole model with the corrections for radial and tangential distortion:

    u_i = D_u s_u (\tilde{u}_i + \delta u_i^{(r)} + \delta u_i^{(t)}) + u_0, \quad v_i = D_v (\tilde{v}_i + \delta v_i^{(r)} + \delta v_i^{(t)}) + v_0  \qquad (6)

In this model, the set of intrinsic parameters (f, s_u, u_0, v_0) is augmented with the distortion coefficients k_1, k_2, p_1, and p_2. These parameters are also known as physical camera parameters, since they have a certain physical meaning. Generally, the objective of the explicit camera calibration procedure is to determine optimal values for these parameters based on image observations of a known 3-D target. In the case of self-calibration the 3-D coordinates of the target points are also included in the set of unknown parameters. However, the calibration procedure presented in this article is performed with a known target.

2.1. Linear parameter estimation

The direct linear transformation (DLT) was originally developed by Abdel-Aziz and Karara [1]. Later, it was revised in several publications, e.g. in [5] and [3]. The DLT method is based on the pinhole camera model, and it consists of two steps. In the first step, the linear transformation from the object coordinates to the image coordinates is solved. Using a homogeneous 3 x 4 matrix representation for the transformation \mathbf{A}, the following equation can be written:

    (u_i w_i, v_i w_i, w_i)^T = \mathbf{A} (X_i, Y_i, Z_i, 1)^T  \qquad (7)

We can solve the parameters of the DLT matrix by stacking the corresponding matrix equation for all N control points (8). By replacing the correct image points (u_i, v_i) with the observed values, the parameters can be estimated in a least squares fashion. In order to avoid the trivial solution \mathbf{A} = 0, a proper normalization must be applied; a common choice is the constraint a_{34} = 1. Then, the equation can be solved with a pseudoinverse technique.
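The first DLT step, with the a_{34} = 1 normalization solved via least squares, can be sketched as follows. The helper name `dlt` and the synthetic setup are assumptions for illustration, not the paper's code.

```python
import numpy as np

def dlt(object_points, image_points):
    """Estimate the 3x4 DLT matrix A (with a34 fixed to 1) from N >= 6
    non-coplanar point correspondences, in a least squares fashion."""
    rows, rhs = [], []
    for (X, Y, Z), (u, v) in zip(object_points, image_points):
        # From u = (a11 X + a12 Y + a13 Z + a14) / (a31 X + a32 Y + a33 Z + 1)
        # and the analogous expression for v, linear in the 11 unknowns.
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z])
        rhs.append(u)
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z])
        rhs.append(v)
    # Pseudoinverse (least squares) solution of the stacked system.
    a, *_ = np.linalg.lstsq(np.asarray(rows, float),
                            np.asarray(rhs, float), rcond=None)
    return np.append(a, 1.0).reshape(3, 4)
```

Note that this sketch inherits the singularity discussed in the text: it fails when the true a_{34} is close to zero, which motivates the alternative normalization mentioned below.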
The problem with this normalization is that a singularity is introduced if the correct value of a_{34} is close to zero. Faugeras and Toscani [3] suggested the constraint a_{31}^2 + a_{32}^2 + a_{33}^2 = 1, which is singularity free.

The parameters of the DLT matrix do not have any physical meaning, and thus the first step, where their values are estimated, is often called the implicit calibration stage. There are techniques for extracting some of the physical camera parameters from the DLT matrix, but not many are able to solve all of them. Melen [5] proposed a method based on RQ decomposition where a set of eleven physical camera parameters is extracted from the DLT matrix. The decomposition is as follows:

    \mathbf{A} = \lambda \mathbf{V}^{-1} \mathbf{B}^{-1} \mathbf{F} \mathbf{M}  \qquad (9)

where \lambda is an overall scaling factor, the matrix \mathbf{M} defines the rotation and translation from the object coordinate frame to the camera frame, and the matrices \mathbf{V}, \mathbf{B}, and \mathbf{F} contain the focal length, the principal point, and the coefficients for the linear distortion of the image axes. A five-step algorithm for solving the parameters is given in [5] and is not repeated here. In this procedure, the scale factor s_u is assumed to be 1. In the case of a coplanar control point structure, the 3 x 4 DLT matrix becomes singular; thus, a 3 x 3 matrix with nine unknown parameters is used instead. Some physical camera parameters can be recovered by decomposing the 3 x 3 matrix, but only a subset of them.

Linear methods are computationally fast. However, they have at least the following two disadvantages. First, lens distortion cannot be incorporated, and therefore distortion effects are not generally corrected. Some extensions to this problem have been presented; for example, Shih et al. [6] proposed a method where the estimation of the radial lens distortion coefficient is transformed into an eigenvalue problem. The second disadvantage of linear methods is more difficult to fix. Due to the objective of constructing a noniterative algorithm, the actual constraints on the intermediate parameters are not considered. Consequently, in the presence of noise these constraints are not satisfied, and the accuracy of the final solution is relatively poor [10]. Due to these difficulties, the calibration results obtained with linear methods should be refined with a nonlinear estimation step.

2.2. Nonlinear estimation

With real cameras the image observations are always contaminated by noise. As we know, there are various error components incorporated in the measurement process, but if the systematic parts are compensated for, it is convenient to assume that the remaining error is zero-mean and normally distributed over the observations.
In the case of Gaussian noise, the objective function is expressed as a sum of squared residuals:

    \sum_{i=1}^{N} (U_i - u_i)^2 + \sum_{i=1}^{N} (V_i - v_i)^2

where (U_i, V_i) are the observed and (u_i, v_i) the modelled image coordinates. Minimizing the objective function involves applying an iterative algorithm. For this problem the Levenberg-Marquardt optimization method has been shown to provide the fastest convergence. However, without proper initial parameter values the optimization may get stuck in a local minimum and cause the calibration to fail. This problem can be avoided by using the parameters from the DLT method as the initial values for the optimization; a stable solution is then typically achieved after a few iterations.

Two coefficients for both radial and tangential distortion are normally enough [4]. Our experiments have also shown the effect of higher order terms to be negligible. Thus, the parameters f, s_u, u_0, v_0, k_1, k_2, p_1, and p_2 are estimated. The number of extrinsic parameters depends on the number of camera views. Using a 3-D target structure, only a single viewpoint is required. In the case of a coplanar target, there is a smaller number of intrinsic parameters that can be estimated from a single view; therefore, multiple views are required in order to solve all the intrinsic parameters. The number of extrinsic parameters is then increased by six for each perspective view.

2.3. Correction for the asymmetric projection

Perspective projection is generally not a shape preserving transformation from the scene to the image plane. Two- and three-dimensional objects with a non-zero projection area are distorted if they are not coplanar with the image plane. This applies to various feature types, but in this article we are only concerned with circular features. The reason is that they are very common shapes in many man-made objects and calibration targets. Circular dots can be located in the images with subpixel precision, but the distortion caused by the perspective projection is not typically considered. Perspective projection distorts the shape of the circular features depending on the angle and displacement between the object surface and the image plane. Only when the surface and the image plane are parallel do the projections remain circular. These facts are well known, but the mathematical formulation of the problem has often been disregarded.
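The displacement between the projected circle center and the center of the projected ellipse can be checked numerically. The sketch below projects the boundary of a tilted circle through a pinhole and compares the ellipse center (midpoint of the vertical extremes) with the projection of the true circle center; the geometry (focal length, tilt, radius, distance) is made up for the demonstration.

```python
import numpy as np

# Made-up geometry: focal length, tilt of the circle's plane, radius, distance.
f, tilt, R, dist = 8.0, np.radians(40.0), 5.0, 100.0

t = np.linspace(0.0, 2 * np.pi, 100000, endpoint=False)
# Circle in its own plane, rotated about the x-axis by `tilt`, centered at z = dist.
x = R * np.cos(t)
y = R * np.sin(t) * np.cos(tilt)
z = dist + R * np.sin(t) * np.sin(tilt)

# Pinhole projection of the boundary points.
u, v = f * x / z, f * y / z

center_proj_v = 0.0                          # projection of the true circle center
ellipse_center_v = 0.5 * (v.max() + v.min()) # v-coordinate of the ellipse center
bias = ellipse_center_v - center_proj_v      # nonzero whenever tilt != 0
```

With this geometry the bias is on the order of a hundredth of the focal-length unit, small but systematic, which is exactly why the correction step matters for subpixel calibration.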
Therefore, we shall next review this geometry. The projection rays from a circle located on the object surface form a skewed cone with its apex at the camera focus, and the boundary curve of the cone can be expressed in quadratic form. Its parameters specify the skewness of the cone in the x and y directions, and a further parameter specifies the sharpness of the apex. The distance from the camera focus to the object surface is denoted separately. An auxiliary coordinate frame is centered in the camera focus, but its z-axis is orthogonal to the object surface, and its x- and y-axes are parallel to the image axes; the frame is expressed by using a rotation whose vectors form an orthonormal basis. The intersection of the cone with the image plane is an ellipse.

Figure 1. Perspective projection of a circle.

From Eq. (14) the center of the ellipse can be expressed in closed form. To relate this to the projection of the circle center, let us consider a situation where the radius of the circle is zero. Consequently, the expression reduces to the projection of the circle center. For a non-zero radius, the two expressions differ in general.

Ellipse fitting or the center of gravity method produces estimates of the ellipse center. However, what we usually want to know is the projection of the circle center. As a consequence of the previous discussion, we notice that these two points are generally not the same, the difference being given by Eqs. (15) and (16). In camera calibration this is especially significant when the circles are viewed at skew angles.

There are at least two possibilities to correct this projection error. The first solution is to include the correction in the camera model; an optimal estimate in a least squares sense is then obtained. However, this solution degrades the convergence rate considerably. The second possibility is to compute the camera parameters recursively, so that the parameters from the previous estimation step are used to evaluate Eqs. (15) and (16), and the observed coordinates are corrected accordingly. The parameters obtained are then not optimal in a least squares sense, but in practice the difference is negligible.

Fig. 2 a) shows a view of the calibration object. Since the two visible surfaces of the object are perpendicular, there is no way to select the viewing angle so that the projection asymmetry vanishes. Fig. 2 b) shows the error in horizontal and vertical directions. The error in this case is quite small (about 0.14 pixels peak to peak), but it is systematic.

3. Image correction

The camera model given in Eq. (6) expresses the projection of the 3-D points on the image plane.
However, it does not give a direct solution to the back-projection problem, in which we want to recover the line of sight from the image coordinates. The distortion model does not have an analytic inverse; for example, even two coefficients of radial distortion lead to an expression whose inverse has no closed form. We can infer from Eq. (18) that a nonlinear search is required to recover (\tilde{u}_i, \tilde{v}_i) from (u'_i, v'_i). Another alternative is to approximate the inverse mapping. Only a few solutions have been presented in the literature, although the problem is evident in many applications. Melen [5] used an iterative approach to estimate the undistorted image coordinates. He proposed a two-iteration process in which the distortion, evaluated at the current estimate, is subtracted from the distorted coordinates; the vectors involved contain the distorted and the corrected image coordinates.

Figure 2. a) A view of the calibration object. b) Error caused by the asymmetrical dot projection.

A few implicit methods, e.g. the two-plane method proposed by Wei and Ma [9], solve the back-projection problem with a separate set of parameters that compensate for the distortion. Due to the large number of unknown parameters, this technique requires a dense grid of observations from the whole image plane in order to become accurate. However, if we know the physical camera parameters based on explicit calibration, it is possible to solve the unknown parameters by generating a dense grid of points and calculating the corresponding distorted image coordinates by using the camera model. Following Wei and Ma [9], the mapping from the distorted to the corrected coordinates can be expressed as a polynomial in the distorted coordinates. Wei and Ma used third order polynomials in their experiments. In our tests, we noticed that this only provides about 0.1 pixel accuracy with typical camera parameters. This is quite clear, since the camera model contains higher order terms, and each set of unknown polynomial parameters includes 21 terms.
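An iterative back-projection in the spirit of the two-iteration process mentioned above can be sketched as a fixed-point iteration: re-evaluate the distortion at the current estimate and subtract it from the observed (distorted) coordinates. The function name, iteration count, and coefficient values below are illustrative.

```python
def undistort(ud, vd, k1, k2, p1, p2, n_iter=5):
    """Recover undistorted normalized coordinates from distorted ones
    by fixed-point iteration on the radial/tangential model."""
    u, v = ud, vd                      # initial guess: the distorted coordinates
    for _ in range(n_iter):
        r2 = u * u + v * v
        # Distortion evaluated at the current estimate (Eqs. 4-5).
        du = u * (k1 * r2 + k2 * r2 * r2) + 2 * p1 * u * v + p2 * (r2 + 2 * u * u)
        dv = v * (k1 * r2 + k2 * r2 * r2) + p1 * (r2 + 2 * v * v) + 2 * p2 * u * v
        u, v = ud - du, vd - dv        # fixed-point update
    return u, v
```

Because the distortion is small relative to the coordinates, the iteration contracts quickly; for typical coefficients a handful of iterations reaches well below the feature-detection noise floor.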
It can be expected that there are also redundant terms. After experimentation, it was found that the following expression compensates for the distortion so that the residual was less than 0.01 pixel units, even with substantial distortion. The model (21)-(22) contains only eight unknown parameters a_1, ..., a_8, and solving it requires less computation than the iterative approach suggested by Melen, while also giving more accurate results. The parameters can be solved either iteratively using the least squares technique, when the smallest fitting residual is obtained, or directly, when the result is very close to the optimal.

In order to solve the unknown parameters for the inverse model, we generate a dense grid of tie-points covering the whole image area and perform the estimation based on the generated coordinates.

4. Experiments

Explicit camera calibration experiments are reported in [4]. Here, we concentrate on the image correction. Let us assume that the first three steps have produced the physical camera parameters listed in Table 1.

Table 1. Physical camera parameters.

    s_u          f [mm]     u_0 [pixels]   v_0 [pixels]
    1.0039       8.3431     367.6093       305.8503

    k_1 [mm^-2]  k_2 [mm^-4]  p_1 [mm^-1]  p_2 [mm^-1]
    -3.186e-03   4.755e-05    -3.275e-05   -1.565e-05

We generate tie-points that cover the entire image and a small portion outside the effective area, so that we can guarantee good results also for the border regions. The corresponding distorted coordinates are obtained by applying the camera model, and the parameters of the inverse model are then solved with the LS method in Eq. (24). The results are given in Table 2, and the fitting residual between the inverse model and the true points is shown in Fig. 3.

Table 2. Parameters of the inverse model.

    a_1 = -8.328e-03   a_2 = 1.670e-04    a_3 = 3.269e-06    a_4 = 1.568e-05
    a_5 = 2.202e-04    a_6 = -1.518e-07   a_7 = -3.428e-08   a_8 = -1.151e-02

The residual is less than 0.0005 pixels. For more intensive distortion, the error will be slightly bigger, but under realistic conditions always less than 0.01 pixels, as the feature detection accuracy (std) was about 0.02 pixels [4].

In the second experiment, we generate a uniformly distributed random set of 2000 points in the image area. These points are first distorted and then corrected with the inverse model. The errors are shown as histograms in Fig. 4 in both horizontal and vertical directions.
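The tie-point idea can be sketched end to end: generate a grid of correct points, distort them with a known forward model, and fit an inverse mapping by linear least squares. A generic third-order polynomial basis is used here as a stand-in for the paper's eight-parameter inverse model; the coefficient value and helper names are illustrative.

```python
import numpy as np

k1 = -0.01  # illustrative radial distortion coefficient (not the paper's)

def distort(u, v):
    """Forward model: single-coefficient radial distortion."""
    r2 = u * u + v * v
    return u * (1 + k1 * r2), v * (1 + k1 * r2)

# Dense grid of "correct" tie-points and their distorted counterparts.
g = np.linspace(-1, 1, 25)
u, v = [a.ravel() for a in np.meshgrid(g, g)]
ud, vd = distort(u, v)

def basis(ud, vd):
    """Odd monomials up to third order in the distorted coordinates."""
    return np.stack([ud, vd, ud**3, ud**2 * vd, ud * vd**2, vd**3], axis=1)

# Linear least squares fit of the inverse-model parameters.
B = basis(ud, vd)
cu, *_ = np.linalg.lstsq(B, u, rcond=None)
cv, *_ = np.linalg.lstsq(B, v, rcond=None)

def correct(ud, vd):
    """Apply the fitted inverse model to distorted coordinates."""
    B = basis(np.atleast_1d(ud), np.atleast_1d(vd))
    return B @ cu, B @ cv
```

Because the inverse model is linear in its parameters, the fit is a single least-squares solve, which is the essential computational advantage over iterating the forward model per point.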
The error seems to have the same magnitude as the fitting residual. Therefore, we can affirm that the interpolation between the tie-points does not degrade the image correction noticeably.

5. Conclusions

A four-step procedure for camera calibration was presented in this article. This procedure can be utilized in various machine vision applications, but it is most beneficial in applications where high geometrical accuracy is needed. The procedure uses explicit calibration methods for mapping the 3-D coordinates to image coordinates, and an implicit inverse model for image correction. The experiments in the last section showed that the error caused by the inverse model is negligible. The procedure is implemented, and it will be available as a Matlab toolbox.

Acknowledgments

The support of The Graduate School in Electronics, Telecommunications and Automation (GETA) is gratefully acknowledged.

References

[1] Abdel-Aziz, Y. I. & Karara, H. M. (1971) Direct linear transformation into object space coordinates in close-range photogrammetry. Proc. Symposium on Close-Range Photogrammetry, Urbana, Illinois, p. 1-18.
[2] Faig, W. (1975) Calibration of close-range photogrammetric systems.
[3] Faugeras, O. D. & Toscani, G. (1987) Camera calibration for 3D computer vision. Proc. International Workshop on Industrial Applications of Machine Vision and Machine Intelligence, Silken, Japan, p. 240-247.
[4] Heikkilä, J. & Silvén, O. (1996) Calibration procedure for short focal length off-the-shelf CCD cameras. Proc. 13th International Conference on Pattern Recognition, Vienna, Austria.
[5] Melen, T. (1994) Geometrical modelling and calibration of video cameras for underwater navigation. Dr. ing thesis, Norges tekniske høgskole, Institutt for teknisk kybernetikk.
[6] Shih, S. W., Hung, Y. P. & Lin, W. S. (1993) Accurate linear technique for camera calibration considering lens distortion by solving an eigenvalue problem. Optical Engineering.
[7] Slama, C. C. (ed.) (1980) Manual of Photogrammetry, 4th ed., American Society of Photogrammetry, Falls Church, Virginia.
[8] Tsai, R. Y. (1987) A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE Journal of Robotics and Automation RA-3(4): 323-344.
[9] Wei, G. Q. & Ma, S. D.
(1993) A complete two-plane camera calibration method and experimental comparisons. Proc. 4th International Conference on Computer Vision, Berlin, Germany, p. 439-446.
[10] Weng, J., Cohen, P. & Herniou, M. (1992) Camera calibration with distortion models and accuracy evaluation. IEEE Transactions on Pattern Analysis and Machine Intelligence PAMI-14(10): 965-980.

Figure 3. Fitting residual.

Figure 4. Error caused by the back-projection model for 2000 randomly selected points in horizontal and vertical directions.