1. Introduction

Nowadays, vision-based measurement and control systems are widely used in fields such as three-dimensional (3-D) reconstruction, manufacturing, motion estimation, and surveillance. These systems typically comprise multiple cameras working cooperatively. The geometrical relationships between cameras, a fundamental topic in multivision systems, have been described in Refs. 1–4. To recover these relationships, traditional solutions5–8 usually place a calibration object with matching features in the cameras' overlapping field of view (FOV). With these methods, both the intrinsic and extrinsic camera parameters can be estimated accurately. In full-view or large-scale vision measurements, however, a common situation is to deal with cameras with nonoverlapping FOVs. Without a shared FOV, the feature correspondences required by traditional calibration methods cannot be obtained. Calibration of nonoverlapping cameras is therefore an important and challenging task.

Recently, several methods have been presented to solve this problem. A commonly used approach9–11 relies on large-scale surveying equipment such as theodolites or laser trackers, with which the 3-D points of multiple calibration objects placed in front of the nonoverlapping cameras can easily be obtained. These methods require complex operation and high-precision equipment, which is cumbersome and inconvenient, especially for field calibrations; moreover, the cost of such equipment is prohibitive. Alternatively, nonstandard calibration objects that can be "seen" by multiple nonoverlapping cameras have been applied in some studies. For example, Liu et al.12 use a long one-dimensional target and Zhang et al.13 use two planar targets fixed together to calibrate cameras in nonoverlapping configurations. In these methods, the target can be moved freely and each camera only needs a partial view of the target; the main practical restriction is the stability and precision of such large targets. In vision-based robotics, Lebraly et al.14 use a planar mirror to create an overlap between the views of different cameras, and the impact of mirror refraction is also considered in their calibration algorithm. Their method is effective and easy to carry out; however, to avoid degeneracy, the mirror must be placed delicately and the calibration object must be small, which limits the achievable precision. In vision-based surveillance, structure from motion has been studied and applied to calibrate multiple cameras.15–18 In these methods, target trajectories are estimated from a motion model built from the positions measured in the FOV of each sensor, and the relative orientation and location of the cameras are calculated from the observed and estimated target positions. Such methods are suitable for large-scale surveillance networks, but they require scene information that is hard to obtain in industrial measurements, and their precision remains to be improved. Addressing the same problem, a previous study19 utilizes pairs of skew laser lines to calibrate nonoverlapping cameras. However, because the laser lines must be directed into the range of the respective cameras, a large number of line lasers has to be added to the system as the number of cameras grows, which is inconvenient in practical applications.

In this paper, a novel calibration method using light planes is proposed.
The light planes, which can be generated by a line laser projector or a rotary laser level, serve as the calibration objects. The coplanarity of the light planes provides the constraints used to recover the camera geometry. Compared with a laser line, the image of a laser plane contains more information, which increases the accuracy of feature extraction; moreover, a laser plane can cover a larger space, which makes the method more flexible and suitable for field calibrations.

The remainder of this paper is organized as follows. A brief introduction to the camera model and projective transformation is presented in Sec. 2. Section 3 details the calibration method: the main principle and the coplanarity constraint are described in Sec. 3.1, the method of light plane 3-D reconstruction is given in Sec. 3.2, and the procedure of camera geometry estimation is described in Sec. 3.3. Section 4 provides results on both synthetic and real data. Conclusions are given in Sec. 5.

2. Notations

In this paper, a two-dimensional (2-D) image point is denoted by $m = [u, v]^T$ and a 3-D world point by $M = [X, Y, Z]^T$. The corresponding homogeneous coordinates are denoted by $\tilde{m} = [u, v, 1]^T$ and $\tilde{M} = [X, Y, Z, 1]^T$. Based on the pinhole camera model, the mapping of a 3-D world point to a 2-D image point is described as

$s\tilde{m} = K[R \;\; t]\tilde{M}$,  (1)

where $s$ is an arbitrary scale factor that is not equal to 0. $K$ is called the intrinsic matrix and contains five parameters: $\alpha$ and $\beta$ are the scale factors along the image axes $u$ and $v$, $(u_0, v_0)$ is the principal point, and $\gamma$ is the skew of the two image axes, which in practice is almost always set to 0. $[R \;\; t]$, called the extrinsic matrix, is composed of a rotation matrix $R$ and a translation vector $t$ from world coordinates to camera coordinates.

If the world coordinate frame is established on a plane (with the $Z$-axis perpendicular to it), then a point on the plane is $M = [X, Y, 0]^T$. Let us redefine $\tilde{M}$ as $[X, Y, 1]^T$ and denote the $i$'th column of the rotation matrix $R$ by $r_i$. From Eq. (1), we have

$s\tilde{m} = K[r_1 \;\; r_2 \;\; t]\tilde{M}$.  (2)

According to projective geometry, this plane-to-plane mapping can also be expressed by a projective transform

$s\tilde{m} = H\tilde{M}$,  (3)

where $H$ is a homography matrix defined up to a scale factor. Let us denote the $i$'th column of $H$ by $h_i$. From Eqs. (2) and (3), we have

$[h_1 \;\; h_2 \;\; h_3] = \lambda K[r_1 \;\; r_2 \;\; t]$.  (4)

If $K$ and $H$ are known, then the extrinsic matrix is readily computed. From Eq. (4), we have

$r_1 = \lambda K^{-1}h_1, \quad r_2 = \lambda K^{-1}h_2, \quad r_3 = r_1 \times r_2, \quad t = \lambda K^{-1}h_3$,  (5)

with $\lambda = 1/\|K^{-1}h_1\|$.

3. Method

3.1. Main Principle and Coplanarity Constraint

A fixed light plane in space is expressed by different planar equations in the respective camera coordinate frames, owing to the different orientation and position of each camera. Conversely, after applying the rigid transform that represents the geometry between the cameras, the individual plane representations should coincide with each other. This is what we call the coplanarity constraint. Based on this fact, the camera geometry can be recovered by placing the line laser projector and reconstructing the light plane several times. Without loss of generality, two cameras are taken as an instance to explain the principle, since a multicamera system can be decomposed into several camera pairs. The scheme of the calibration setup is illustrated in Fig. 1. The two cameras are set up in the measuring field without any overlapping FOV, according to their orientations and positions. Let us denote the two cameras by Camera 1 and Camera 2, with camera coordinate frames $C_1$ and $C_2$, respectively. The geometry transform matrix between the cameras is denoted by $T = [R, t]$. A line laser projector is introduced into the field and projects a large light plane, denoted by $\pi$. The projector is positioned so that the light plane intersects the views of both cameras.
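Before formalizing the constraint, it may help to see how a plane's parameters change under a rigid transform. The following numpy sketch (our illustration, not code from the paper) re-expresses a plane $n^T x = d$ across camera frames and checks that a point on the plane stays on it; the two relations it encodes are exactly the constraints derived as Eqs. (8) and (9) below.

```python
import numpy as np

def transform_plane(n1, d1, R, t):
    """Re-express the plane n1 . x = d1 (given in frame C1) in frame C2,
    where points map between the frames as x2 = R @ x1 + t."""
    n2 = R @ n1          # the normal simply rotates
    d2 = d1 + n2 @ t     # the offset picks up the projection of t on n2
    return n2, d2

# Quick check: a point on the plane stays on the transformed plane.
ang = np.deg2rad(30.0)
R = np.array([[np.cos(ang), -np.sin(ang), 0.0],
              [np.sin(ang),  np.cos(ang), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([0.5, -1.0, 3.0])
n1, d1 = np.array([0.0, 0.0, 1.0]), 2.0   # plane z = 2 in frame C1
x1 = np.array([1.0, 4.0, 2.0])            # a point on that plane
n2, d2 = transform_plane(n1, d1, R, t)
assert np.isclose(n2 @ (R @ x1 + t), d2)  # the representations coincide
```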
To make the light plane visible and reconstructible, a planar pattern board is placed in front of each camera, so that a laser line is projected onto each board. By taking images of the board at different positions, the equation of the plane $\pi$ in each camera's coordinate frame can be obtained.

A plane can also be defined by a normal vector and an offset. As shown in Fig. 2, the plane expressed in frame $C_1$ is denoted by $\pi_1: n_1^T x = d_1$ and in frame $C_2$ by $\pi_2: n_2^T x = d_2$. After the rigid transformation under $T$, $\pi_1$ of frame $C_1$ is denoted by $\pi_1'$ in frame $C_2$. Since $\pi_1'$ and $\pi_2$ represent the same plane expressed in the same coordinate frame, they should coincide with each other. Then, we have

$\pi_1' = \pi_2$,  (6)

with

$n_1' = Rn_1, \quad d_1' = d_1 + n_1'^T t$,  (7)

which yields

$n_2 = Rn_1$,  (8)

$n_2^T t = d_2 - d_1$.  (9)

Here, we get two constraints on the geometry transformation matrix $T$ from one light plane. To solve the rotation matrix $R$, at least two constraints like Eq. (8) are needed, and to solve the translation vector $t$, at least three constraints like Eq. (9) are needed; therefore, at least three light planes are needed to solve $T$. Moreover, the light planes must not be parallel to each other, since parallel planes provide duplicate constraints.

3.2. Light Plane 3-D Reconstruction

Based on the principle above, the first step of our method is to reconstruct the light plane in each camera's coordinate frame. This is essentially the calibration problem of a structured light vision sensor, which can be found in the related literature.20,21 Our solution is as follows:

1. Place the planar pattern board (a chessboard) in the camera's view so that the laser line is projected onto it, and capture an image.
2. Compute the homography between the board plane and the image from the detected pattern corners, and recover the board pose in the camera frame by Eq. (5).
3. Extract the center points of the laser stripe in the image.22
4. Back-project the stripe points onto the board plane to obtain their 3-D coordinates in the camera frame.
5. Move the board to a new position and repeat steps 1–4.
6. Fit the light plane to all of the reconstructed 3-D stripe points.
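The final fitting step lends itself to a compact implementation. Below is a minimal numpy sketch (ours, not the authors' code) that recovers the plane parameters $(n, d)$ from the 3-D stripe points gathered over all board positions; the array name `stripe_points_per_pose` is hypothetical.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through an (N, 3) array of 3-D points.
    Returns (n, d) with unit normal n such that n . x = d holds for
    points x on the plane (orthogonal-distance regression)."""
    centroid = points.mean(axis=0)
    # The right singular vector with the smallest singular value of the
    # centered cloud is the direction of least spread, i.e., the normal.
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    n = vt[-1]
    d = float(n @ centroid)
    return n, d

# Hypothetical usage: stripe_points_per_pose is a list of (Ni, 3) arrays,
# one per board position, reconstructed via the board pose of Eq. (5).
# n, d = fit_plane(np.vstack(stripe_points_per_pose))
```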
3.3. Camera Geometry Estimation

This section details the camera geometry estimation procedure. Suppose $n$ light planes are reconstructed using the solution above. We then have $n$ constraints on the rotation matrix $R$, so $R$ can be estimated by minimizing the following least-squares quantity derived from Eq. (8):

$\min_R \sum_{i=1}^{n} \|n_{2i} - Rn_{1i}\|^2$.  (10)

This is a nonlinear minimization problem owing to the orthogonality of $R$. Instead of employing a nonlinear iterative algorithm, we linearize the problem by mapping rotation matrices to unit quaternions.23 Suppose the quaternion is defined as the four-dimensional vector $q = [q_0, q_1, q_2, q_3]^T$ with $q^T q = 1$. Then, we minimize the quantity

$\sum_{i=1}^{n} \|A_i q\|^2$,  (11)

where

$A_i = \begin{bmatrix} 0 & (n_{2i} - n_{1i})^T \\ n_{2i} - n_{1i} & [n_{1i} + n_{2i}]_\times \end{bmatrix}$,  (12)

and $[a]_\times$ is the anti-symmetric matrix with respect to a given vector; for any vector $a = [a_1, a_2, a_3]^T$, $[a]_\times$ is defined as

$[a]_\times = \begin{bmatrix} 0 & -a_3 & a_2 \\ a_3 & 0 & -a_1 \\ -a_2 & a_1 & 0 \end{bmatrix}$.  (13)

Equation (11) can be rewritten as

$\min_{q^T q = 1} q^T B q, \quad B = \sum_{i=1}^{n} A_i^T A_i$,  (14)

which can be solved by the eigenvalue method: the solution is the eigenvector of $B$ associated with the smallest eigenvalue. After the best $q$ is estimated, $R$ is computed from

$R = \begin{bmatrix} q_0^2+q_1^2-q_2^2-q_3^2 & 2(q_1q_2-q_0q_3) & 2(q_1q_3+q_0q_2) \\ 2(q_1q_2+q_0q_3) & q_0^2-q_1^2+q_2^2-q_3^2 & 2(q_2q_3-q_0q_1) \\ 2(q_1q_3-q_0q_2) & 2(q_2q_3+q_0q_1) & q_0^2-q_1^2-q_2^2+q_3^2 \end{bmatrix}$.  (15)

Similarly, we have $n$ constraints on the translation vector $t$. Once $R$ is solved, $t$ can be estimated by minimizing the following least-squares quantity derived from Eq. (9):

$\min_t \sum_{i=1}^{n} \left(n_{2i}^T t - (d_{2i} - d_{1i})\right)^2$.  (16)

Minimizing Eq. (16) requires that

$n_{2i}^T t = d_{2i} - d_{1i}, \quad i = 1, \ldots, n$,  (17)

hold as nearly as possible, which can be rewritten as

$Nt = d, \quad N = [n_{21}, \ldots, n_{2n}]^T, \quad d = [d_{21}-d_{11}, \ldots, d_{2n}-d_{1n}]^T$.  (18)

Thus, $t$ is estimated from Eq. (18) by singular value decomposition or the pseudoinverse.
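To make the estimation procedure concrete, here is a minimal numpy sketch of Eqs. (10)–(18), under our naming assumptions rather than as the authors' code: `n1s`/`n2s` hold the unit normals of the reconstructed light planes in the two camera frames, and `d1s`/`d2s` the corresponding offsets.

```python
import numpy as np

def skew(a):
    """Anti-symmetric cross-product matrix [a]x of Eq. (13)."""
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

def solve_rotation(n1s, n2s):
    """Estimate R such that n2 ~ R n1 for all normal pairs, via the
    unit-quaternion linearization of Eqs. (11)-(15)."""
    B = np.zeros((4, 4))
    for n1, n2 in zip(n1s, n2s):
        A = np.zeros((4, 4))             # A_i of Eq. (12)
        A[0, 1:] = n2 - n1
        A[1:, 0] = n2 - n1
        A[1:, 1:] = skew(n1 + n2)
        B += A.T @ A                     # B of Eq. (14)
    _, v = np.linalg.eigh(B)             # eigenvalues in ascending order
    q0, q1, q2, q3 = v[:, 0]             # eigenvector of the smallest one
    return np.array([                    # Eq. (15): R from the quaternion
        [q0*q0+q1*q1-q2*q2-q3*q3, 2*(q1*q2-q0*q3), 2*(q1*q3+q0*q2)],
        [2*(q1*q2+q0*q3), q0*q0-q1*q1+q2*q2-q3*q3, 2*(q2*q3-q0*q1)],
        [2*(q1*q3-q0*q2), 2*(q2*q3+q0*q1), q0*q0-q1*q1-q2*q2+q3*q3]])

def solve_translation(n2s, d1s, d2s):
    """Estimate t from the stacked system N t = d of Eq. (18)."""
    N = np.asarray(n2s)                          # rows are n2i^T
    d = np.asarray(d2s) - np.asarray(d1s)        # entries are d2i - d1i
    return np.linalg.pinv(N) @ d                 # least-squares solution
```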
4. Experiment

4.1. Synthetic Data

The proposed method is first carried out on synthetic data to test its performance in the presence of noise. The synthetic data are created with two simulated cameras whose intrinsic parameters, image resolution, rotation (in Euler angles), and translation vector are fixed as the ground truth. In this experiment, five random light planes are generated, Gaussian noise with zero mean and standard deviation $\sigma$ is added to the image points, and the estimated geometry is compared with the ground truth. The noise level $\sigma$ is varied from 0 to 1.5 pixels; for each noise level, 100 independent trials are performed and the results are averaged. Figure 3 shows the errors in the recovery of the camera geometry: all errors increase linearly with the noise level.

Ideally, the light plane is absolutely flat, but in practice it is slightly distorted, especially when generated by an off-the-shelf line laser projector. We simply model the light plane distorted by the projector lens as a sector of a quadratic cone with semiapex angle $\theta$. According to most laser projector specifications, the curvature of the laser line at 5 m is small, corresponding to $\theta$ larger than 89.912 deg for a common projector. Based on this curvature model, our method is applied to distorted data with the laser line curvature varied over and beyond the specified range. For each given curvature, Gaussian noise with zero mean and a standard deviation of 0.2 pixels is added to the image points, and 100 independent trials are performed. The averaged results are shown in Fig. 4. When the curvature stays within the specification, the relative errors are around 0.8%, slightly worse than the results with random noise alone; even for curvatures well beyond what occurs in practice, the relative errors are no more than 3%.

To investigate the performance with respect to the distance between the cameras, a third experiment is carried out. All parameters are kept unchanged except the translation vector; the distance is varied from 0.5 to 10 m. For each distance, Gaussian noise with zero mean and a standard deviation of 0.2 pixels is added to the image points, the laser line curvature at 5 m is also applied to the light plane, and 100 independent trials are performed. The averaged results are shown in Fig. 5. The distance has almost no influence on the rotation error, but the translation error increases with distance. The reason is that, when calibrating widely separated cameras, the orientation of the light plane can be varied only within a small range if every camera is to "see" all the light planes; in other words, the normal vectors of all the light planes differ only slightly. This leads to degenerate configurations, especially in the computation of the translation vector. In fact, the condition number of the matrix $N$ in Eq. (18) gradually worsens as the range of orientation changes shrinks, making the results more sensitive to noise. Despite this degeneracy, the method remains usable: the rotation error is below 0.005 deg in all trials, and the baseline error grows with distance but stays around 2 mm at the largest separations, which is adequate for most practical applications.

4.2. Real Data

The method is used to calibrate a two-camera vision system with nonoverlapping FOVs, shown in Fig. 6. The system consists of two CMOS cameras (Aigo DLC-130) with 12-mm lenses; the baseline of the two cameras is about 1000 mm. The light plane is generated by an ordinary line laser projector. The chessboard contains a pattern of squares, and the distance between adjacent square corners is 30 mm. The laser projector is placed at six random positions and orientations to generate six light planes; for each light plane, the chessboard is moved three times in front of each camera. Figure 7 shows the estimated geometry of the cameras and the reconstructed light planes. To evaluate the calibration stability, we also applied our method to all quintuple combinations of the six light planes. The results, shown in Table 1, are very consistent with each other, and the standard deviations of all the parameters are very small, which indicates that the proposed method is stable.

Table 1. Stability of results over all quintuple combinations of the six light planes.
To evaluate the calibration accuracy, the vision system is also calibrated by a double-theodolite-based method9 using two Leica T1800 theodolites (arcsecond-level angle measurement accuracy). Both results are listed in Table 2. The results of the two methods are comparable, with small differences in both angle and baseline (see Table 2).

Table 2. Comparison with the double-theodolite-based method.
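As a self-contained check of the estimation pipeline, a noise-free variant of the synthetic experiment of Sec. 4.1 can be reproduced with the sketches given earlier (`solve_rotation` and `solve_translation` are the hypothetical helpers from the Sec. 3.3 sketch):

```python
import numpy as np

# Generate plane pairs from a known (R, t), then recover the geometry
# from the coplanarity constraints alone.
rng = np.random.default_rng(42)
ang = np.deg2rad(40.0)
R_true = np.array([[np.cos(ang), 0.0, np.sin(ang)],
                   [0.0, 1.0, 0.0],
                   [-np.sin(ang), 0.0, np.cos(ang)]])
t_true = np.array([1.0, 0.2, -0.5])          # baseline, in meters

n1s, d1s, n2s, d2s = [], [], [], []
for _ in range(5):                           # five non-parallel planes
    n1 = rng.normal(size=3)
    n1 /= np.linalg.norm(n1)                 # unit normal in frame C1
    d1 = rng.uniform(1.0, 5.0)
    n2 = R_true @ n1                         # Eq. (8)
    d2 = d1 + n2 @ t_true                    # Eq. (9)
    n1s.append(n1); d1s.append(d1); n2s.append(n2); d2s.append(d2)

R = solve_rotation(n1s, n2s)
t = solve_translation(n2s, d1s, d2s)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```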
5. Conclusion

In this paper, a calibration method for nonoverlapping cameras is presented. A large light plane, which can be generated by an ordinary line laser projector or a rotary laser level, is utilized as the calibration object. The method does not require any overlapping camera configuration. Benefiting from its "massless" and "nonsolid" qualities, the light plane can be placed freely and easily made partly visible within every camera's view, which makes the method flexible and suitable for field calibrations. The experimental results on synthetic data show that the proposed method is robust to noise and can be used for large-scale calibration. Results on real data likewise show reliability and accuracy comparable to the traditional double-theodolite-based method.

Acknowledgments

This research has been supported by the National Natural Science Foundation of China under Grant Nos. 61275162 and 51175027.

References

1. R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, 2nd ed., Cambridge University Press, Cambridge (2003).
2. F. Kahl and R. Hartley, "Multiple-view geometry under the L∞-norm," IEEE Trans. Pattern Anal. Mach. Intell. 30(9), 1603–1617 (2008). http://dx.doi.org/10.1109/TPAMI.2007.70824
3. O. Faugeras and Q.-T. Luong, The Geometry of Multiple Images, MIT Press, Cambridge, Massachusetts (2001).
4. A. Heyden, "Geometry and algebra of multiple projective transformations," PhD Thesis, Lund University, Sweden (1995).
5. K. D. Gremban, C. E. Thorpe, and T. Kanade, "Geometric camera calibration using systems of linear equations," in IEEE Int. Conf. Robotics and Automation, 562–567 (1988).
6. R. Tsai, "A versatile camera calibration technique for high accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses," IEEE J. Rob. Autom. 3(4), 323–344 (1987). http://dx.doi.org/10.1109/JRA.1987.1087109
7. Z. Y. Zhang, "A flexible new technique for camera calibration," IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000). http://dx.doi.org/10.1109/34.888718
8. B. Caprile and V. Torre, "Using vanishing points for camera calibration," Int. J. Comput. Vis. 4(2), 127–139 (1990). http://dx.doi.org/10.1007/BF00127813
9. R. S. Lu and Y. F. Li, "A global calibration method for large-scale multi-sensor visual measurement systems," Sens. Actuators A 116(3), 384–393 (2004). http://dx.doi.org/10.1016/j.sna.2004.05.019
10. J. Beraldin et al., "Object model creation from multiple range images: acquisition, calibration, model building and verification," in Proc. Int. Conf. Recent Advances in 3-D Digital Imaging and Modeling, 326–333 (1997).
11. B. Ying, Z. Hanqi, and Z. S. Roth, "Experiment study of PUMA robot calibration using a laser tracking system," in IEEE Int. Workshop on Soft Computing in Industrial Applications, 139–144 (2003).
12. Z. Liu et al., "Novel calibration method for non-overlapping multiple vision sensors based on 1D target," Opt. Lasers Eng. 49(4), 570–577 (2011). http://dx.doi.org/10.1016/j.optlaseng.2010.11.02
13. G. J. Zhang et al., "Novel calibration method for a multi-sensor visual measurement system based on structured light," Chin. J. Mech. Eng. 25(2), 405–410 (2012). http://dx.doi.org/10.3901/CJME.2012.02.285
14. P. Lebraly et al., "Flexible extrinsic calibration of non-overlapping cameras using a planar mirror: application to vision-based robotics," in Int. Conf. Intelligent Robots and Systems (IROS), 5640–5647 (2010).
15. E. Sandro, W. Felix, and K. Reinhard, "Calibration of a multi-camera rig from non-overlapping views," in Pattern Recognition, Lecture Notes in Computer Science 4713, 82–91 (2007). http://dx.doi.org/10.1007/978-3-540-74936-3
16. N. Anjum, M. Taj, and A. Cavallaro, "Relative position estimation of non-overlapping cameras," in IEEE Int. Conf. Acoustics, Speech and Signal Processing, 281–284 (2007).
17. C. Stauffer and T. Kinh, "Automated multi-camera planar tracking correspondence modeling," in IEEE Computer Society Conf. Computer Vision and Pattern Recognition, 259–266 (2003).
18. A. Rahimi, B. Dunagan, and T. Darrell, "Simultaneous calibration and tracking with a network of non-overlapping sensors," in IEEE Conf. Computer Vision and Pattern Recognition, 187–194 (2004).
19. Q. Z. Liu et al., "Global calibration method of multi-sensor vision system using skew laser lines," Opt. Eng. 49 (2010). http://dx.doi.org/10.1117/1.3366667
20. J. A. Muñoz Rodríguez, "Laser imaging and approximation networks for calibration of three-dimensional vision," Opt. Laser Technol. 43(3), 491–500 (2011). http://dx.doi.org/10.1016/j.optlastec.2010.05.020
21. F. Q. Zhou and G. J. Zhang, "Complete calibration of a structured light stripe vision sensor through planar target of unknown orientations," Image Vis. Comput. 23(1), 59–67 (2005). http://dx.doi.org/10.1016/j.imavis.2004.07.006
22. C. Steger, "An unbiased detector of curvilinear structures," IEEE Trans. Pattern Anal. Mach. Intell. 20(2), 113–125 (1998). http://dx.doi.org/10.1109/34.659930
23. B. K. P. Horn, "Closed-form solution of absolute orientation using unit quaternions," J. Opt. Soc. Am. A 4(4), 629–642 (1987). http://dx.doi.org/10.1364/JOSAA.4.000629
Biography

Qianzhe Liu received his PhD degree from the School of Instrumentation Science and Opto-electronics Engineering at Beihang University, China, in 2012. He is currently a lecturer in the School of Instrumentation Science and Opto-electronics Engineering, Beijing Information Science & Technology University, China. His research interests are computer vision and optical fiber sensing.

Junhua Sun received his PhD degree from the School of Instrumentation Science and Opto-electronics Engineering at Beihang University, China, in 2006. He is currently an associate professor in the School of Instrumentation Science and Opto-electronics Engineering, Beihang University, China. His research interests are precision measurement and machine vision.

Yuntao Zhao received his BS degree from the School of Instrumentation Science and Opto-electronics Engineering at Beihang University, China, in 2006. He is currently pursuing the MS degree in the School of Instrumentation Science and Opto-electronics Engineering, Beihang University, China. His research interests are precision measurement and machine vision.

Zhen Liu received his PhD degree from the School of Instrumentation Science and Opto-electronics Engineering at Beihang University, China, in 2010. Since 2010, he has been a lecturer in the School of Instrumentation Science and Opto-electronics Engineering, Beihang University, China. His research interests are laser precision measurement and machine vision.