This paper presents a calibration algorithm that precisely estimates the rigid-body transformation between a camera and a light detection and ranging (LIDAR) sensor without requiring an artificial target object. The proposed algorithm estimates calibration parameters by minimizing a cost function that evaluates the edge alignment between the two sensor measurements. In particular, the cost function is constructed using a projection model-based many-to-many correspondence of the edges to fully exploit measurements with different densities (dense photometry and sparse geometry). The alignment of the many-to-many correspondence is represented using the Gaussian mixture model (GMM) framework. Here, the components of the GMM, namely weight, displacement, and standard deviation, are derived to suitably capture the intensity, location, and influential range of the edge measurements, respectively. The derived cost function is optimized by gradient descent with an analytical derivative. A coarse-to-fine scheme is also applied by gradually decreasing the standard deviation of the GMM to enhance the robustness of the algorithm. Extensive indoor and outdoor experiments validate that the proposed GMM strategy improves calibration performance. The experimental results also show that the proposed algorithm outperforms previous methods in terms of precision and accuracy, providing calibration parameters with standard deviations of less than 0.6° and 2.1 cm and a reprojection error of 1.78 for a 2.1-megapixel image (2,048 × 1,024) in the best case.
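The GMM-based edge-alignment idea can be illustrated with a minimal sketch. Here, each image edge pixel contributes one Gaussian component whose weight reflects edge intensity, whose mean is the pixel location, and whose standard deviation sets its influential range; the cost is the negative summed mixture response at the projected LIDAR edge points, evaluated over a decreasing sigma schedule for the coarse-to-fine scheme. All function and variable names, and the toy data, are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def gmm_edge_cost(img_edges, img_weights, lidar_edges, sigma):
    """Many-to-many GMM alignment cost (lower = better aligned).

    img_edges   : (M, 2) image edge pixel locations (GMM means)
    img_weights : (M,)   edge intensities (GMM component weights)
    lidar_edges : (N, 2) projected LIDAR edge points
    sigma       : scalar standard deviation shared by all components
    """
    # Pairwise squared distances between every LIDAR point and every
    # image edge pixel: shape (N, M).
    d2 = ((lidar_edges[:, None, :] - img_edges[None, :, :]) ** 2).sum(-1)
    # Mixture response at each projected point, summed over components.
    lik = (img_weights[None, :] * np.exp(-d2 / (2.0 * sigma**2))).sum(-1)
    return -lik.sum()

# Toy example (values illustrative): image edges along a horizontal
# line, and two candidate LIDAR projections.
img_edges = np.array([[10.0, 10.0], [20.0, 10.0], [30.0, 10.0]])
img_weights = np.ones(3)                        # uniform edge intensity
aligned = img_edges + np.array([0.5, 0.0])      # nearly aligned projection
misaligned = img_edges + np.array([8.0, 0.0])   # poorly aligned projection

# Coarse-to-fine: re-evaluate the cost while shrinking sigma; the
# aligned candidate scores lower at every scale.
for sigma in (16.0, 8.0, 4.0):
    c_good = gmm_edge_cost(img_edges, img_weights, aligned, sigma)
    c_bad = gmm_edge_cost(img_edges, img_weights, misaligned, sigma)
    print(f"sigma={sigma:5.1f}  aligned={c_good:.3f}  misaligned={c_bad:.3f}")
```

In a full pipeline, this cost would be minimized over the six extrinsic parameters by gradient descent (the paper uses an analytical derivative), with the sigma schedule widening each component's basin of attraction early on and sharpening localization later.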