Automatic Camera Calibration Algorithm Development, 10-R8546
Inclusive Dates: 04/01/15 – 08/01/15
Background — Vision-based perception is already common in production automobiles, supporting active safety and driver-assist functions; however, these camera systems are primarily monocular. OEMs and suppliers are now considering stereo cameras for use in next-generation vehicles. Stereo cameras measure range (depth) to objects, which improves a vehicle's ability to detect and recognize vehicles and other objects, determine their distances, and extract drivable surfaces as it navigates through varied environments. Such capabilities are becoming necessary for highly automated and autonomous vehicles. Stereo cameras must be precisely aligned relative to each other to produce accurate disparity images. This project used the motion of the stereo cameras over time to capture multiple views of a natural scene and to extract and correlate three-dimensional feature points available within the scene. The stereo cameras provided synchronized frames from which opportunistic features were extracted over time and motion within the scene and used to find matching correspondences between the two cameras. The calibration pipeline focused on improving the rectification of the stereo images for better alignment of the epipolar geometry of the two camera images.
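The depth measurement that motivates this work follows directly from the disparity image of a rectified stereo pair: Z = f·B/d, where f is the focal length in pixels, B the baseline, and d the disparity. The sketch below illustrates this relationship; the focal length and baseline values are hypothetical, not parameters from the project hardware.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth for a rectified stereo pair: Z = f * B / d.

    disparity_px : disparity in pixels (scalar or array)
    focal_px     : focal length in pixels
    baseline_m   : distance between the two camera centers, in meters
    Pixels with zero or negative disparity map to infinite depth.
    """
    d = np.asarray(disparity_px, dtype=float)
    return np.where(d > 0, focal_px * baseline_m / np.maximum(d, 1e-9), np.inf)

# Hypothetical example: 800 px focal length, 20 cm baseline, 16 px disparity
z = depth_from_disparity(16.0, 800.0, 0.2)
print(z)  # 10.0 meters
```

The inverse relationship between disparity and depth is also why the calibration matters: a small rectification error shifts the measured disparity by a fraction of a pixel, and at long range that fraction translates into a large depth error.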
Approach — Traditional camera calibration requires an operator to hold a calibration pattern of known dimensions in front of the cameras. This tedious approach requires one or more trained operators to position the board and ensure suitable coverage of the camera field of view, and it offers no specific methodology for producing a reproducible calibration. We developed a calibration architecture that automatically detects "opportunistic" three-dimensional features in the environment, matches them between the stereo cameras, and tracks them over multiple frames. For each pair of synchronized frames, features in the left camera's image were matched against those in the right camera's image to find the inlier feature-point correspondences. The matched features were accumulated as the vehicle traversed the scene and used to generate projective rectification homographies that transform the left and right images so that their epipolar lines are aligned. The transformed right image improves the block matching used to generate the disparity image.
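The geometric core of this step is the fundamental matrix F relating inlier correspondences, x_r^T F x_l = 0; the rectifying homographies that align the epipolar lines can then be derived from F and the correspondences (for example, OpenCV's findFundamentalMat with RANSAC followed by stereoRectifyUncalibrated). The NumPy sketch below estimates F with the normalized eight-point algorithm; it is an illustrative reconstruction of the standard technique, not the project's implementation, and it assumes outliers have already been rejected.

```python
import numpy as np

def estimate_fundamental(pts_l, pts_r):
    """Normalized eight-point algorithm: estimate F from >= 8 point
    correspondences so that x_r^T F x_l = 0 for matched homogeneous
    points x_l (left image) and x_r (right image), given as Nx2 arrays."""
    def normalize(pts):
        # Translate to the centroid and scale so the mean distance is sqrt(2);
        # this conditioning step is what makes the eight-point algorithm stable.
        c = pts.mean(axis=0)
        s = np.sqrt(2) / np.mean(np.linalg.norm(pts - c, axis=1))
        T = np.array([[s, 0, -s * c[0]],
                      [0, s, -s * c[1]],
                      [0, 0, 1.0]])
        ph = np.column_stack([pts, np.ones(len(pts))]) @ T.T
        return ph, T

    pl, Tl = normalize(pts_l)
    pr, Tr = normalize(pts_r)
    # Each correspondence contributes one row of the linear system A f = 0,
    # with f the row-major flattening of F.
    A = np.column_stack([pr[:, 0:1] * pl, pr[:, 1:2] * pl, pl])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce rank 2: a valid fundamental matrix is singular.
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    # Undo the normalization so F applies to the original pixel coordinates.
    return Tr.T @ F @ Tl
```

With the correspondences accumulated over many frames, the same F (and hence the rectification) is re-estimated continuously, which is what makes the calibration automatic rather than a one-time bench procedure.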
Accomplishments — The resulting camera calibration architecture provides an improved capability for developing robust automatic calibration systems of interest to military, agricultural, and automotive clients, reducing the costs incurred by complete recalibration.