Presentation + Paper
Self-calibration of sensors using point cloud feature extraction
8 November 2020
Abstract
As Autonomous Vehicles (AVs) become more prevalent on the roads, there is increased focus on the reliability of the underlying sensor technology supporting the decisions made by the AI. Continuous calibration of visual sensors such as LiDAR and cameras is essential for the commercial development and societal acceptance of fully autonomous vehicles. As we move towards full autonomy, it is reasonable to demand that sensor tolerances be minimized. The traditional calibration methods used today are time consuming, require extensive setup and configuration, and are consequently prone to errors due to their reliance on human intervention. Furthermore, because calibration is typically a one-time procedure, sensor errors can creep into the system, and environmental factors can degrade sensor performance during the operation of an autonomous vehicle. Recently, various target-less calibration methods have been proposed, involving both traditional feature picking and deep learning-based approaches, but these still require initial calibration parameters to be effective. Our method does not rely on obtaining initial calibration parameters from visual frames or on ground truth values. This makes the approach more versatile for calibration in AVs and subsequently improves reliability in on-road navigation and decision making. We propose an improvement on our previously reported robust continuous calibration approach, which uses one or more objects identified in the visual frames to calibrate one visual sensor with respect to another without relying on initial calibration parameters or ground truth values. Our approach extracts the object point cloud data (PCD), downsamples it, and cost-optimizes the extracted feature points to compute the extrinsic calibration parameters.
Without initial calibration parameters, our method uses multiple frames to calibrate a sensor with respect to any other sensor, provided that PCD can be generated or derived from the sensors. Our approach performs significantly better when no initial calibration values are provided, and it can thus be applied continuously to recalibrate any sensors that become mis-calibrated during the operation of the AV. We have tested our method on the publicly available KITTI dataset and benchmark our results against state-of-the-art methodologies. Our goal is to automate the process of object detection and point cloud extraction and to evaluate its speed and accuracy.
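At its core, the pipeline described above reduces to downsampling point clouds and estimating the rigid transform (extrinsics) that aligns one sensor's points with another's. The sketch below is a rough illustration of that idea, not the authors' algorithm: it voxel-downsamples a cloud and recovers a rotation and translation with the closed-form Kabsch/SVD alignment, assuming known point correspondences. Both function names are hypothetical.

```python
import numpy as np

def voxel_downsample(points, voxel=0.5):
    """Keep one representative point per voxel -- a crude stand-in for PCD downsampling."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(idx)]

def estimate_extrinsics(src, dst):
    """Closed-form rigid alignment (Kabsch): find R, t minimizing
    ||(src @ R.T + t) - dst||^2 given matched point pairs."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

In a real target-less setting the correspondences are not given, so the cost optimization the abstract refers to would iterate between matching extracted feature points and re-solving for the transform, rather than applying a single closed-form step.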
Conference Presentation
© (2020) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Pradeep Anand Ravindranath, Kutluhan Buyukburc, and Ali Hasnain "Self-calibration of sensors using point cloud feature extraction", Proc. SPIE 11525, SPIE Future Sensing Technologies, 115250M (8 November 2020); https://doi.org/10.1117/12.2580630
KEYWORDS: Calibration, Sensors, Clouds, Feature extraction, Visualization, Sensor calibration, Cameras