Abstract
To enhance the accuracy and stability of target ranging in autonomous target following for intelligent vehicles, this study proposes a target ranging method based on the fusion of LiDAR and camera data. First, to improve the clustering performance of the traditional Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm, whose fixed neighborhood-radius parameter degrades clustering quality, this study analyzes the relationship between point cloud density and LiDAR resolution in both horizontal and vertical directions at varying distances. Based on this analysis, an adaptive density-based clustering algorithm with a locally adaptive radius is developed. Compared to the traditional approach, the proposed method achieved an 8.5% increase in correct clustering rate while reducing the missed clustering, over-clustering, and under-clustering rates by 6.1%, 1.9%, and 0.5%, respectively. Second, to address the low localization accuracy of cameras and LiDAR's limitations in target classification, an object detection fusion algorithm is employed to associate LiDAR points with image bounding boxes. Additionally, to mitigate cumulative errors affecting the final calibration results in the joint calibration of LiDAR and camera, an optimized selection of calibration samples is introduced to enhance calibration robustness. Finally, the precise positioning information of the detected target is obtained by extracting the corresponding point cloud clusters within the image detection frame. The proposed target ranging method is validated through simulation testing on benchmark datasets and real-world experiments, demonstrating both the effectiveness of the algorithm and its strong transferability to real-world environments.
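The core idea of a locally adaptive radius can be illustrated with a minimal sketch: since the spacing between adjacent LiDAR returns grows roughly linearly with range (proportional to range times the angular resolution), the DBSCAN neighborhood radius can be made a function of each point's distance from the sensor rather than a global constant. The sketch below is a simplified illustration, not the authors' implementation; the parameter names (`h_res_deg`, `scale`, `eps_min`) and their values are hypothetical.

```python
import numpy as np

def adaptive_eps(point, h_res_deg=0.2, scale=3.0, eps_min=0.1):
    """Hypothetical adaptive radius: at range r, adjacent returns from a
    scanner with horizontal resolution h_res_deg are ~ r*tan(h_res) apart,
    so the neighborhood radius is scaled with range (floored at eps_min)."""
    r = np.linalg.norm(point)
    return max(eps_min, scale * r * np.tan(np.radians(h_res_deg)))

def adaptive_dbscan(points, min_pts=4):
    """Minimal DBSCAN variant where each point uses its own radius."""
    n = len(points)
    labels = np.full(n, -1)              # -1 = noise / unassigned
    visited = np.zeros(n, dtype=bool)
    cluster = 0
    for i in range(n):
        if visited[i]:
            continue
        visited[i] = True
        eps = adaptive_eps(points[i])
        neigh = [j for j in range(n)
                 if np.linalg.norm(points[j] - points[i]) <= eps]
        if len(neigh) < min_pts:         # not a core point at this range
            continue
        labels[i] = cluster
        queue = [j for j in neigh if j != i]
        while queue:                     # expand cluster, re-deriving eps
            j = queue.pop()              # from each new point's own range
            if not visited[j]:
                visited[j] = True
                eps_j = adaptive_eps(points[j])
                neigh_j = [k for k in range(n)
                           if np.linalg.norm(points[k] - points[j]) <= eps_j]
                if len(neigh_j) >= min_pts:
                    queue.extend(k for k in neigh_j if labels[k] == -1)
            if labels[j] == -1:
                labels[j] = cluster
        cluster += 1
    return labels
```

With a fixed radius, a value tight enough to separate nearby objects tends to fragment distant, sparsely sampled ones; letting the radius track the range keeps a tight near-field cluster and a proportionally sparser far-field cluster intact with the same parameter set.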