8 November 2023 Dense mapping of large outdoor scenes based on Pseudo-LiDAR
Qingling Yue, Shuxu Jing, Shengen Li, Yixin Zhang
Proceedings Volume 12923, Third International Conference on Artificial Intelligence, Virtual Reality, and Visualization (AIVRV 2023); 129232L (2023) https://doi.org/10.1117/12.3011448
Event: 3rd International Conference on Artificial Intelligence, Virtual Reality and Visualization (AIVRV 2023), 2023, Chongqing, China
Abstract
To overcome the limitations of RGB-D-camera-based SLAM systems for dense mapping of large outdoor scenes, this paper proposes a visual SLAM dense mapping system based on Pseudo-LiDAR. First, the method requires no depth sensor: a separate module generates Pseudo-LiDAR to provide geometric information to the system. The input stereo images are fed to the depth estimation network PSMNet to produce a depth map, and the 3D world coordinates of each pixel are computed from the depth map together with the known camera parameters; the resulting point cloud is the Pseudo-LiDAR. Next, the Pseudo-LiDAR and the stereo images are passed to the SLAM system, which performs pose estimation using a surfel representation to achieve dense mapping. The approach is experimentally validated on the large outdoor KITTI dataset and demonstrates more robust dense mapping.
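The back-projection step summarized above (depth map plus a priori camera parameters, yielding a Pseudo-LiDAR point cloud) can be sketched with the standard pinhole camera model. This is an illustrative sketch, not the paper's implementation; the function name and intrinsics layout are assumptions.

```python
import numpy as np

def depth_to_pseudo_lidar(depth, fx, fy, cx, cy):
    """Back-project a dense depth map into a 3D point cloud (Pseudo-LiDAR).

    depth: (H, W) array of metric depths (e.g., from PSMNet);
    fx, fy, cx, cy: pinhole intrinsics, assumed known a priori.
    Returns an (H*W, 3) array of points in the camera frame.
    """
    h, w = depth.shape
    # Pixel coordinate grid: u runs along columns, v along rows.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx  # X = (u - cx) * Z / fx
    y = (v - cy) * z / fy  # Y = (v - cy) * Z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)
```

Each pixel with a valid depth becomes one 3D point, so the output can be consumed like a LiDAR scan by the downstream surfel-based SLAM pipeline.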
(2023) Published by SPIE. Downloading of the abstract is permitted for personal use only.
Qingling Yue, Shuxu Jing, Shengen Li, and Yixin Zhang "Dense mapping of large outdoor scenes based on Pseudo-LiDAR", Proc. SPIE 12923, Third International Conference on Artificial Intelligence, Virtual Reality, and Visualization (AIVRV 2023), 129232L (8 November 2023); https://doi.org/10.1117/12.3011448
KEYWORDS: 3D modeling, LIDAR, Point clouds, Depth maps, Pose estimation, Sensors, Visualization