Paper
1 June 2020 Self-supervised depth completion with attention-based loss
Proceedings Volume 11515, International Workshop on Advanced Imaging Technology (IWAIT) 2020; 115152T (2020) https://doi.org/10.1117/12.2566222
Event: International Workshop on Advanced Imaging Technologies 2020 (IWAIT 2020), 2020, Yogyakarta, Indonesia
Abstract
Depth completion, which predicts dense depth from sparse depth, has important applications in robotics, autonomous driving, and virtual reality. It compensates for the low accuracy of monocular depth estimation. However, previous depth completion works processed every depth pixel uniformly and ignored the statistical properties of the depth value distribution. In this paper, we propose a self-supervised framework that can generate accurate dense depth from RGB images and sparse depth without the need for dense depth labels. We propose a novel attention-based loss that takes into account the statistical properties of the depth value distribution. We evaluate our approach on the KITTI dataset. The experimental results show that our method achieves state-of-the-art performance, and an ablation study shows that our method effectively improves the accuracy of the results.
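The abstract describes a loss that reweights depth pixels according to the statistical properties of the depth value distribution, rather than treating all pixels uniformly. The paper's exact formulation is not given here; the sketch below is a hypothetical illustration of the general idea, weighting each valid pixel of a sparse depth map by the inverse frequency of its depth bin so that rare depth values contribute more to an L1 reconstruction loss. All function names and the binning scheme are assumptions for illustration only.

```python
import numpy as np

def attention_weights(depth, n_bins=10):
    """Illustrative sketch (not the paper's exact loss): weight each
    valid depth pixel by the inverse frequency of its depth bin, so
    rarely observed depth values receive more attention."""
    valid = depth > 0  # sparse LiDAR maps mark missing points with 0
    vals = depth[valid]
    hist, edges = np.histogram(vals, bins=n_bins)
    # Assign each value to its histogram bin (indices 0 .. n_bins-1).
    bin_idx = np.clip(np.digitize(vals, edges[1:-1]), 0, n_bins - 1)
    freq = hist[bin_idx] / vals.size
    w = 1.0 / (freq + 1e-6)
    weights = np.zeros_like(depth, dtype=np.float64)
    weights[valid] = w / w.mean()  # normalize so the mean weight is 1
    return weights

def attention_l1_loss(pred, target, n_bins=10):
    """Attention-weighted L1 loss over the valid (nonzero) pixels."""
    w = attention_weights(target, n_bins)
    valid = target > 0
    return float(np.mean(w[valid] * np.abs(pred[valid] - target[valid])))
```

In this sketch the normalization keeps the average pixel weight at 1, so the weighted loss stays on the same scale as a plain L1 loss while shifting emphasis toward under-represented depth ranges.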
© (2020) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Yingyu Wang, Yakun Ju, Muwei Jian, Kin-Man Lam, Lin Qia, and Junyu Dong "Self-supervised depth completion with attention-based loss", Proc. SPIE 11515, International Workshop on Advanced Imaging Technology (IWAIT) 2020, 115152T (1 June 2020); https://doi.org/10.1117/12.2566222
KEYWORDS
RGB color model
Signal attenuation
Cameras
Data acquisition
Computer programming
Convolution
Image fusion
