Learning multiscale spatial context for three-dimensional point cloud semantic segmentation
Yang Wang, Shunping Xiao
Abstract

Semantic segmentation of three-dimensional (3D) scenes is a challenging task in 3D scene understanding, and deep learning-based segmentation approaches have recently made significant progress. This work proposes an end-to-end approach for 3D point cloud semantic segmentation based on multiscale spatial context feature learning. A local feature fusion learning block is introduced into the hidden layers of the network to improve its feature learning capability, and features learned at several different layers are fused for further improvement. Based on these strategies, an end-to-end architecture for 3D point cloud semantic segmentation is designed. Experiments on three publicly available datasets demonstrate the effectiveness of the proposed network.
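The abstract describes two ideas: aggregating spatial context at multiple neighborhood scales, and fusing each point's own features with its aggregated local context. The paper's actual layer definitions are not given here, so the following NumPy sketch is only an illustrative interpretation of those ideas: for each point it max-pools neighbor features over k-nearest-neighbor sets of several sizes and concatenates the results with the point's own feature (the function names and scale values are hypothetical, not taken from the paper).

```python
import numpy as np

def knn(points, k):
    """Indices of the k nearest neighbors of each point (self included)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return np.argsort(d, axis=1)[:, :k]  # (N, k)

def multiscale_context(points, feats, scales=(4, 8, 16)):
    """Toy multiscale spatial-context aggregation with local fusion.

    points: (N, 3) xyz coordinates
    feats:  (N, C) per-point features
    Returns (N, C * (1 + len(scales))): each point's own feature
    concatenated with a max-pooled neighborhood feature per scale.
    """
    out = [feats]  # local feature kept for fusion
    for k in scales:
        idx = knn(points, k)           # (N, k) neighbor indices
        neigh = feats[idx]             # (N, k, C) gathered neighbor features
        out.append(neigh.max(axis=1))  # symmetric max-pool over neighborhood
    return np.concatenate(out, axis=-1)
```

In a trained network these pooled context vectors would be produced by learned layers rather than raw max-pooling, but the sketch shows the shape bookkeeping: larger k captures coarser spatial context, and concatenation fuses all scales with the point's local feature.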

© 2020 SPIE and IS&T 1017-9909/2020/$28.00
Yang Wang and Shunping Xiao "Learning multiscale spatial context for three-dimensional point cloud semantic segmentation," Journal of Electronic Imaging 29(6), 063005 (23 November 2020). https://doi.org/10.1117/1.JEI.29.6.063005
Received: 12 May 2020; Accepted: 26 October 2020; Published: 23 November 2020
KEYWORDS: Clouds, Image segmentation, 3D image processing, Feature extraction, RGB color model, 3D modeling, Neural networks
