The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLIII-B2-2021, 139–144, 2021
https://doi.org/10.5194/isprs-archives-XLIII-B2-2021-139-2021

28 Jun 2021

GENERATING SYNTHETIC 3D POINT SEGMENTS FOR IMPROVED CLASSIFICATION OF MOBILE LIDAR POINT CLOUDS

S. A. Chitnis, Z. Huang, and K. Khoshelham
  • University of Melbourne, Parkville, Victoria 3010, Australia

Keywords: Point Clouds, Synthetic Point Segments, Mobile Lidar, Adversarial Autoencoder, Classification

Abstract. Mobile lidar point clouds are commonly used for 3D mapping of road environments, as they provide a rich, highly detailed geometric representation of objects on and around the road. However, raw lidar point clouds lack semantic information about the type of objects, which is necessary for various applications. Existing methods for the classification of objects in mobile lidar data, including state-of-the-art deep learning methods, achieve relatively low accuracies, and a primary reason for this under-performance is the inadequacy of available 3D training samples to sufficiently train deep networks. In this paper, we propose a generative model for creating synthetic 3D point segments that can aid in improving the classification performance of mobile lidar point clouds. We train a 3D Adversarial Autoencoder (3dAAE) to generate synthetic point segments that exhibit a high resemblance to, and share similar geometric features with, real point segments. We evaluate the performance of a PointNet-like classifier trained with and without the synthetic point segments. The evaluation results support our hypothesis that augmenting the training data with synthetic samples leads to a significant improvement in classification performance. Specifically, our model achieves an F1 score of 0.94 for vehicles and pedestrians and 1.00 for traffic signs.
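The abstract mentions a PointNet-like classifier operating on point segments. The defining idea of such architectures is a shared per-point transformation followed by a symmetric (order-invariant) pooling function. The following is a minimal illustrative sketch of that idea in NumPy, not the authors' implementation: the weights are random rather than learned, and the layer sizes and three-way class split (vehicle, pedestrian, traffic sign) are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed sizes for illustration: xyz input, a 64-wide shared feature layer,
# and three classes (e.g. vehicle, pedestrian, traffic sign).
D_IN, D_FEAT, N_CLASSES = 3, 64, 3
W1 = rng.normal(0, 0.1, (D_IN, D_FEAT))      # shared per-point weights
b1 = np.zeros(D_FEAT)
W2 = rng.normal(0, 0.1, (D_FEAT, N_CLASSES)) # classifier head
b2 = np.zeros(N_CLASSES)

def pointnet_logits(points):
    """points: (N, 3) array for one segment; returns (N_CLASSES,) logits."""
    h = np.maximum(points @ W1 + b1, 0.0)  # shared MLP applied to every point
    g = h.max(axis=0)                      # max-pool: invariant to point order
    return g @ W2 + b2

segment = rng.normal(size=(128, 3))        # a random 128-point segment
logits = pointnet_logits(segment)
shuffled = segment[rng.permutation(len(segment))]
# Shuffling the points leaves the output unchanged (permutation invariance),
# which is why such a network can consume unordered lidar segments directly.
assert np.allclose(logits, pointnet_logits(shuffled))
```

The max-pool is the key design choice: because it is symmetric in its inputs, the network's prediction does not depend on the (arbitrary) order in which the lidar points are stored, and segments of different sizes map to a fixed-length feature vector.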