The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLII-2/W13, 1001–1006, 2019

05 Jun 2019

H. He, K. Khoshelham, and C. Fraser
  • Geomatics Group, Dept. of Infrastructure Engineering, University of Melbourne, Australia

Keywords: Vox-Net, SAMME, Object Recognition, Point Cloud, 3DCNN, Deep Learning, Transfer Learning

Abstract. The classification of mobile Lidar data is challenged by the complexity of objects in point clouds and the limited number of available training samples. Incomplete shapes, noise and uneven point density make feature extraction from point clouds relatively arduous. Additionally, differences in point density and in object size and shape restrict the use of labelled samples from other sources. To address this problem, we explore the possibility of improving the classification performance of a state-of-the-art deep learning method, Vox-Net, by using auxiliary training samples from a different dataset. We compare the performance of Vox-Net trained with and without the auxiliary dataset. The comparison shows that more instances are recognized in classes for which auxiliary data are available. At the same time, performance in classes without complementary data can deteriorate owing to the low number of samples in these categories. To balance the performance across categories, we further replace the classification layer of Vox-Net with AdaBoost. The AdaBoost classifier shows good recognition ability in classes with few instances but decreases the overall accuracy.
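The final step described in the abstract, replacing Vox-Net's classification layer with a multi-class AdaBoost (SAMME) classifier, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the decision-stump weak learners, the round count and the toy feature matrix (a stand-in for activations a trained Vox-Net would produce) are illustrative assumptions.

```python
import numpy as np

def fit_stump(X, y, w, n_classes):
    """Weighted decision stump: try every (feature, threshold) split and
    let each side predict its weighted-majority class."""
    best, best_err = None, np.inf
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            left = X[:, f] <= t
            sides = []
            for mask in (left, ~left):
                if mask.any():
                    counts = np.bincount(y[mask], weights=w[mask],
                                         minlength=n_classes)
                    sides.append(int(counts.argmax()))
                else:
                    sides.append(0)
            pred = np.where(left, sides[0], sides[1])
            err = w[pred != y].sum()
            if err < best_err:
                best, best_err = (f, t, sides[0], sides[1]), err
    return best

def stump_predict(stump, X):
    f, t, c_left, c_right = stump
    return np.where(X[:, f] <= t, c_left, c_right)

def samme_fit(X, y, n_classes, n_rounds=10):
    """Multi-class AdaBoost (SAMME): reweight samples each round and keep
    a stage weight alpha = log((1-err)/err) + log(K-1) per weak learner."""
    w = np.full(len(y), 1.0 / len(y))
    ensemble = []
    for _ in range(n_rounds):
        stump = fit_stump(X, y, w, n_classes)
        miss = stump_predict(stump, X) != y
        err = max(w[miss].sum(), 1e-12)
        if err >= 1.0 - 1.0 / n_classes:   # no better than random guessing
            break
        alpha = np.log((1.0 - err) / err) + np.log(n_classes - 1.0)
        ensemble.append((alpha, stump))
        w = w * np.exp(alpha * miss)       # up-weight misclassified samples
        w /= w.sum()
    return ensemble

def samme_predict(ensemble, X, n_classes):
    """Each weak learner casts an alpha-weighted vote for one class."""
    votes = np.zeros((len(X), n_classes))
    for alpha, stump in ensemble:
        votes[np.arange(len(X)), stump_predict(stump, X)] += alpha
    return votes.argmax(axis=1)

# Toy stand-in features: three well-separated classes on one dimension.
X = np.repeat([[0.0], [5.0], [10.0]], 10, axis=0)
y = np.repeat([0, 1, 2], 10)
model = samme_fit(X, y, n_classes=3, n_rounds=10)
acc = (samme_predict(model, X, 3) == y).mean()
```

Because each stump can only output two of the three classes, no single round separates this data; the boosted combination of alpha-weighted votes does, which is the property the paper exploits for under-represented categories.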