The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLIII-B2-2020, 339–346, 2020
https://doi.org/10.5194/isprs-archives-XLIII-B2-2020-339-2020

12 Aug 2020

TAILORED FEATURES FOR SEMANTIC SEGMENTATION WITH A DGCNN USING FREE TRAINING SAMPLES OF A COLORED AIRBORNE POINT CLOUD

E. Widyaningrum1,3, M. K. Fajari2,3, R. C. Lindenbergh1, and M. Hahn2
  • 1Dept. of Geoscience and Remote Sensing, Delft University of Technology, The Netherlands
  • 2Photogrammetry and Geoinformatics, Faculty of Geomatics, Computer Science and Mathematics, Hochschule für Technik Stuttgart, Germany
  • 3Centre for Topographic Base Mapping and Toponyms, Geospatial Information Agency, Indonesia

Keywords: airborne point cloud, aerial photos, semantic segmentation, feature combinations, DGCNN

Abstract. Automation of 3D LiDAR point cloud processing is expected to increase the production rate of many applications, including automatic map generation. Fast development of high-end hardware has boosted the expansion of deep learning research for 3D classification and segmentation. However, deep learning requires a large amount of high-quality training samples. The generation of training samples for accurate classification results, especially for airborne point cloud data, is still problematic. Moreover, it is still unclear which customized features are best suited for segmenting airborne point cloud data. This paper proposes semi-automatic point cloud labelling and examines the potential of combining different tailor-made features for pointwise semantic segmentation of an airborne point cloud. We implement a Dynamic Graph CNN (DGCNN) approach to classify airborne point cloud data into four land cover classes: bare-land, trees, buildings and roads. The DGCNN architecture is chosen as this network combines two approaches, PointNet and graph CNNs, to exploit the geometric relationships between points. For the experiments, we train the DGCNN on an airborne point cloud and a co-aligned orthophoto of the Surabaya city area of Indonesia using three different tailor-made feature combinations: points with RGB (Red, Green, Blue) color, points with the original LiDAR features (Intensity, Return number, Number of returns), so-called IRN, and points with two spectral colors and Intensity (Red, Green, Intensity), so-called RGI. The overall accuracy on the testing area indicates that using RGB information gives the best segmentation results at 81.05%, while IRN and RGI give accuracy values of 76.13% and 79.81%, respectively.
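The three tailor-made feature combinations described in the abstract can be illustrated as simple per-point feature assembly. The sketch below is a minimal, hypothetical illustration (the function name, array layout, and attribute names are assumptions, not the authors' code): each point contributes its XYZ coordinates plus three extra channels, chosen according to the RGB, IRN, or RGI combination, yielding a six-dimensional input per point for a network such as DGCNN.

```python
import numpy as np

def build_features(xyz, rgb, intensity, return_number, num_returns, combo="RGB"):
    """Assemble one of three per-point feature combinations (illustrative only).

    xyz           : (N, 3) point coordinates
    rgb           : (N, 3) colors sampled from the co-aligned orthophoto
    intensity     : (N,)   LiDAR return intensity
    return_number : (N,)   return number of each point
    num_returns   : (N,)   total number of returns of each pulse
    combo         : "RGB", "IRN", or "RGI"
    """
    if combo == "RGB":
        # Spectral colors only: Red, Green, Blue
        extra = rgb
    elif combo == "IRN":
        # Original LiDAR features: Intensity, Return number, Number of returns
        extra = np.stack([intensity, return_number, num_returns], axis=1)
    elif combo == "RGI":
        # Two spectral colors plus Intensity: Red, Green, Intensity
        extra = np.stack([rgb[:, 0], rgb[:, 1], intensity], axis=1)
    else:
        raise ValueError(f"unknown feature combination: {combo}")
    # (N, 6) array: coordinates plus the chosen three feature channels
    return np.hstack([xyz, extra])
```

In this sketch the network input dimensionality stays fixed at six for all three combinations, so the same architecture can be trained on each variant and the results compared directly.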