Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLII-1, 347-354, 2018
https://doi.org/10.5194/isprs-archives-XLII-1-347-2018
© Author(s) 2018. This work is distributed under
the Creative Commons Attribution 4.0 License.

26 Sep 2018

EXPLORING ALS AND DIM DATA FOR SEMANTIC SEGMENTATION USING CNNS

F. Politz and M. Sester
Institute of Cartography and Geoinformatics, Leibniz University Hannover, Germany

Keywords: Airborne Laser Scanning, Dense Image Matching, CNN, Encoder-Decoder Network, Semantic Segmentation, Point Cloud

Abstract. Over the past years, the algorithms for dense image matching (DIM) used to obtain point clouds from aerial images have improved significantly. Consequently, DIM point clouds are now a good alternative to the established Airborne Laser Scanning (ALS) point clouds for remote sensing applications. In order to derive high-level products such as digital terrain models or city models, each point within a point cloud must be assigned a class label. ALS and DIM point clouds are usually labelled with different classifiers due to their differing characteristics. In this work, we explore both point cloud types with a fully convolutional encoder-decoder network, which learns to classify ALS as well as DIM point clouds. As input, we project the point clouds onto a 2D image raster and calculate the minimal, average and maximal height value for each raster cell. The network then differentiates between the classes ground, non-ground, building and no data. We test our network in six training setups: using only one point cloud type, using both point clouds, and several transfer-learning approaches. We compare all results quantitatively and qualitatively and discuss the advantages and disadvantages of each setup. The best network achieves an overall accuracy of 96% on the ALS and 83% on the DIM test set.
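As a rough illustration of the input representation described in the abstract, the following sketch projects a point cloud onto a regular 2D raster and computes the minimal, average and maximal height per cell, with a no-data value for empty cells. The cell size, no-data value and function name are assumptions for illustration only; the abstract does not specify the exact gridding parameters used in the paper.

    import numpy as np

    def rasterize_point_cloud(points, cell_size=1.0, no_data=-9999.0):
        # points: (N, 3) array of x, y, z coordinates (cell_size and
        # no_data are illustrative assumptions, not the paper's values).
        x, y, z = points[:, 0], points[:, 1], points[:, 2]

        # Map each point to a raster cell index
        col = ((x - x.min()) / cell_size).astype(int)
        row = ((y - y.min()) / cell_size).astype(int)
        height, width = row.max() + 1, col.max() + 1

        z_min = np.full((height, width), np.inf)
        z_max = np.full((height, width), -np.inf)
        z_sum = np.zeros((height, width))
        count = np.zeros((height, width))

        # Accumulate per-cell height statistics
        np.minimum.at(z_min, (row, col), z)
        np.maximum.at(z_max, (row, col), z)
        np.add.at(z_sum, (row, col), z)
        np.add.at(count, (row, col), 1)

        # Cells without any points receive the no-data value
        empty = count == 0
        z_mean = np.where(empty, no_data, z_sum / np.maximum(count, 1))
        z_min[empty] = no_data
        z_max[empty] = no_data

        # Three-channel raster: [min height, mean height, max height]
        return np.stack([z_min, z_mean, z_max], axis=-1)

The resulting three-channel height image can then serve as input to a fully convolutional encoder-decoder network that predicts one of the four classes (ground, non-ground, building, no data) per cell.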
