The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Volume XLII-2/W13
https://doi.org/10.5194/isprs-archives-XLII-2-W13-35-2019
04 Jun 2019

SEMANTIC SEGMENTATION OF BUILDING IN AIRBORNE IMAGES

S. Huang, F. Nex, Y. Lin, and M. Y. Yang

Keywords: Buildings, Semantic Segmentation, Deep learning, 3D features

Abstract. Buildings are a key component in the reconstruction of LoD3 city models. Compared to terrestrial views, airborne datasets suffer from more occlusions at street level but can cover larger urban areas. With the popularity of deep learning, many computer vision tasks can be solved more easily and efficiently. In this paper, we propose a method to apply deep neural networks to building façade segmentation. In particular, the FC-DenseNet and DeepLabV3+ algorithms are used to segment buildings in airborne images and extract semantic information such as wall, roof, balcony and opening areas. Patch-wise segmentation is used in both training and testing in order to obtain information at the pixel level. Different typologies of input have been considered: besides conventional 2D information (i.e. RGB images), we combined 2D information with 3D features extracted from dense image matching point clouds to improve segmentation performance. Results show that FC-DenseNet trained with 2D and 3D features achieves the best result, with an IoU of up to 64.41%, an increase of 5.13% over the same model trained without 3D features.
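
The abstract describes the core input-fusion idea: RGB channels are stacked with per-pixel 3D features derived from the dense image matching point cloud, and the network is trained and tested patch by patch. The following sketch (in PyTorch) illustrates one plausible way such a fused input and patch extraction could be set up; the channel layout, patch size, class count and the tiny stand-in network are illustrative assumptions, not the authors' implementation of FC-DenseNet or DeepLabV3+.

    # Minimal sketch (assumptions): fuse RGB with rasterised 3D features and
    # extract fixed-size patches for patch-wise semantic segmentation.
    import torch
    import torch.nn as nn

    def fuse_inputs(rgb, feats3d):
        """Stack RGB (3 x H x W, in [0,1]) with rasterised 3D feature maps
        (K x H x W), e.g. normalised height or surface-normal components."""
        return torch.cat([rgb, feats3d], dim=0)        # (3+K) x H x W

    def extract_patches(image, label, size=256, stride=256):
        """Cut the fused image and its label map into patch pairs."""
        _, h, w = image.shape
        patches = []
        for y in range(0, h - size + 1, stride):
            for x in range(0, w - size + 1, stride):
                patches.append((image[:, y:y+size, x:x+size],
                                label[y:y+size, x:x+size]))
        return patches

    # Stand-in for a real segmentation backbone (FC-DenseNet / DeepLabV3+):
    # a single conv layer mapping fused channels to per-pixel class scores.
    class TinySegNet(nn.Module):
        def __init__(self, in_ch, n_classes):
            super().__init__()
            self.head = nn.Conv2d(in_ch, n_classes, kernel_size=3, padding=1)

        def forward(self, x):
            return self.head(x)                        # B x C x H x W logits

    if __name__ == "__main__":
        rgb = torch.rand(3, 512, 512)                  # dummy airborne image
        feats3d = torch.rand(2, 512, 512)              # dummy 3D feature rasters
        label = torch.randint(0, 5, (512, 512))        # wall/roof/balcony/opening/background
        fused = fuse_inputs(rgb, feats3d)
        patches = extract_patches(fused, label)
        model = TinySegNet(in_ch=fused.shape[0], n_classes=5)
        x, y = patches[0]
        logits = model(x.unsqueeze(0))
        loss = nn.functional.cross_entropy(logits, y.unsqueeze(0))
        print(fused.shape, len(patches), logits.shape, float(loss))

In this reading, adding 3D feature channels only changes the number of input channels of the first layer; the rest of the segmentation network can stay the same, which is consistent with comparing the same model trained with and without 3D features.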