Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLII-2/W9, 685-690, 2019
https://doi.org/10.5194/isprs-archives-XLII-2-W9-685-2019
© Author(s) 2019. This work is distributed under
the Creative Commons Attribution 4.0 License.

31 Jan 2019

SEMANTIC PHOTOGRAMMETRY – BOOSTING IMAGE-BASED 3D RECONSTRUCTION WITH SEMANTIC LABELING

E.-K. Stathopoulou and F. Remondino
  • 3D Optical Metrology (3DOM) unit, Bruno Kessler Foundation (FBK), Trento, Italy

Keywords: image-based 3D reconstruction, label transfer, semantic photogrammetry, dense image matching

Abstract. Automatic semantic segmentation of images is becoming a prominent research field, with many promising and reliable solutions already available. Labelled images used as input to the photogrammetric pipeline have great potential to improve the 3D reconstruction results. To support this argument, in this work we discuss the contribution of image semantic labelling to image-based 3D reconstruction in photogrammetry. We experiment with semantic information at various steps, from feature matching to dense 3D reconstruction. Labelling in 2D is considered an easier task in terms of data availability and algorithm maturity. However, since semantic labelling of all the images involved in the reconstruction may be a costly, laborious and time-consuming task, we propose to use a deep learning architecture to automatically generate semantically segmented images. To this end, we have trained a Convolutional Neural Network (CNN) on historic building façade images, a dataset that will be further enriched in the future. The first results of this study are promising, showing improved quality of the 3D reconstruction and the possibility to transfer the labelling results from 2D to 3D.
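As a rough illustration of how semantic labels can support the matching step described above, the following minimal Python sketch (not the authors' implementation) discards feature matches whose keypoints fall on differently labelled pixels. It assumes per-image semantic masks are already available, e.g. predicted by a façade-segmentation CNN; the function name and mask format are illustrative assumptions.

import cv2
import numpy as np

def label_consistent_matches(img1, img2, mask1, mask2):
    # Keep only SIFT matches whose two keypoints share the same semantic label.
    # mask1/mask2: uint8 arrays of per-pixel class ids, same size as the images.
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)

    kept = []
    for m, n in matches:
        if m.distance > 0.8 * n.distance:      # Lowe's ratio test
            continue
        x1, y1 = map(int, kp1[m.queryIdx].pt)
        x2, y2 = map(int, kp2[m.trainIdx].pt)
        if mask1[y1, x1] == mask2[y2, x2]:     # semantic consistency check
            kept.append(m)
    return kp1, kp2, kept

Such label-consistent matches can then be passed on to the usual geometric verification and dense reconstruction steps, which is one simple way semantic information may be injected into the pipeline.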

Please read the corrigendum before accessing the conference paper.