BUILDING CLASSIFICATION OF VHR AIRBORNE STEREO IMAGES USING FULLY CONVOLUTIONAL NETWORKS AND FREE TRAINING SAMPLES
- 1Dept. Computer Science, Delft University of Technology, The Netherlands
- 2Dept. Urbanism, Delft University of Technology, The Netherlands
- 3Dept. of Geoscience and Remote Sensing, Delft University of Technology, The Netherlands
- 4Dept. OTB, Delft University of Technology, The Netherlands
Keywords: Building Classification, VHR Airborne Stereo Images, FCN, Base Map, Mislabels, Free Training Samples, Fine Tuning, Atrous Convolution
Abstract. Semantic segmentation of very high resolution (VHR) airborne images, especially for buildings, is an important task in urban mapping applications. Deep learning has advanced significantly in recent years and is now widely applied in computer vision. Fully Convolutional Networks (FCNs) are among the most popular methods due to their good performance and high computational efficiency. However, the state-of-the-art results of deep networks depend on training with large-scale benchmark datasets. Unfortunately, VHR image benchmarks are limited and generalize poorly to other areas of interest. Since high-precision base maps are readily available and objects in urban areas do not change dramatically, map information can be used to label images and generate training samples. Apart from object changes caused by the time difference between map and image acquisition, maps often do not match images perfectly. In this study, the main sources of mislabeling (relief displacement, representation differences between the base map and the image, and occluded areas in the image) are considered and addressed by utilizing stereo images. These free training samples are then fed to a pre-trained FCN. To find the best configuration, we applied fine-tuning with different learning rates while freezing different layers. We further improved the results by introducing atrous convolution. Using free training samples, we achieve a promising building classification with 85.6 % overall accuracy and an 83.77 % F1 score, while the result on the ISPRS benchmark using manual labels reaches 92.02 % overall accuracy and an 84.06 % F1 score; the gap is attributable to the building complexity in our study area.
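To illustrate the atrous convolution mentioned above, the following is a minimal NumPy sketch (not the paper's actual implementation, which uses a pre-trained FCN): the kernel taps are spread apart by a dilation rate, enlarging the receptive field without adding parameters.

```python
import numpy as np

def atrous_conv2d(image, kernel, rate=1):
    """2D atrous (dilated) convolution in 'valid' mode.

    Gaps of (rate - 1) pixels are left between kernel taps, so the
    effective receptive field grows with `rate` while the number of
    kernel parameters stays the same.
    """
    kh, kw = kernel.shape
    # Effective (dilated) kernel extent
    eh = kh + (kh - 1) * (rate - 1)
    ew = kw + (kw - 1) * (rate - 1)
    H, W = image.shape
    out = np.zeros((H - eh + 1, W - ew + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Strided slicing picks only the dilated tap positions
            patch = image[i:i + eh:rate, j:j + ew:rate]
            out[i, j] = np.sum(patch * kernel)
    return out
```

For example, a 3x3 kernel with rate 2 covers a 5x5 window, so on a 5x5 image of ones it produces a single output equal to the kernel's tap count, 9.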