Volume XLII-3
Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLII-3, 79-85, 2018
https://doi.org/10.5194/isprs-archives-XLII-3-79-2018
© Author(s) 2018. This work is distributed under
the Creative Commons Attribution 4.0 License.

30 Apr 2018

EXTRACTION OF BUILT-UP AREAS USING CONVOLUTIONAL NEURAL NETWORKS AND TRANSFER LEARNING FROM SENTINEL-2 SATELLITE IMAGES

V. S. Bramhe, S. K. Ghosh, and P. K. Garg
Geomatics Engineering Group, Civil Engineering Department, IIT Roorkee, 247667, India

Keywords: Built-up Area Extraction, Convolutional Neural Networks, Deep Learning, Sentinel-2 Images, Transfer Learning

Abstract. With rapid globalization, the extent of built-up areas is continuously increasing. Extraction of robust and abstract features for classifying built-up areas has been a leading research topic for many years. Various studies have combined spatial information with spectral features to enhance classification accuracy; however, these feature extraction techniques require a large number of user-specific parameters and are generally application specific. Recently introduced Deep Learning (DL) techniques, on the other hand, require fewer parameters and represent more abstract aspects of the data without manual effort. Since it is difficult to acquire high-resolution datasets for applications that require large-scale monitoring, Sentinel-2 imagery has been used in this study for built-up area extraction. Pre-trained Convolutional Neural Networks (ConvNets), namely Inception-v3 and VGGNet, are employed for transfer learning. Because these networks are trained on generic images from the ImageNet dataset, which have very different characteristics from satellite images, the network weights are fine-tuned using data derived from Sentinel-2 images. To compare accuracies with existing shallow methods, two state-of-the-art classifiers, a Gaussian Support Vector Machine (SVM) and a Back-Propagation Neural Network (BP-NN), are also implemented. The SVM and BP-NN give overall accuracies of 84.31 % and 82.86 %, respectively, whereas the fine-tuned VGGNet achieves 89.43 % and Inception-v3 achieves 92.10 %. The results indicate high accuracy of the proposed fine-tuned ConvNets on a 4-channel Sentinel-2 dataset for built-up area extraction.