The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Articles | Volume XLIII-B2-2020
https://doi.org/10.5194/isprs-archives-XLIII-B2-2020-1513-2020
14 Aug 2020

AN EXPLAINABLE CONVOLUTIONAL AUTOENCODER MODEL FOR UNSUPERVISED CHANGE DETECTION

L. Bergamasco, S. Saha, F. Bovolo, and L. Bruzzone

Keywords: Multi-temporal Analysis, Change Detection, Deep Learning, Transfer Learning, Autoencoder, Explainable Artificial Intelligence

Abstract. Transfer learning methods reuse a deep learning model developed for one task on another task. Such methods have been remarkably successful in a wide range of image processing applications. Following this trend, a few transfer learning based methods have been proposed for unsupervised multi-temporal image analysis and change detection (CD). In spite of their success, transfer learning based CD methods suffer from limited explainability. In this paper, we propose an explainable convolutional autoencoder model for CD. The model is trained: 1) in an unsupervised way, using as input patches extracted from the same geographic location in the bi-temporal images; 2) in a greedy fashion, one encoder and decoder layer pair at a time. A number of features relevant for CD are chosen from the encoder layer. To build an explainable model, only the selected features from the encoder layer are retained and the rest are discarded. Following this, another encoder and decoder layer pair is added to the model in a similar fashion until convergence. We further visualize the features to better interpret what the model has learned. We validated the proposed method on a Landsat-8 dataset acquired over Spain. With a set of experiments, we demonstrate the explainability and effectiveness of the proposed model.
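
The following is a minimal sketch, not the authors' implementation, of the greedy layer-wise training and feature-selection idea described in the abstract. It assumes PyTorch, a hypothetical iterable of bi-temporal patch batches, and an illustrative relevance criterion (per-channel activation difference between the two dates); the paper may use a different criterion and architecture details.

```python
# Sketch of greedy layer-wise convolutional autoencoder training with feature
# selection (illustrative only; assumes PyTorch and a hypothetical data loader
# yielding patch batches from the two acquisition dates).
import torch
import torch.nn as nn


def train_layer_pair(in_ch, out_ch, patch_batches, epochs=10, lr=1e-3):
    """Train one encoder/decoder convolutional layer pair to reconstruct its input."""
    enc = nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU())
    dec = nn.Conv2d(out_ch, in_ch, 3, padding=1)
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for x in patch_batches:              # x: batch of patches (either date)
            opt.zero_grad()
            loss = loss_fn(dec(enc(x)), x)   # unsupervised reconstruction loss
            loss.backward()
            opt.step()
    return enc, dec


def select_features(enc, t1, t2, k):
    """Keep the k encoder channels whose activations differ most between the
    two dates (one plausible CD relevance score, used here for illustration)."""
    with torch.no_grad():
        score = (enc(t1) - enc(t2)).abs().mean(dim=(0, 2, 3))  # per-channel score
    return torch.topk(score, k).indices      # indices of retained features
```

In this sketch, after a layer pair is trained, only the channels returned by `select_features` would be kept as input to the next encoder/decoder pair, and the procedure repeats until convergence, mirroring the greedy construction outlined above.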