The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Articles | Volume XLIII-B3-2021
Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLIII-B3-2021, 243–249, 2021
https://doi.org/10.5194/isprs-archives-XLIII-B3-2021-243-2021

28 Jun 2021

FUSING MULTI-MODAL DATA FOR SUPERVISED CHANGE DETECTION

P. Ebel1, S. Saha1, and X. X. Zhu1,2
  • 1Data Science in Earth Observation (SiPEO), Technical University of Munich (TUM), Munich, Germany
  • 2Remote Sensing Technology Institute (IMF), German Aerospace Center (DLR), Wessling, Germany

Keywords: change detection, multi-modal, fusion, synthetic aperture radar (SAR), optical, deep learning

Abstract. With the rapid development of remote sensing technology over the last decade, different modalities of remote sensing data recorded by a variety of sensors are now easily accessible. Different sensors often provide complementary information, so a more detailed and accurate observation of the Earth is possible by integrating their joint information. While change detection methods have traditionally been proposed for homogeneous data, combining multi-sensor multi-temporal data with different characteristics and resolutions may provide a more robust interpretation of spatio-temporal evolution. However, integrating multi-temporal information from disparate sensory sources is challenging. Moreover, research in this direction is often hindered by a lack of available multi-modal data sets. To address these shortcomings, we curate a novel data set for multi-modal change detection. We further propose a novel Siamese architecture for the fusion of SAR and optical observations for multi-modal change detection, which underlines the value of our newly gathered data. An experimental validation on the aforementioned data set demonstrates the potential of the proposed model, which outperforms the mono-modal baselines it is compared against.
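The abstract's core idea can be illustrated schematically: a Siamese scheme applies the same (weight-shared) encoder per modality to both acquisition dates, fuses the per-date SAR and optical features, and scores change as the distance between the two fused feature vectors. The sketch below is a minimal NumPy toy, not the paper's architecture: the "encoder" is a single linear projection standing in for a convolutional branch, and all names, patch sizes, and the concatenation-plus-distance fusion are illustrative assumptions.

```python
import numpy as np

def encode(patch, weights):
    # Toy "encoder": one linear projection of a flattened patch, followed
    # by a tanh nonlinearity. Stands in for a real convolutional branch.
    return np.tanh(patch.reshape(-1) @ weights)

def change_score(sar_t1, sar_t2, opt_t1, opt_t2, w_sar, w_opt):
    # Siamese property: the SAME weights (w_sar, w_opt) encode both dates.
    # Per-date multi-modal fusion here is simple feature concatenation.
    f1 = np.concatenate([encode(sar_t1, w_sar), encode(opt_t1, w_opt)])
    f2 = np.concatenate([encode(sar_t2, w_sar), encode(opt_t2, w_opt)])
    # Large feature distance is read as "change" at this location.
    return np.linalg.norm(f1 - f2)

rng = np.random.default_rng(0)
w_sar = 0.1 * rng.normal(size=(8 * 8, 16))       # shared SAR-branch weights
w_opt = 0.1 * rng.normal(size=(8 * 8 * 3, 16))   # shared optical-branch weights

sar_t1 = rng.normal(size=(8, 8))        # single-channel SAR patch, date 1
opt_t1 = rng.normal(size=(8, 8, 3))     # 3-channel optical patch, date 1
sar_t2 = sar_t1 + rng.normal(size=(8, 8))      # simulated change at date 2
opt_t2 = opt_t1 + rng.normal(size=(8, 8, 3))

score_same = change_score(sar_t1, sar_t1, opt_t1, opt_t1, w_sar, w_opt)
score_diff = change_score(sar_t1, sar_t2, opt_t1, opt_t2, w_sar, w_opt)
print(score_same, score_diff)  # identical inputs score 0; changed inputs score > 0
```

In a trained network the encoders would be learned so that nuisance differences (speckle, illumination) map to small distances and genuine land-cover changes map to large ones; the weight sharing across dates is what makes the comparison meaningful.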