Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLII-2, 1045-1051, 2018
https://doi.org/10.5194/isprs-archives-XLII-2-1045-2018
© Author(s) 2018. This work is distributed under
the Creative Commons Attribution 4.0 License.

30 May 2018

COLORIZING SENTINEL-1 SAR IMAGES USING A VARIATIONAL AUTOENCODER CONDITIONED ON SENTINEL-2 IMAGERY

M. Schmitt1, L. H. Hughes1, M. Körner2, and X. X. Zhu1,3
  • 1Signal Processing in Earth Observation, Technical University of Munich (TUM), Munich, Germany
  • 2Chair of Remote Sensing Technology, Technical University of Munich (TUM), Munich, Germany
  • 3Remote Sensing Technology Institute (IMF), German Aerospace Center (DLR), Oberpfaffenhofen, Germany

Keywords: Synthetic aperture radar (SAR), optical remote sensing, Sentinel-1, Sentinel-2, deep learning, data fusion

Abstract. In this paper, we present an approach for the automatic colorization of SAR backscatter images, which are usually provided as single-channel gray-scale imagery. Using a deep generative model originally proposed for photograph colorization and a Lab-space-based SAR-optical image fusion formulation, we are able to predict artificial color SAR images that disclose much more information to the human interpreter than the original SAR data. Future work will aim at further adaptation of the employed procedure to our special case of multi-sensor remote sensing imagery. Furthermore, we will investigate whether the low-level representations learned intrinsically by the deep network can be used for SAR image interpretation in an end-to-end manner.
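
To illustrate the Lab-space fusion formulation mentioned in the abstract, the following is a minimal Python sketch (not the authors' code): the SAR backscatter image is taken as the lightness (L) channel, while a generative network, represented here by a hypothetical predict_ab callable standing in for the conditional variational autoencoder, supplies the a/b chrominance channels. Function names and value ranges are assumptions for illustration only.

import numpy as np
from skimage.color import lab2rgb

def colorize_sar(sar_gray: np.ndarray, predict_ab) -> np.ndarray:
    """Fuse a single-channel SAR image with predicted a/b channels in Lab space.

    sar_gray   : 2-D array of SAR backscatter values, scaled to [0, 1].
    predict_ab : callable returning an (H, W, 2) array of a/b values
                 (roughly in [-128, 127]) for the given gray-scale input.
    """
    # Use the SAR backscatter as the lightness channel (L in [0, 100]).
    L = np.clip(sar_gray, 0.0, 1.0) * 100.0
    # Let the generative model predict the two chrominance channels.
    ab = predict_ab(sar_gray)
    # Stack into a Lab image and convert to RGB for visualization.
    lab = np.dstack([L, ab[..., 0], ab[..., 1]])
    return lab2rgb(lab)

# Example with a dummy predictor that outputs neutral (gray) colors:
if __name__ == "__main__":
    sar = np.random.rand(64, 64)  # stand-in SAR patch
    rgb = colorize_sar(sar, lambda x: np.zeros(x.shape + (2,)))
    print(rgb.shape, rgb.min(), rgb.max())

In the paper's setting, predict_ab would be the decoder of the variational autoencoder trained on co-registered Sentinel-1/Sentinel-2 pairs, with the Sentinel-2 image converted to Lab space to provide the target a/b channels during training.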