The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Articles | Volume XLIII-B3-2021
Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLIII-B3-2021, 823–828, 2021
https://doi.org/10.5194/isprs-archives-XLIII-B3-2021-823-2021

29 Jun 2021

EVALUATION OF SAR TO OPTICAL IMAGE TRANSLATION USING CONDITIONAL GENERATIVE ADVERSARIAL NETWORK FOR CLOUD REMOVAL IN A CROP DATASET

L. E. Christovam1, M. H. Shimabukuro1,2, M. L. B. T. Galo1,3, and E. Honkavaara4
  • 1Graduate Program in Cartographic Sciences, São Paulo State University, Brazil
  • 2Dept. of Mathematics and Computer Science, São Paulo State University, Brazil
  • 3Dept. of Cartography, São Paulo State University, Brazil
  • 4Dept. of Remote Sensing and Photogrammetry, Finnish Geospatial Research Institute in National Land Survey of Finland, Finland

Keywords: cGAN, SAR-to-Optical, Remote Sensing, Image-to-Image, Image Translation, Pix2Pix, Synthetic Images, Sentinel-2

Abstract. Most methods developed to map crop fields with high quality are based on optical image time-series. However, the accuracy of these approaches often deteriorates due to clouds and cloud shadows, which can decrease the availability of the optical data required to represent crop phenological stages. The objective of this study was therefore to implement and evaluate the conditional Generative Adversarial Network (cGAN), which has been indicated as a potential tool for cloud and cloud shadow removal, and to compare it with the Whittaker Smoother (WS), a well-known data cleaning algorithm. The dataset used to train and assess the methods was the Luis Eduardo Magalhães benchmark for tropical agricultural remote sensing applications. We selected one MSI/Sentinel-2 and C-SAR/Sentinel-1 image pair acquired on dates as close as possible. A total of 5000 image pair patches were generated to train the cGAN model, which was used to derive synthetic optical pixels for a testing area. Visual analysis, spectral behaviour comparison, and classification were used to evaluate and compare the pixels generated with the cGAN and WS against the pixel values from the real image. The cGAN provided consistent pixel values for most crop types in comparison to the real pixel values and significantly outperformed the WS. The results indicated that the cGAN has potential to fill cloud and cloud shadow gaps in optical image time-series.