The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLIII-B3-2021, 273–278, 2021
https://doi.org/10.5194/isprs-archives-XLIII-B3-2021-273-2021

28 Jun 2021

HIGH-RESOLUTION URBAN MAPPING BY FUSION OF SAR AND OPTICAL DATA

N. Petrushevsky, M. Manzoni, and A. M. Guarnieri
  • Politecnico di Milano, Milan, Italy

Keywords: Synthetic Aperture Radar, Optical, Land Cover Classification, Segmentation, Machine Learning

Abstract. Mapping the exact extent of urban areas is a critical prerequisite in many remote sensing applications, such as hazard evaluation and change detection. The use of Synthetic Aperture Radar (SAR) data has gained popularity due to the unique characteristics of the backscattered radio signal from human-made targets. The Sentinel-1 (S1) constellation, with a global revisit time of 6–12 days in Interferometric Wide Swath (IW) mode and free and open access to the data, allows the development of new applications to monitor urban sites. However, S1 is rarely considered when fine resolution is required, due to the large pixel size and the need for spatial averaging to obtain robust estimators. We propose a method to improve Sentinel-1 urban classification performance by exploiting one Multi-Spectral (MS) image acquired by Sentinel-2 (S2). The MS data are used to trace the precise natural boundaries in a scene through superpixel segmentation. A machine learning approach is then applied to interpret the thematic context of each segment from short temporal stacks of coregistered SAR data. We use a short sensing period (around two months) so that rapid changes can be traced. The proposed fusion of S1 and S2 data was tested over the area of Milan (Italy), with an overall accuracy of about 90%. The ability to follow high-resolution details in a mixed environment is demonstrated, opening the possibility of efficiently tracing the human footprint.