The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Articles | Volume XLIII-B1-2020
Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLIII-B1-2020, 573–578, 2020
https://doi.org/10.5194/isprs-archives-XLIII-B1-2020-573-2020

06 Aug 2020

APPLICATION OF U-NET CONVOLUTIONAL NEURAL NETWORK TO BUSHFIRE MONITORING IN AUSTRALIA WITH SENTINEL-1/-2 DATA

I. K. Lee1, J. C. Trinder1, and A. Sowmya2
  • 1The Surveying and Geospatial Engineering (SAGE) Research Group, School of Civil and Environmental Engineering, The University of New South Wales, Australia
  • 2School of Computer Science and Engineering, The University of New South Wales, Australia

Keywords: Bushfires, SAR, Sentinel-1/-2, Polarization, U-Net, Semantic segmentation, Deep learning, Data Cube

Abstract. This paper defines a pipeline architecture for near real-time identification of bushfire impact areas using the Australian Geoscience Data Cube (AGDC). A series of catastrophic bushfires from late 2019 to early 2020 captured international attention with their scale of devastation across four of the most populous states of Australia: New South Wales, Queensland, Victoria and South Australia. The extraction of burned areas from multispectral Sentinel-2 observations is straightforward when no cloud or haze obstruction is present; without clear-sky observations, however, precisely locating the bushfire-affected regions is difficult. Sentinel-1 C-band dual-polarised (VH/VV) Synthetic Aperture Radar (SAR) data is therefore introduced to extract and analyse useful information from backscattering coefficients, which are unaffected by adverse weather conditions and lack of sunlight. Burned vegetation produces significant changes in volume scattering: the co-/cross-polarised response decreases as trees lose their leaves, coherence changes over fire-disturbed areas, and the two sensors acquire images over the same affected areas within a shortened revisit time; all of these provide discriminative features for identifying burnt areas. Moreover, applying the U-Net deep learning framework to recent and historical satellite data yields an effective pre-trained segmentation model of burnt and non-burnt areas, enabling more timely emergency response, more efficient hazard reduction activities, and evacuation planning during severe bushfire events. This approach could provide a more robust, timely and accurate method of bushfire detection, utilising a scalable big-data processing framework, to predict the bushfire footprint and support fire-spread model development.
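The abstract describes two complementary change signals that feed the segmentation model: a spectral burn index from pre-/post-fire Sentinel-2 reflectance, and a drop in Sentinel-1 VH backscatter as volume scattering from the canopy is lost. A minimal sketch of that pre-/post-fire differencing step is shown below; the function names, the toy arrays, and the 0.27 dNBR threshold are illustrative assumptions, not values taken from the paper (the paper itself trains a U-Net on such inputs rather than thresholding).

```python
import numpy as np

def nbr(nir, swir):
    """Normalized Burn Ratio from Sentinel-2 NIR (B8) and SWIR (B12) reflectance."""
    return (nir - swir) / (nir + swir + 1e-9)

def dnbr(pre_nir, pre_swir, post_nir, post_swir):
    """Pre-fire minus post-fire NBR; larger dNBR indicates more severe burning."""
    return nbr(pre_nir, pre_swir) - nbr(post_nir, post_swir)

def sar_log_ratio_db(pre_vh, post_vh):
    """Log-ratio of Sentinel-1 VH backscatter (linear power), in dB.
    Negative where volume scattering dropped after the fire; this signal
    is available regardless of cloud cover or sunlight."""
    return 10.0 * np.log10(post_vh / pre_vh)

# Toy 2x2 scene: left column burned, right column unchanged (assumed values).
pre_nir   = np.array([[0.30, 0.30], [0.30, 0.30]])
pre_swir  = np.array([[0.10, 0.10], [0.10, 0.10]])
post_nir  = np.array([[0.12, 0.30], [0.12, 0.30]])
post_swir = np.array([[0.25, 0.10], [0.25, 0.10]])

burn_severity = dnbr(pre_nir, pre_swir, post_nir, post_swir)
burn_mask = burn_severity > 0.27  # illustrative moderate-burn threshold

# Matching SAR evidence: VH power falls from 0.02 to 0.005 over the burn scar.
vh_change_db = sar_log_ratio_db(np.array(0.02), np.array(0.005))  # about -6 dB
```

In a full pipeline such per-pixel difference layers would be stacked as input channels for the U-Net, which learns the burnt/non-burnt decision boundary instead of relying on a fixed threshold.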