MULTI-MODAL REMOTE SENSING DATA FUSION FRAMEWORK
- 1School of Environmental and Geographical Sciences, University of Nottingham, Malaysia campus, Malaysia
- 2School of Computer Science, University of Nottingham, Malaysia campus, Malaysia
- 3Faculty of Science and Engineering, Hoa Sen University, Vietnam
Keywords: Deep Learning, Convolutional Neural Networks, Super-resolution, Crowd-sourced data, Data fusion
Abstract. The resolution mismatch between freely available remote sensing datasets and crowd-sourced data poses a significant challenge for data fusion. In classical classification problems, crowd-sourced data are represented as points that may or may not fall within the same pixel. This discrepancy can produce mixed pixels that are consequently misclassified, and it prevents inferences drawn from the data from retaining a sufficient level of detail. In this paper we propose a method that preserves detailed inferences from remote sensing datasets combined with crowd-sourced data, and we show that advanced machine learning techniques can be applied towards this objective. The proposed method comprises two steps: first, we enhance the spatial resolution of the satellite image using Convolutional Neural Networks; second, we fuse the crowd-sourced data with the upscaled version of the satellite image. The scope of this paper covers only the first step. Results show that CNNs can enhance the resolution of Landsat 8 scenes both visually and quantitatively.
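The first step described above, CNN-based single-image super-resolution, can be illustrated with a minimal forward-pass sketch in the style of a three-layer SRCNN (coarse upscaling, patch extraction, non-linear mapping, reconstruction). This is an assumption for illustration only: the kernel sizes (9-1-5) and filter counts (64/32) follow a common SRCNN configuration rather than the paper's exact architecture, the weights are random placeholders rather than trained values, and nearest-neighbour upsampling stands in for bicubic interpolation.

```python
# Sketch of an SRCNN-style super-resolution forward pass in pure NumPy.
# NOTE: weights are random placeholders; a real model would be trained on
# pairs of low- and high-resolution Landsat 8 patches.
import numpy as np

def conv2d(x, w, b):
    """Valid 2-D convolution: x is (H, W, Cin), w is (k, k, Cin, Cout)."""
    k = w.shape[0]
    H, W = x.shape[0] - k + 1, x.shape[1] - k + 1
    out = np.zeros((H, W, w.shape[3]))
    for i in range(H):
        for j in range(W):
            patch = x[i:i + k, j:j + k, :]                 # (k, k, Cin)
            out[i, j, :] = np.tensordot(patch, w, axes=3) + b
    return out

def srcnn_forward(lr_band, scale=2, seed=0):
    """Upscale one image band, then apply the three SRCNN stages."""
    rng = np.random.default_rng(seed)
    # 1. Coarse upscaling (nearest-neighbour here; bicubic in SRCNN).
    up = np.kron(lr_band, np.ones((scale, scale)))[:, :, None]
    # 2. Placeholder weights: patch extraction (9x9, 64 filters),
    #    non-linear mapping (1x1, 32 filters), reconstruction (5x5, 1 band).
    w1, b1 = rng.standard_normal((9, 9, 1, 64)) * 0.01, np.zeros(64)
    w2, b2 = rng.standard_normal((1, 1, 64, 32)) * 0.01, np.zeros(32)
    w3, b3 = rng.standard_normal((5, 5, 32, 1)) * 0.01, np.zeros(1)
    h = np.maximum(conv2d(up, w1, b1), 0)   # ReLU
    h = np.maximum(conv2d(h, w2, b2), 0)    # ReLU
    return conv2d(h, w3, b3)[:, :, 0]

sr = srcnn_forward(np.random.rand(16, 16))
print(sr.shape)  # (20, 20): 32x32 upscaled, shrunk by the valid convolutions
```

In practice the output would be evaluated against a reference high-resolution image with metrics such as PSNR or SSIM, matching the quantitative assessment mentioned in the abstract.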