Volume XLII-4/W18
Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLII-4/W18, 279–284, 2019
https://doi.org/10.5194/isprs-archives-XLII-4-W18-279-2019
© Author(s) 2019. This work is distributed under
the Creative Commons Attribution 4.0 License.

18 Oct 2019

CNN-BASED FEATURE-LEVEL FUSION OF VERY HIGH RESOLUTION AERIAL IMAGERY AND LIDAR DATA

S. Daneshtalab, H. Rastiveis, and B. Hosseiny
  • School of Surveying and Geospatial Engineering, College of Engineering, University of Tehran, Tehran, Iran

Keywords: Convolutional Neural Network (CNN), Feature Fusion, Deep Learning, Feature Extraction, Aerial Imagery, LiDAR

Abstract. Land-cover classification of Remote Sensing (RS) data in urban areas has always been a challenging task due to the complicated relations between different objects. Recently, the fusion of aerial imagery and light detection and ranging (LiDAR) data has attracted great attention in the RS community. Meanwhile, the convolutional neural network (CNN) has proven its power in extracting high-level (deep) descriptors that improve RS data classification. In this paper, a CNN-based feature-level framework is proposed to integrate LiDAR data and aerial imagery for object classification in urban areas. In our method, after generating low-level descriptors and fusing them at the feature level by layer-stacking, the proposed framework employs a novel CNN to extract spectral-spatial features for the classification process, which is performed using a fully connected multilayer perceptron (MLP) network. The experimental results revealed that the proposed deep fusion model provides about 10% improvement in overall accuracy (OA) in comparison with conventional feature-level fusion techniques.
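The layer-stacking fusion step described in the abstract can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: the channel names (RGB, normalized DSM, LiDAR intensity) and patch size are assumptions chosen to show how image and LiDAR-derived descriptors are concatenated along the channel axis before being fed to a CNN.

```python
import numpy as np

# Hypothetical input rasters on a common grid (names and sizes are illustrative).
H, W = 32, 32
rgb = np.random.rand(H, W, 3)        # very high resolution aerial image channels
ndsm = np.random.rand(H, W, 1)       # normalized DSM derived from LiDAR
intensity = np.random.rand(H, W, 1)  # LiDAR return intensity

# Feature-level fusion by layer-stacking:
# concatenate all low-level descriptors along the channel axis,
# producing one multi-channel cube that a CNN can consume directly.
fused = np.concatenate([rgb, ndsm, intensity], axis=-1)
print(fused.shape)  # (32, 32, 5)
```

The fused cube would then be passed to the CNN feature extractor, with the resulting deep features classified by an MLP, as the abstract outlines.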