Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLII-2, 1091-1096, 2018
https://doi.org/10.5194/isprs-archives-XLII-2-1091-2018
© Author(s) 2018. This work is distributed under
the Creative Commons Attribution 4.0 License.

  30 May 2018

FOREST COVER CLASSIFICATION USING GEOSPATIAL MULTIMODAL DATA

K. Suzuki, U. Rin, Y. Maeda, and H. Takeda
Dept. of R&D, KOKUSAI KOGYO CO., LTD., 2-24-1 Harumi-cho, Fuchu-shi, Tokyo, 183-0057, Japan

Keywords: Forest Cover Classification, LiDAR, Airborne Imagery, Convolutional Neural Network, Multimodal Learning

Abstract. To address climate change, accurate and automated forest cover monitoring is crucial. In this study, we propose a Convolutional Neural Network (CNN) that mimics the manual techniques of professional interpreters. Using simultaneously acquired airborne images and LiDAR data, we attempt to reproduce the 3D knowledge of tree shape that interpreters potentially draw on. Geospatial features that support interpretation are also used as inputs to the CNN. Inspired by the interpreters' techniques, we propose a unified approach that integrates these datasets in a shallow layer of the CNN. We show that the proposed multimodal CNN works robustly, achieving more than 80 % user's accuracy. We also show that the 3D multimodal approach is especially well suited to deciduous trees, thanks to its ability to capture 3D shapes.
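The shallow-layer integration described above is commonly realized as early fusion: co-registered modalities are stacked along the channel axis before (or just after) the first convolution, so that one network sees imagery, LiDAR-derived height, and geospatial features jointly. A minimal sketch of that stacking step, with purely illustrative array names and shapes (not the authors' actual pipeline):

```python
import numpy as np

# Hypothetical early ("shallow-layer") fusion of co-registered rasters.
# All shapes and feature names here are illustrative assumptions.
H, W = 64, 64
rgb = np.random.rand(H, W, 3)    # airborne ortho-image (R, G, B)
ndsm = np.random.rand(H, W, 1)   # LiDAR-derived normalized surface model (tree height)
slope = np.random.rand(H, W, 1)  # an auxiliary geospatial feature, e.g. terrain slope

# Channel-wise concatenation produces a single multimodal input tensor
# that a CNN's first convolution can consume directly.
fused = np.concatenate([rgb, ndsm, slope], axis=-1)
print(fused.shape)  # (64, 64, 5)
```

Because fusion happens before deeper feature extraction, the first convolutional filters can learn correlations between spectral appearance and 3D shape, which is the property the abstract credits for the robustness on deciduous trees.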