The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLIII-B3-2020, 363–369, 2020
https://doi.org/10.5194/isprs-archives-XLIII-B3-2020-363-2020

21 Aug 2020

STRATEGIC OPTIMIZATION OF CONVOLUTIONAL NEURAL NETWORKS FOR HYPERSPECTRAL LAND COVER CLASSIFICATION

C. Buehler1, F. Schenkel2, W. Gross2, G. Schaab1, and W. Middelmann2
  • 1Karlsruhe University of Applied Sciences, Faculty for Information Management and Media, Karlsruhe, Germany
  • 2Fraunhofer IOSB, Ettlingen, Germany

Keywords: Hyperspectral Imagery, CNN, Transfer Learning, Classification, Spectral Feature Extraction, CNN Architecture Optimization

Abstract. Hyperspectral data recorded by future earth observation satellites will comprise up to hundreds of narrow bands covering a wide range of the electromagnetic spectrum. However, the spatial resolution of such data (around 30 meters) can impede the integration of the spatial domain into a classification due to spectrally mixed pixels and blurred edges. Hence, the ability to perform a meaningful classification relying on spectral information alone is important. In this study, a model for the spectral classification of hyperspectral data is derived by strategically optimizing a convolutional neural network (1D-CNN). The model is pre-trained and optimized on imagery of different nuts, beans, peas and dried fruits recorded with the Cubert ButterflEye X2 sensor. Subsequently, airborne hyperspectral datasets (Greding, Indian Pines and Pavia University) are used to evaluate the CNN's capability for transfer learning. To this end, the datasets are classified with the pre-trained weights and, for comparison, with the same model architecture trained from scratch with random weights. The results show substantial differences in classification accuracy (from 71.8% to 99.8% overall accuracy) across the datasets, caused mainly by variations in the number of training samples, the spectral separability of the classes and, for one dataset, the presence of mixed pixels. For the dataset classified least accurately, pre-training yields the greatest improvement (a difference of 3.3% in overall accuracy compared to the non-pre-trained model). For the dataset classified most accurately, no significant benefit from transfer learning is observed.
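The spectral-only classification idea described above — a 1D-CNN that convolves along the band axis of a single pixel's spectrum, with no spatial context — can be sketched as a minimal forward pass. This is an illustrative assumption, not the architecture from the paper: the number of bands (200), filters (8), kernel width (7), and classes (5) are placeholders.

```python
import numpy as np

def conv1d(x, kernels):
    """Valid 1D convolution of a spectrum x (bands,) with kernels (n_filters, k)."""
    n_f, k = kernels.shape
    out = np.empty((n_f, x.size - k + 1))
    for f in range(n_f):
        for i in range(x.size - k + 1):
            out[f, i] = np.dot(x[i:i + k], kernels[f])
    return out

def classify_spectrum(x, kernels, w, b):
    """Per-pixel spectral classification: conv -> ReLU -> global max pool -> linear -> softmax."""
    h = np.maximum(conv1d(x, kernels), 0.0)   # ReLU activation
    pooled = h.max(axis=1)                    # global max pooling over the band axis
    logits = pooled @ w + b                   # linear classification head
    e = np.exp(logits - logits.max())         # numerically stable softmax
    return e / e.sum()

# Illustrative shapes: 200 spectral bands, 8 filters of width 7, 5 classes.
rng = np.random.default_rng(0)
spectrum = rng.random(200)
kernels = rng.standard_normal((8, 7))
w, b = rng.standard_normal((8, 5)), np.zeros(5)
probs = classify_spectrum(spectrum, kernels, w, b)
```

In the transfer-learning comparison the abstract describes, `kernels` would either be initialized from the pre-trained food-imagery model or drawn at random before training from scratch.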