AN UNSUPERVISED LABELING APPROACH FOR HYPERSPECTRAL IMAGE CLASSIFICATION
Fraunhofer IOSB, Gutleuthausstr. 1, 76275 Ettlingen, Germany
Keywords: Hyperspectral Imaging, Segmentation, Superpixel, Hierarchical Clustering, Fuzzy C-Means, Convolutional Neural Networks
Abstract. Hyperspectral image analysis for land cover classification is mainly carried out in the presence of manually labeled data. The ground truth represents the distribution of the actual classes and is mostly derived from field-recorded information. Its manual generation is inefficient, tedious and very time-consuming, and the continuously increasing amount of proprietary and publicly available datasets makes it imperative to reduce the related costs. In addition, adequately equipped computer systems are more capable of identifying patterns and neighbourhood relationships than a human operator. Based on these facts, an unsupervised labeling approach is presented that automatically generates labeled images used to train a convolutional neural network (CNN) classifier. The proposed method begins with a segmentation stage in which a version of the simple linear iterative clustering (SLIC) algorithm adapted to hyperspectral data is applied. Subsequently, the Hierarchical Agglomerative Clustering (HAC) and Fuzzy C-Means (FCM) algorithms are employed to efficiently group similar superpixels based on their mutual distances. The combined use of these two clustering techniques forms a complementary stage that overcomes class overlap during label-image generation. Finally, a CNN classifier is trained on the computed label image to predict classes pixel-wise on unseen data. The labeling results obtained on two hyperspectral benchmark datasets indicate that the approach detects object boundaries, automatically assigns class labels to the entire dataset, and classifies new data with a prediction certainty of 90%. Moreover, the method achieves better classification accuracy and closer visual correspondence with reality than the original ground truth images.
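To illustrate the superpixel-grouping stage, the core of the Fuzzy C-Means step can be sketched as a minimal NumPy routine operating on feature vectors such as mean superpixel spectra. This is an illustrative sketch of the standard FCM update rules, not the authors' implementation; the function and parameter names are assumptions.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Minimal Fuzzy C-Means sketch.

    X          : (n_samples, n_features) array, e.g. mean superpixel spectra
    n_clusters : number of fuzzy clusters
    m          : fuzzifier (> 1); m = 2 is the common default
    Returns (centres, U) where U[k, i] is the membership of sample k
    in cluster i, with each row of U summing to 1.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # Random initial membership matrix, rows normalised to sum to 1.
    U = rng.random((n, n_clusters))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m
        # Centre update: membership-weighted mean of the samples.
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Distance of every sample to every centre.
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        d = np.fmax(d, 1e-10)  # guard against division by zero
        # Membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1)).
        inv = d ** (-2.0 / (m - 1.0))
        U_new = inv / inv.sum(axis=1, keepdims=True)
        if np.linalg.norm(U_new - U) < tol:
            U = U_new
            break
        U = U_new
    return centres, U
```

Taking the arg-max over each row of `U` yields a hard cluster label per superpixel, while the soft memberships themselves can flag ambiguous superpixels that sit between classes, which is the property the complementary HAC/FCM stage exploits to handle class overlap.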