Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLI-B7, 405-410, 2016
http://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XLI-B7/405/2016/
doi:10.5194/isprs-archives-XLI-B7-405-2016
 
21 Jun 2016
CLASSIFICATION OF URBAN AERIAL DATA BASED ON PIXEL LABELLING WITH DEEP CONVOLUTIONAL NEURAL NETWORKS AND LOGISTIC REGRESSION
W. Yao, P. Polewski, and P. Krzystek
Munich University of Applied Sciences, 80333 Munich, Germany
Keywords: aerial data, urban areas, evidence combination, object classification, deep feature learning

Abstract. The recent success of deep convolutional neural networks (CNN) in a large number of applications can be attributed to large amounts of available training data and increasing computing power. In this paper, a semantic pixel labelling scheme for urban areas using multi-resolution CNN and hand-crafted spatial-spectral features of airborne remotely sensed data is presented. Both CNN and hand-crafted features are applied to image/DSM patches to produce per-pixel class probabilities with an L1-norm regularized logistic regression classifier. Evidence theory infers a degree of belief for pixel labelling from the different sources to smooth regions, handling the conflicts between the two classifiers while reducing the uncertainty. The aerial data used in this study were provided by ISPRS as benchmark datasets for 2D semantic labelling tasks in urban areas and consist of two data sources: LiDAR and a color infrared camera. The test sites are parts of a city in Germany assumed to contain typical object classes, including impervious surfaces, trees, buildings, low vegetation, vehicles and clutter. The evaluation is based on pixel-based confusion matrices computed by random sampling. The performance of the strategy with respect to scene characteristics and method combination strategies is analyzed and discussed. The competitive classification accuracy can be explained not only by the nature of the input data sources, e.g. the above-ground heights of the nDSM highlight the vertical dimension of houses, trees and even cars, while the near-infrared spectrum indicates vegetation, but is also attributed to decision-level fusion of the CNN's texture-based approach with multi-channel spatial-spectral hand-crafted features based on evidence combination theory.
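The decision-level fusion described in the abstract can be illustrated with Dempster's rule of combination. This is a minimal, hypothetical sketch (not the authors' implementation): it assumes the two classifiers each emit a per-pixel probability vector over the same classes, treats those vectors as singleton-only mass functions, and combines them by normalising away the conflicting mass. The class names and probability values are invented for illustration.

```python
import numpy as np

def dempster_combine(m1, m2):
    """Combine two basic probability assignments over the same frame of
    discernment with Dempster's rule of combination.

    For singleton-only masses, the joint mass of class i is proportional to
    m1[i] * m2[i]; the remaining mass K falls on incompatible class pairs
    (the conflict) and is normalised away.
    """
    m1 = np.asarray(m1, dtype=float)
    m2 = np.asarray(m2, dtype=float)
    joint = m1 * m2                   # agreement on each singleton class
    conflict = 1.0 - joint.sum()      # mass assigned to contradictory pairs
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return joint / (1.0 - conflict)   # renormalised fused belief

# Hypothetical per-pixel class probabilities from the two branches
p_cnn  = [0.6, 0.3, 0.1]   # CNN branch (e.g. building, tree, impervious)
p_hand = [0.5, 0.2, 0.3]   # hand-crafted spatial-spectral branch
fused = dempster_combine(p_cnn, p_hand)
label = int(np.argmax(fused))
```

Note how the fused belief for the class both sources favour exceeds either input probability, which is the smoothing effect the abstract attributes to the evidence combination step.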


Citation: Yao, W., Polewski, P., and Krzystek, P.: CLASSIFICATION OF URBAN AERIAL DATA BASED ON PIXEL LABELLING WITH DEEP CONVOLUTIONAL NEURAL NETWORKS AND LOGISTIC REGRESSION, Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLI-B7, 405-410, doi:10.5194/isprs-archives-XLI-B7-405-2016, 2016.
