Volume XLII-3
Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLII-3, 2009-2015, 2018
https://doi.org/10.5194/isprs-archives-XLII-3-2009-2018
© Author(s) 2018. This work is distributed under
the Creative Commons Attribution 4.0 License.

  30 Apr 2018

GENERATION OF GROUND TRUTH DATASETS FOR THE ANALYSIS OF 3D POINT CLOUDS IN URBAN SCENES ACQUIRED VIA DIFFERENT SENSORS

Y. Xu1, Z. Sun1, R. Boerner1, T. Koch2, L. Hoegner1, and U. Stilla1
  • 1Photogrammetry and Remote Sensing, Technische Universität München, 80333 Munich, Germany
  • 2Remote Sensing Technology, Technische Universität München, 80333 Munich, Germany

Keywords: Different sensors, Point clouds, Multi-resolution voxel structure, 3D space labeling

Abstract. In this work, we report a novel way of generating ground truth datasets for analyzing point clouds from different sensors and for the validation of algorithms. Instead of directly labeling large amounts of 3D points, which requires time-consuming manual work, a multi-resolution 3D voxel grid of the testing site is generated. Then, with the help of a set of labeled points from the reference dataset, we can generate a 3D labeled space of the entire testing site at different resolutions. Specifically, an octree-based voxel structure is applied to voxelize the annotated reference point cloud, so that all points are organized in 3D grids of multiple resolutions. When automatically annotating new testing point clouds, a voting-based approach is applied to the labeled points within the multi-resolution voxels in order to assign a semantic label to the 3D space represented by each voxel. Lastly, robust line- and plane-based fast registration methods are developed for aligning point clouds obtained via various sensors. Benefiting from the labeled 3D spatial information, we can easily create new annotated 3D point clouds of the same scene from different sensors by simply looking up the labels of the 3D spaces in which the points are located, which is convenient for the validation and evaluation of algorithms for point cloud interpretation and semantic segmentation.
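The core idea of the abstract — binning labeled reference points into voxels, deciding each voxel's label by majority vote, and then annotating new points by the voxel they fall into — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names are hypothetical, and a single fixed-size grid stands in for the paper's octree-based multi-resolution structure.

```python
# Sketch of voting-based voxel labeling: reference points carry semantic
# labels; each voxel takes the majority label of the reference points it
# contains; new points inherit the label of the voxel they fall into.
from collections import Counter, defaultdict

def voxel_key(point, voxel_size):
    """Map a 3D point to the integer index of its containing voxel."""
    return tuple(int(c // voxel_size) for c in point)

def build_label_grid(points, labels, voxel_size):
    """Assign each voxel the majority label of the reference points inside it."""
    votes = defaultdict(Counter)
    for p, lab in zip(points, labels):
        votes[voxel_key(p, voxel_size)][lab] += 1
    return {key: counter.most_common(1)[0][0] for key, counter in votes.items()}

def annotate(points, grid, voxel_size, unknown="unlabeled"):
    """Label new (e.g. registered) points via the labeled voxel grid."""
    return [grid.get(voxel_key(p, voxel_size), unknown) for p in points]

# Toy example: two reference classes, then annotation of a new point cloud.
ref_pts = [(0.1, 0.1, 0.0), (0.2, 0.3, 0.1), (1.6, 0.2, 0.0)]
ref_labels = ["ground", "ground", "building"]
grid = build_label_grid(ref_pts, ref_labels, voxel_size=1.0)
new_labels = annotate([(0.4, 0.4, 0.2), (1.9, 0.1, 0.3)], grid, voxel_size=1.0)
```

In the paper's octree setting, the same voting would be repeated per resolution level, so that a new point can be labeled from the finest voxel that contains enough reference points.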