DEEP-LEARNING-BASED POINT CLOUD UPSAMPLING OF NATURAL ENTITIES AND SCENES
Keywords: Geometric deep learning, Point cloud upsampling, Graph convolutional networks, Natural scenes
Abstract. Limited accessibility, occlusions, or sensor placement can produce unevenly sampled laser-scanning-based point clouds. Such uneven coverage and partial lack of detail can affect the computation of geometric features and yield a visually unpleasant site description. Three-dimensional interpolation-driven solutions have been shown to produce oversmoothed results, as such algorithms ignore local patterns and variations within the surface. In that respect, deep neural networks (DNNs) have the potential to learn the more complex forms typical of the rich morphological patterns that natural landforms and the entities within them tend to exhibit. While existing research has focused on the upsampling of man-made objects, little attention has been devoted to natural scenes and the entities therein. To address this, we propose a DNN-based approach that exploits the self-similarity of geometric details to tackle this generally ill-posed problem. Specifically, we treat two key elements that stand at the root of point-based DNN design: the definition and selection of neighboring points, and interpolation in a high-dimensional feature space. We show how introducing a graph convolutional network and an attention unit helps address these matters, and demonstrate how knowledge of densely sampled regions can be learned and transferred to sparsely sampled ones through geometric learning methods.
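The two design elements named in the abstract, neighbor selection and interpolation in feature space, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the function names, the fixed neighborhood size `k`, the temperature `tau`, and the use of raw coordinates as stand-in features are all illustrative assumptions. A kNN search defines the graph neighborhood of each point, and a softmax attention over feature-space affinities weights the interpolation that synthesizes new points, doubling the point count.

```python
import numpy as np

def knn_indices(points, k):
    # Neighbor selection: brute-force kNN over pairwise squared distances.
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)  # exclude each point as its own neighbor
    return np.argsort(d2, axis=1)[:, :k]

def attention_upsample(points, feats, k=4, tau=0.1):
    # Synthesize one new point per input point as an attention-weighted
    # combination of its k graph neighbors (feature-space interpolation).
    idx = knn_indices(points, k)
    new_pts = np.empty_like(points)
    for i in range(len(points)):
        nb = idx[i]
        # Attention weights: softmax over negative feature-space distances.
        aff = -((feats[nb] - feats[i]) ** 2).sum(-1) / tau
        w = np.exp(aff - aff.max())
        w /= w.sum()
        # New point: midpoint of the point and its attended neighborhood.
        new_pts[i] = 0.5 * (points[i] + (w[:, None] * points[nb]).sum(0))
    return np.concatenate([points, new_pts], axis=0)

rng = np.random.default_rng(0)
pts = rng.normal(size=(64, 3))
feats = pts.copy()  # stand-in for learned per-point features
dense = attention_upsample(pts, feats, k=4)
print(dense.shape)  # (128, 3)
```

In a learned setting the features would come from a graph convolutional encoder and the attention weights from a trained unit, so the interpolation adapts to local surface patterns rather than relying on fixed geometric affinities as this sketch does.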