The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLII-2/W13, 581–588, 2019
https://doi.org/10.5194/isprs-archives-XLII-2-W13-581-2019
04 Jun 2019

AUTOMATIC CO-REGISTRATION OF AERIAL IMAGERY AND UNTEXTURED MODEL DATA UTILIZING AVERAGE SHADING GRADIENTS

S. Schmitz1, M. Weinmann2, and B. Ruf1,2
  • 1Fraunhofer IOSB, Video Exploitation Systems, Karlsruhe, Germany
  • 2Institute of Photogrammetry and Remote Sensing, Karlsruhe Institute of Technology, Karlsruhe, Germany

Keywords: Co-registration, Pose estimation, 2D-3D Correspondence, Average Shading Gradients, Iterative Closest Point

Abstract. Comparing current image data with existing 3D model data of a scene provides an efficient way to keep models up to date. In order to transfer information between 2D and 3D data, a preliminary co-registration is necessary. In this paper, we present a concept to automatically co-register aerial imagery and untextured 3D model data. To refine a given initial camera pose, our algorithm computes dense correspondence fields using SIFT flow between gradient representations of the model and the camera image, from which 2D–3D correspondences are obtained. These correspondences are then used in an iterative optimization scheme to refine the initial camera pose by minimizing the reprojection error. Since the model is assumed to contain no texture information, our algorithm builds upon an existing method based on Average Shading Gradients (ASG) to generate gradient images from raw geometry information alone. We apply our algorithm to co-register aerial photographs to an untextured, noisy mesh model. We investigated different magnitudes of initial pose error and show that the proposed approach can reduce the final reprojection error to a minimum of 1.27 ± 0.54 pixels, less than 10% of its initial value. Furthermore, our evaluation shows that our approach outperforms the accuracy of a standard Iterative Closest Point (ICP) implementation.
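The pose-refinement step described in the abstract, minimizing the reprojection error over 2D–3D correspondences, can be illustrated with a toy Gauss-Newton sketch. This is not the authors' implementation: for brevity it optimizes only the camera translation with the rotation fixed to the identity, and all intrinsics, point sets, and the perturbation are made-up illustrative values; the paper's method refines the full camera pose from SIFT-flow correspondences.

```python
import numpy as np

# Assumed pinhole intrinsics (illustrative values, not from the paper).
fx = fy = 800.0          # focal lengths in pixels
cx = cy = 512.0          # principal point

def project(pts, t):
    """Pinhole projection of Nx3 points with camera translation t (R = I)."""
    q = pts + t                          # points in the camera frame
    u = fx * q[:, 0] / q[:, 2] + cx
    v = fy * q[:, 1] / q[:, 2] + cy
    return np.stack([u, v], axis=1)

def refine_translation(pts, obs, t_init, iters=10):
    """Gauss-Newton minimization of the 2D reprojection error."""
    t = t_init.astype(float).copy()
    for _ in range(iters):
        q = pts + t
        r = (project(pts, t) - obs).ravel()      # residuals, shape (2N,)
        # Analytic Jacobian of (u, v) w.r.t. (tx, ty, tz), stacked 2N x 3.
        J = np.zeros((2 * len(pts), 3))
        J[0::2, 0] = fx / q[:, 2]
        J[0::2, 2] = -fx * q[:, 0] / q[:, 2] ** 2
        J[1::2, 1] = fy / q[:, 2]
        J[1::2, 2] = -fy * q[:, 1] / q[:, 2] ** 2
        # Normal-equations update step.
        t -= np.linalg.solve(J.T @ J, J.T @ r)
    return t

# Synthetic 2D-3D correspondences: project known 3D points with a true
# pose, then start the refinement from a perturbed initial pose.
rng = np.random.default_rng(0)
pts3d = rng.uniform([-2, -2, 4], [2, 2, 10], size=(50, 3))
t_true = np.array([0.1, -0.2, 0.3])
obs2d = project(pts3d, t_true)
t0 = t_true + np.array([0.5, -0.4, 0.6])        # erroneous initial pose
t_est = refine_translation(pts3d, obs2d, t0)
err = np.abs(project(pts3d, t_est) - obs2d).mean()
print(t_est, err)
```

With noise-free correspondences the reprojection error drops to numerical precision; in the paper's setting the correspondences come from SIFT flow on ASG gradient images and are noisy, which is why the reported residual is on the order of a pixel rather than zero.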