The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Articles | Volume XLIII-B2-2020
Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLIII-B2-2020, 361–368, 2020
https://doi.org/10.5194/isprs-archives-XLIII-B2-2020-361-2020

12 Aug 2020

COMPUTED TOMOGRAPHY DATA COLOURING BASED ON PHOTOGRAMMETRIC IMAGES

K. Zhan1, Y. Song2, D. Fritsch3, G. Mammadov4, and J. Wagner1
  • 1Adaptive Structures in Aerospace Engineering, University of Stuttgart, Germany
  • 2Oceanic Machine Vision Group, GEOMAR Helmholtz Centre for Ocean Research Kiel, Germany
  • 3Institute for Photogrammetry, University of Stuttgart, Germany
  • 4Institute for Parallel and Distributed Systems, University of Stuttgart, Germany

Keywords: Computed tomography, photogrammetry, point cloud colouring, data fusion, surface matching

Abstract. Nowadays, various methods and sensors are available for 3D reconstruction tasks; however, it is still necessary to integrate the advantages of different technologies to optimize the quality of 3D models. Computed tomography (CT) is an imaging technique that takes a large number of radiographic measurements from different angles in order to generate slices of the object, but without colour information. The aim of this study is to put forward a framework that extracts colour information from photogrammetric images for the corresponding CT surface data with high precision. The 3D models of the same object are generated by CT and photogrammetry respectively, and a transformation matrix aligning the extracted CT surface to the photogrammetric point cloud is determined through a coarse-to-fine registration process. The estimated poses of the images relative to the photogrammetric point cloud, which can be obtained from the standard image alignment procedure, also apply to the aligned CT surface data. For each camera pose, a depth image of the CT data is calculated by projecting all CT points onto the image plane. The depth image should, in principle, agree with the corresponding photogrammetric image. Points that cannot be seen from the pose, but are nevertheless projected onto the depth image, are excluded from the colouring process. This is realized by comparing the range values of neighbouring pixels and identifying the 3D points with larger range values. The same procedure is applied for all image poses to obtain the coloured CT surface. Thus, by using photogrammetric images, we achieve a coloured CT dataset with high precision, which combines the advantages of both methods. Rather than simply stitching different data together, we deep-dive into the photogrammetric 3D reconstruction process and optimize the CT data with colour information.
This process can also serve as a starting point, and offer further options, for other data fusion tasks.
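The per-pose colouring step described in the abstract, projecting the aligned CT points into each camera, building a depth image, and excluding points whose range exceeds the nearest range at the same pixel, can be sketched as follows. This is a minimal illustration under simplifying assumptions (an ideal pinhole camera without lens distortion, nearest-pixel projection, and a hypothetical `depth_tol` tolerance parameter); it is not the authors' implementation.

```python
import numpy as np

def colour_ct_points(points, K, R, t, image, depth_tol=0.01):
    """Assign image colours to the visible CT surface points for one camera pose.

    points    : (N, 3) CT points already aligned to the photogrammetric frame
    K         : (3, 3) camera intrinsic matrix
    R, t      : rotation (3, 3) and translation (3,) of the camera pose
    image     : (H, W, 3) photogrammetric image for this pose
    depth_tol : relative range tolerance for the visibility test (assumed value)
    Returns an (N, 3) colour array; occluded or out-of-view points stay NaN.
    """
    h, w = image.shape[:2]

    # Transform points into the camera frame and project with the pinhole model.
    cam = points @ R.T + t                   # (N, 3) camera-frame coordinates
    z = cam[:, 2]                            # range along the optical axis
    uv = (cam @ K.T)[:, :2] / z[:, None]     # pixel coordinates
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)

    in_view = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    idx = np.flatnonzero(in_view)

    # Depth image: keep the smallest range per pixel, i.e. the visible surface.
    depth = np.full((h, w), np.inf)
    for i in idx:
        if z[i] < depth[v[i], u[i]]:
            depth[v[i], u[i]] = z[i]

    # A point is coloured only if its range matches the depth image, so points
    # hidden behind a nearer point on the same pixel are excluded.
    colours = np.full_like(points, np.nan)
    for i in idx:
        if z[i] <= depth[v[i], u[i]] * (1.0 + depth_tol):
            colours[i] = image[v[i], u[i]]
    return colours
```

Repeating this over all estimated poses and merging the per-pose colours (e.g. by averaging) yields the coloured CT surface; the depth-image comparison is what keeps back-facing and occluded CT points from receiving colours that belong to the surface in front of them.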