Volume XXXIX-B3
Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XXXIX-B3, 69-74, 2012
https://doi.org/10.5194/isprsarchives-XXXIX-B3-69-2012
© Author(s) 2012. This work is distributed under
the Creative Commons Attribution 3.0 License.

23 Jul 2012

SIFT FOR DENSE POINT CLOUD MATCHING AND AERO TRIANGULATION

J. R. Tsay and M. S. Lee
Dept. of Geomatics, National Cheng Kung University, 70101 Tainan, Taiwan

Keywords: SIFT, Dense Matching, Quality Filtering (QF), Aerotriangulation, Point Cloud

Abstract. This paper presents a new method for dense point cloud matching and aerotriangulation based on the well-known scale invariant feature transform (SIFT) technique. Modern digital cameras can take high-resolution aerial images with high end lap between contiguous images in a strip and, if needed, also with high side lap between images of neighboring strips. Automated image matching for the generation of high-density 3D object points therefore becomes feasible, and a new method is developed to perform this processing. Moreover, it can carry out aerotriangulation and automatic tie point measurement without requiring input data, such as block and strip data, that provide image overlap information. To increase the effectiveness of the method when simultaneously processing a large number of large-format aerial images in a block area, two schemes, Quality Filtering (QF) and Affine Transformation Prediction (AFTP), are proposed for automatic tie point extraction and measurement with satisfactory efficiency. Tests are done using aerial images taken with the RMK DX camera in Taiwan, and high-precision ground check points are adopted to evaluate the quality of the results. The tests show that a high density of 3D object points is extracted and determined. Furthermore, automatic tie point selection and measurement is done efficiently even when no prior knowledge of the image overlap is available. The ground check points also show that the accuracy of the photo coordinates is 0.21 pixels, i.e., it reaches a subpixel level.
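The core matching step described in the abstract, pairing SIFT descriptors between overlapping images and filtering out ambiguous correspondences, can be illustrated with a minimal sketch. The snippet below uses Lowe's standard nearest-neighbor ratio test as a stand-in quality filter; it is illustrative only and is not the authors' exact QF algorithm, and the function name and the ratio threshold of 0.8 are assumptions.

```python
import numpy as np

def ratio_test_match(desc_a, desc_b, ratio=0.8):
    """Match SIFT-style descriptors from image A to image B.

    For each descriptor in A, find its two nearest neighbors in B
    (Euclidean distance) and keep the match only if the closest
    neighbor is clearly better than the second closest. This ratio
    test discards ambiguous matches, a simple form of quality
    filtering (illustrative sketch, not the paper's QF scheme).
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)  # distances to all of B
        j1, j2 = np.argsort(dists)[:2]              # two nearest neighbors
        if dists[j1] < ratio * dists[j2]:           # unambiguous match only
            matches.append((i, int(j1)))
    return matches

# Tiny synthetic example: descriptors in A are slightly perturbed
# copies of those in B, so each should match its counterpart.
desc_b = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
desc_a = desc_b + 0.1
print(ratio_test_match(desc_a, desc_b))  # → [(0, 0), (1, 1), (2, 2)]
```

In a full pipeline in the spirit of the paper, such filtered matches would feed tie point measurement and block adjustment; a prediction step (like the paper's AFTP) could further restrict the search for correspondences to regions consistent with an estimated affine transformation between images.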