Volume XLI-B1
Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLI-B1, 879-883, 2016
https://doi.org/10.5194/isprs-archives-XLI-B1-879-2016
© Author(s) 2016. This work is distributed under
the Creative Commons Attribution 3.0 License.

06 Jun 2016

ROBUST MOSAICKING OF UAV IMAGES WITH NARROW OVERLAPS

J. Kim1, T. Kim1, D. Shin2, and S. H. Kim2
  • 1Dept. of Geoinformatic Engineering, Inha University, 100 Inharo, Namgu, Incheon, Korea
  • 2Agency for Defense Development, Yuseong, Daejeon, Korea

Keywords: Image Mosaicking, Geometric correction, UAV Images, Narrow overlaps

Abstract. This paper considers fast and robust mosaicking of UAV images under the circumstance that adjacent UAV images have very narrow overlaps. Image transformation for image mosaicking consists of two estimations: relative transformations and global transformations. For estimating relative transformations between adjacent images, perspective transformation is widely used. For estimating global transformations, the panoramic constraint is widely used. While perspective transformation is a general model for 2D-to-2D transformation, it may not be optimal under weak stereo geometry such as images with narrow overlaps. While the panoramic constraint enables reliable conversion of relative transformations into global transformations for panoramic image generation, it is not applicable to UAV images in linear motion. For these reasons, a robust approach was investigated to generate a high-quality mosaicked image from narrowly overlapping UAV images. For relative transformations, several transformation models were considered to ensure robust estimation of the relative transformation relationship: perspective transformation, affine transformation, coplanar relative orientation, and relative orientation with reduced adjustment parameters. Performance was evaluated for each transformation model. The experimental results showed that affine transformation and adjusted coplanar relative orientation were superior to the others in terms of stability and accuracy. For the global transformation, we set the initial approximation by converting each relative transformation to a common transformation with respect to a reference image. In future work, we will investigate constrained relative orientation for enhancing the geometric accuracy of image mosaicking, and bundle adjustment of each relative transformation model for optimal global transformation.
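The two estimation steps the abstract describes can be illustrated with a minimal sketch: a least-squares fit of a 6-parameter affine model from point correspondences between adjacent images (the model the paper found robust under narrow overlaps), followed by chaining the pairwise transforms into global transforms with respect to a reference image. This is a hypothetical illustration in NumPy, not the authors' implementation; the function names and the choice of image 0 as reference are assumptions.

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares 2D affine (6-parameter) fit from point
    correspondences src -> dst, each an (N, 2) array with N >= 3.
    Returns a 3x3 homogeneous matrix so transforms chain by
    matrix multiplication."""
    n = src.shape[0]
    A = np.zeros((2 * n, 6))
    A[0::2, 0:2] = src   # rows for x': a*x + b*y + tx
    A[0::2, 2] = 1.0
    A[1::2, 3:5] = src   # rows for y': c*x + d*y + ty
    A[1::2, 5] = 1.0
    b = dst.reshape(-1)
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.array([[p[0], p[1], p[2]],
                     [p[3], p[4], p[5]],
                     [0.0,  0.0,  1.0]])

def chain_to_reference(pairwise):
    """Compose consecutive pairwise transforms T_{i -> i-1} into
    global transforms T_{i -> 0}, taking image 0 as the reference
    (the 'common transformation' of the abstract's initial
    approximation)."""
    global_transforms = [np.eye(3)]
    for T in pairwise:
        global_transforms.append(global_transforms[-1] @ T)
    return global_transforms
```

With narrow overlaps the correspondences cluster in a thin strip, which is exactly why a lower-parameter model such as this affine fit can be more stable than a full perspective (8-parameter) estimate.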