The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLII-2/W17, 315–322, 2019
https://doi.org/10.5194/isprs-archives-XLII-2-W17-315-2019

  29 Nov 2019

SEAMLESS CO-REGISTRATION OF IMAGES FROM MULTI-SENSOR MULTISPECTRAL CAMERAS

M. Shahbazi1 and C. Cortes2
  • 1Centre de géomatique du Québec, Québec, Canada
  • 2Dept. of Geomatic Engineering, University of Calgary, Alberta, Canada

Keywords: multispectral images, fisheye geometry, data fusion, camera calibration, trifocal tensor, co-registration

Abstract. Small-format, consumer-grade multi-camera multispectral systems have gained popularity in recent years, largely owing to the simplicity of their integration onboard platforms with limited payload capacity, such as Unmanned Aerial Vehicles (UAVs). Commercially available photogrammetric software can process the image data collected by these cameras to create multispectral ortho-rectified mosaics. However, misalignments of several pixels between spectral bands are a common issue with these solutions, which can undermine the spectral and geometric integrity of the data. Moreover, advanced processing workflows such as object detection and classification with deep-learning algorithms require band-to-band co-registered images rather than a single mosaic. We propose a twofold solution for seamless band-to-band registration of images captured by five cameras integrated into a miniature multispectral camera system. This approach consists of 1) a robust self-calibration of the multispectral camera system to accurately estimate the intrinsic calibration parameters and relative orientation parameters of all cameras; and 2) a single-capture, band-to-band co-registration method based on trifocal constraints. This approach differs from the existing literature in that it is fully automatic, makes no assumptions about the scene, uses no best-fit projective or similarity transformations, and does not attempt cross-spectral feature-point matching. Our experiments confirm that the proposed co-registration method can accurately fuse multispectral images from a miniature multi-camera system and is invariant to large depth variations in the captured scene.
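The abstract does not detail the trifocal-constraint registration, but the underlying idea, transferring a point observed in two spectral bands into a third via the trifocal tensor, can be sketched with standard point-line-point transfer (Hartley and Zisserman's formulation). The camera matrices and the scene point below are illustrative, not values from the paper:

```python
import numpy as np

def trifocal_tensor(P2, P3):
    """Trifocal tensor for canonical cameras P1 = [I | 0], P2 = [A | a4],
    P3 = [B | b4]: T_i = a_i b4^T - a4 b_i^T."""
    T = np.zeros((3, 3, 3))
    for i in range(3):
        T[i] = np.outer(P2[:, i], P3[:, 3]) - np.outer(P2[:, 3], P3[:, i])
    return T

def transfer_point(T, x1, l2):
    """Point-line-point transfer: x3^k = x1^i * l2_j * T_i^{jk}, where l2 is
    any line through the matching point in view 2 (except the epipolar line)."""
    return sum(x1[i] * (l2 @ T[i]) for i in range(3))

# Illustrative setup: three pinhole views (identity rotations for brevity).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[1.0], [0.2], [0.1]])])
P3 = np.hstack([np.eye(3), np.array([[-0.5], [1.0], [0.3]])])
X = np.array([0.3, -0.4, 6.0, 1.0])      # homogeneous scene point
x1, x2, x3 = P1 @ X, P2 @ X, P3 @ X      # homogeneous image observations

T = trifocal_tensor(P2, P3)
l2 = np.cross(x2, x2 + np.array([1.0, 0.5, 0.0]))  # a generic line through x2
x3_est = transfer_point(T, x1, l2)
print(x3_est[:2] / x3_est[2])  # matches x3[:2] / x3[2]
```

Because the transfer is driven entirely by the calibrated geometry, it requires no cross-spectral feature matching and no scene assumptions, which is consistent with the properties the abstract claims for the proposed method.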