The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Volume XLII-2/W13
https://doi.org/10.5194/isprs-archives-XLII-2-W13-393-2019
04 Jun 2019

A NORMALIZED SURF FOR MULTISPECTRAL IMAGE MATCHING AND BAND CO-REGISTRATION

J. P. Jhan and J. Y. Rau

Keywords: Image Matching, Band Registration, Multispectral Camera, SURF

Abstract. Because the raw images of a multi-lens multispectral (MS) camera contain significant misregistration errors, image registration is necessary for band co-registration. Image matching is an essential step of image registration: it obtains conjugate features in the overlapping areas, which are then used to estimate the coefficients of a transformation model that corrects the geometric errors. However, owing to the non-linear intensity differences caused by the spectral response, feature-based image matching (such as SURF) obtains only a few conjugate features on cross-band MS images. Unlike SURF, which extracts local extrema in a multi-scale space and applies a fixed threshold to determine a feature, the proposed normalized SURF (N-SURF) extracts features on a single scale, calculates the cumulative distribution function (CDF) of the feature responses, and obtains consistent features from the CDF. In this study, two datasets acquired with a Tetracam MiniMCA-12 and a Micasense RedEdge Altum are used to evaluate the matching performance of N-SURF. Results show that N-SURF extracts approximately 2–3 times as many features, matches more points, and is more efficient than the original SURF. Moreover, with successful MS image matching, the conjugate points can be used to compute the coefficients of a geometric transformation model. Three transformation models, i.e. affine, projective, and extended projective, are compared for MS band co-registration. Results show that the extended projective model is better than the others, as it can compensate for differences in lens distortion and viewpoint, achieving a co-registration accuracy of 0.3–0.6 pixels.
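
The core idea of N-SURF described above is to replace SURF's fixed Hessian threshold with a quantile cut on the empirical CDF of single-scale detector responses, so every band yields a comparable set of features regardless of its intensity distribution. The following Python sketch illustrates that idea using the SURF implementation in opencv-contrib-python; the function name, the low hessianThreshold seed value, and the keep_fraction quantile are illustrative assumptions, not the authors' published settings.

import cv2
import numpy as np

def nsurf_detect(gray, keep_fraction=0.5):
    """Single-scale SURF detection with a CDF-based cut on responses.
    A rough sketch of the N-SURF idea; keep_fraction is an assumed
    parameter, not taken from the paper."""
    # Single-scale detection: one octave/layer, and a low Hessian
    # threshold so the response distribution is densely sampled
    # before the normalization step.
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=10,
                                       nOctaves=1, nOctaveLayers=1)
    keypoints = surf.detect(gray, None)
    if not keypoints:
        return []
    # Empirical CDF of the detector responses: sorting the responses
    # and cutting at a fixed quantile gives a band-independent
    # threshold instead of SURF's global fixed one.
    responses = np.sort([kp.response for kp in keypoints])
    cutoff = responses[int((1.0 - keep_fraction) * (len(responses) - 1))]
    return [kp for kp in keypoints if kp.response >= cutoff]

Because the quantile, rather than an absolute response value, decides which features survive, two bands with very different contrast still return roughly the same number of candidates, which is what makes cross-band matching tractable.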
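Once conjugate points are matched across bands, a transformation model warps each band onto the reference band. The sketch below fits the two standard models with OpenCV's built-in estimators; coregister_band is a hypothetical helper name. The paper's extended projective model, which adds terms to absorb lens-distortion and viewpoint differences between lenses, has no built-in OpenCV equivalent and would need a custom least-squares fit, so it is only noted in the comments.

import cv2
import numpy as np

def coregister_band(src_pts, dst_pts, band_img, out_shape, model="projective"):
    """Warp one MS band onto the master band using a model estimated
    from matched conjugate points. Minimal sketch; the extended
    projective model from the paper is not implemented here."""
    src = np.asarray(src_pts, np.float32).reshape(-1, 1, 2)
    dst = np.asarray(dst_pts, np.float32).reshape(-1, 1, 2)
    if model == "affine":
        # 6-parameter affine fit with RANSAC outlier rejection.
        A, _ = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)
        return cv2.warpAffine(band_img, A, out_shape[::-1])
    # 8-parameter projective (homography) fit; the extended projective
    # model would append extra correction terms to this mapping.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return cv2.warpPerspective(band_img, H, out_shape[::-1])

The RANSAC step matters here: cross-band matches inevitably contain outliers, and a robust fit is what allows the reported sub-pixel co-registration accuracy to be approached in practice.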