The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Volume XLIII-B2-2020
https://doi.org/10.5194/isprs-archives-XLIII-B2-2020-737-2020
12 Aug 2020

EFFICIENT COMPUTATION OF POSTERIOR COVARIANCE IN BUNDLE ADJUSTMENT IN DBAT FOR PROJECTS WITH LARGE NUMBER OF OBJECT POINTS

N. Börlin, A. Murtiyoso, and P. Grussenmeyer

Keywords: Bundle adjustment, Quality control, Posterior covariance, Software, Photogrammetry

Abstract. One of the major quality control parameters in bundle adjustment is the posterior estimate of the covariance of the estimated parameters. Posterior covariance computations have been part of the open source Damped Bundle Adjustment Toolbox in Matlab (DBAT) since its first public release. However, for large projects, the computation of the posterior covariances, especially those of the object points, has been time-consuming.
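As a reminder of the quantity in question (the notation below is assumed here for illustration, since the abstract does not spell it out), the posterior covariance of the estimated parameters is the inverse of the normal matrix scaled by the estimated variance factor:

    \hat{C}_{\hat{x}\hat{x}} = \hat{\sigma}_0^2 N^{-1}, \qquad N = A^{\mathsf{T}} W A,

where A is the Jacobian of the observation equations, W the observation weight matrix, and \hat{\sigma}_0^2 the posterior variance of unit weight.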

The non-zero structure of the normal matrix depends on the ordering of the parameters to be estimated. For some algorithms, the ordering of the parameters strongly affects the computational effort needed to compute the results. If the parameters are ordered with the object points first, the non-zero structure of the normal matrix forms an arrowhead.
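Schematically (block symbols assumed here for illustration), ordering the object point parameters p before the camera and orientation parameters c gives

    N = \begin{bmatrix} N_{pp} & N_{pc} \\ N_{pc}^{\mathsf{T}} & N_{cc} \end{bmatrix},

where N_{pp} is block diagonal with one 3-by-3 block per object point and N_{cc} collects the comparatively few camera parameters; the dense border formed by N_{pc} and N_{cc} against the block-diagonal N_{pp} gives the non-zero pattern its arrowhead shape.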

In this paper, the legacy DBAT posterior computation algorithm was compared to three other algorithms: the Classic algorithm based on the reduced normal equations, the Sparse Inverse algorithm by Takahashi, and the novel Inverse Cholesky algorithm. The Inverse Cholesky algorithm computes the explicit inverse of the Cholesky factor of the normal matrix in arrowhead ordering.
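A minimal dense sketch of the Inverse Cholesky idea is given below (Python/NumPy, written here only for illustration; DBAT itself is implemented in Matlab and works on the sparse, arrowhead-ordered normal matrix). With N = L L^T, the posterior variances are the diagonal of N^{-1} = L^{-T} L^{-1}, i.e. the squared column norms of the explicit inverse L^{-1}:

    import numpy as np
    from scipy.linalg import cholesky, solve_triangular

    def posterior_variances_inverse_cholesky(N):
        # Cholesky factorization N = L L^T (L lower triangular).
        L = cholesky(N, lower=True)
        # Explicit inverse of the Cholesky factor.
        L_inv = solve_triangular(L, np.eye(N.shape[0]), lower=True)
        # diag(N^{-1}) = diag(L^{-T} L^{-1}) = squared column norms of L^{-1}.
        return (L_inv ** 2).sum(axis=0)

In the sparse case, the arrowhead ordering presumably keeps both L and its inverse close to the block-diagonal-plus-border structure, which is what would make the explicit inverse affordable; the abstract only states that the algorithm uses the arrowhead ordering.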

The algorithms were applied to normal matrices of ten data sets of different types and sizes. The project sizes ranged from 21 images and 100 object points to over 900 images and 400,000 object points. Both self-calibration and non-self-calibration cases were investigated. The results suggest that the Inverse Cholesky algorithm is the fastest for projects up to about 300 images. For larger projects, the Classic algorithm is faster. Compared to the legacy DBAT implementation, the Inverse Cholesky algorithm provides a performance increase of one to two orders of magnitude. The largest data set was processed in about three minutes on a five-year-old workstation.

The legacy and Inverse Cholesky algorithms were implemented in Matlab. The Classic and Sparse Inverse algorithms included code written in C. For a general toolbox such as DBAT, a pure Matlab implementation is advantageous, as it removes any dependencies on, e.g., compilers. However, for a specific lab with mostly large projects, compiling and using the Classic algorithm will most likely give the best performance. Nevertheless, the Inverse Cholesky algorithm is a significant addition to DBAT, as it enables relatively rapid computation of more statistical metrics, further reinforcing its use for reprocessing bundle adjustment results of black-box solutions.