CALIBRATION OF A MULTI-CAMERA ROVER

Multi-camera rovers have recently become available for common terrestrial surveying tasks. The technique is new to surveyors: while photogrammetric specialists realize the benefits of such systems immediately, surveyors often have difficulties finding efficient uses. To approach this new class of measurement systems, the technique has to be understood and confidence in its accuracy has to grow. In this paper we analyze the accuracy of a multi-camera rover in an indoor test field using photogrammetric algorithms. The results show that knowledge of the interior orientation parameters of the cameras and of their relative orientation is essential for precise geometric reconstructions. With these additional data, highly accurate results become possible.


Motivation
Close-range photogrammetry has been a specialised field within surveying and geoinformatics, with only loose connections to traditional terrestrial measurement techniques such as tacheometry, levelling and terrestrial laser scanning. Mostly, the two fields are combined via tacheometrically measured 3D pass points that can be identified in the digital images. Automatic matching between photogrammetric images and terrestrial laser scanning data has also been done. A major difficulty in combining these methods has been the use of solitary off-the-shelf digital cameras: modelling a constant offset between the perspective center of the camera and, e.g., several fixed exterior marks on the camera body is only possible when a fixed-focus camera or a fixed camera rig is used (e.g. in industrial applications).
With the new multi-camera rover (MCR) systems, consisting of several hard-mounted fixed-focus cameras, a close integration of close-range photogrammetry into tacheometric applications becomes easily possible when a tacheometer tracks the position of the camera rover or a GNSS receiver is mounted on top of the MCR. Working with an MCR is a new category of measurement technique, so new calibration methods are necessary to ensure precise results in day-to-day applications. This paper focusses on three accuracy aspects concerning the camera system of an MCR: i) the interior orientation of each mounted camera, ii) the relative orientation of all mounted cameras, and iii) the offset to the exterior target mark used for tracking and positioning of the MCR. These aspects have to be handled in a sound calibration. The interior parameters of all cameras, the parameters of the relative orientation and the 3D offset of the system relative to the exterior target are calculated by bundle-block adjustment.
The method is presented and applied to a Trimble V10 MCR. All calibration measurements are performed in the 3D calibration test field of the photogrammetric laboratory of the FHWS, which consists of 93 coded targets with known, precise 3D coordinates. The results of all tests are shown and compared to the data given by the manufacturer. Additional sensors included in the MCR (e.g. a tilt sensor or a magnetic sensor) are beyond the scope of the presented work.

Multi-camera rover
3D visualization is the main task of recently presented MCRs. A few years ago Fujifilm brought two 3D cameras with two lens systems to market, an early attempt to provide 3D image and 3D video data to a broader community. Multi-camera systems are often mounted on a ring and differ in the direction of the cameras: in industrial applications the viewing direction is to the inside (e.g. 3D scanning of persons (TEN24 MEDIA LTD, 2016)), in 3D animation and topographic applications to the outside.
Outward-directed multi-camera systems are the Trimble V10 (Trimble Navigation, 2016) and the GoPro Odyssey (GoPro, 2015). The Trimble V10 appears to be in continuous production; it has twelve cameras at two height levels, with horizontal and downward viewing directions. The GoPro Odyssey seems to be available only in small numbers; it consists of sixteen synchronized HERO4 Black cameras. (VideoStitch, 2016) recently announced a system of five Canon EOS M3 cameras with custom Samyang fisheye lenses.
GoPro and VideoStitch obviously concentrate on the 3D-visualisation and 3D-gaming market, where high geometric precision is not necessary. Trimble built the V10 as a measuring instrument that partly has the capabilities of basic tacheometers. This measuring task requires precise knowledge of the sensor system: the interior parameters of the cameras and the spatial calibration of the cameras within the system, in combination with other sensors. Two typical measuring situations are shown in fig. 1.
The V10 is at the moment a closed Trimble solution in the working process, because exact information on the sensor geometry is not easily accessible. In this article we try to gain some of this information by photogrammetric methods. In parallel we analyse the inherent system precision of the V10. The presented method can be transferred without large effort to the GoPro Odyssey and the VideoStitch system.

Figure 2: Trimble V10 with a special adapter for tripods. Usually, in day-to-day applications the Trimble V10 is used with a plumb rod and a shock-absorbing tip.

Previous Work
The FHWS built a photogrammetric test field for the estimation of interior orientation parameters in 2010. The 3D coordinates of all pass points have been measured with superior precision by tacheometric forward intersection. The field has been used constantly for SLR cameras, consumer cameras and medium-format cameras. (Hastedt and Brunn, 2011) showed its feasibility for the Fujifilm Real 3D W3. (Luhmann et al., 2011) give an introduction to the interior orientation of cameras.
In this article we focus on the Trimble V10 (s. fig. 2). Its advantage is its integration into the daily surveying environment of a field surveyor. The producer gives some data on the sensor (s. tab. 1). The pixel size results from the viewing field and the angle per pixel as s = 3.63 mm · tan(0.39 mrad) ≈ 1.4 µm. The overall size of the sensor plane is 2.752 mm × 3.669 mm (s. tab. 3), a 4:3 image format. A full-frame camera with a width of 36 mm would have a 35 mm (wide-angle) lens with a pixel size of 13 µm; the NIKON D800 has a pixel size of 4.9 µm.
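The pixel-size computation above can be reproduced with a few lines (a sketch; the focal length of 3.63 mm and the angle of 0.39 mrad per pixel are the manufacturer values quoted above):

```python
import math

c_mm = 3.63          # focal length in mm (manufacturer value)
alpha_rad = 0.39e-3  # angle per pixel in rad (manufacturer value)

# Pixel size on the sensor plane: s = c * tan(alpha)
s_um = c_mm * math.tan(alpha_rad) * 1000.0  # in micrometers
print(f"pixel size: {s_um:.2f} um")  # approx. 1.4 um
```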
(Köhler, 2014), (Shams, 2016) and (Whitehead, 2014) give background information on how Trimble calibrates the cameras (with a tilting plane and a collimator) and determines the interior orientation (checkerboard-pattern room). Trimble itself calculates the principal point, the camera constant, the radially symmetric distortion up to 7th order and the non-symmetric distortion (magnitude and direction). All resulting residuals in image coordinates are below one pixel.
The V10 user manual (Trimble, 2016) names an accuracy of the V10 of 0.4mm at a distance of 1m, 4mm in 10m and 40mm in 100m.

Outline of the article
The main topic of this article is how the V10 can be used as an MCR with software packages other than the Trimble Business Center. In this case, only the image data is available at first sight. Additional EXIF data in the jpg image files of the V10 is available, but contains just a few parameters which are of little use.
There are five main items to address:
• the stability of each camera itself (test 1),
• the stability of each camera itself under movement (test 2),
• the 3D reconstruction accuracy for 3D points using just one camera, in a spatial forward intersection (test 3),
• the stability of the relative orientation between several cameras (test 4) and
• the offset of the camera projection centers in relation to the center of a mounted target (test 5).
These items lead to five tests:
• 10 repeated images of a specific camera, here "cam 4",
• 10 repeated images of a specific camera, here "cam 4", with usual movements between the shots,
• two images of each camera, performing a spatial forward intersection,
• combined images in pairs and triples of all neighboring cameras to get the relative orientations and
• calibration of some cameras relative to a mounted target.
In the next section (cf. sec. 2) the applied mathematical methods are described. In section 3 the results of the tests for the V10 are shown. The article ends with conclusions and an outlook (cf. sec. 4).

METHOD
2.1 Camera orientation

2.1.1 Exterior orientation. The basis of all analyses is the projection equation of a 3D point onto the image plane. In homogeneous coordinates this can be written as (Faugeras and Luong, 2001)

$\mathbf{x}_1 = \mathbf{H}\,\mathbf{X}$

with $\mathbf{H}$ the projection matrix, $\mathbf{x}_1 = (x_1, y_1, z_1)^T$ the vector of ideal image coordinates and $\mathbf{X}$ the homogeneous object coordinates of the 3D point. For projected image points $z_1 = -c$ holds. This yields the fundamental equations of photogrammetry in Cartesian coordinates

$x_1 = -c\,\dfrac{r_{11}(X - X_o) + r_{21}(Y - Y_o) + r_{31}(Z - Z_o)}{r_{13}(X - X_o) + r_{23}(Y - Y_o) + r_{33}(Z - Z_o)}$

$y_1 = -c\,\dfrac{r_{12}(X - X_o) + r_{22}(Y - Y_o) + r_{32}(Z - Z_o)}{r_{13}(X - X_o) + r_{23}(Y - Y_o) + r_{33}(Z - Z_o)}$

with the ideal image point $(x_1, y_1)$, the focal length $c$, the projection center $\mathbf{X}_o = (X_o, Y_o, Z_o)$, and the elements $r_{ij}$ of the rotation matrix from the camera coordinate system to the object coordinate system (cf. tab. 2). The projection center and the rotation matrix are called the "exterior orientation".
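As a minimal sketch, the projection can be implemented directly from the collinearity relation, assuming the rotation matrix convention of tab. 2 (camera to object, so its transpose maps object-space differences into camera coordinates):

```python
import numpy as np

def project(X, Xo, R, c):
    """Project object point X into ideal image coordinates (x1, y1)
    for a camera with projection center Xo, rotation matrix R
    (camera system -> object system, cf. tab. 2) and focal length c."""
    # Object point expressed in the camera coordinate system
    d = R.T @ (np.asarray(X, float) - np.asarray(Xo, float))
    # Central projection onto the image plane z1 = -c
    return -c * d[0] / d[2], -c * d[1] / d[2]

# Example: camera in the origin, looking along the object z-axis
x1, y1 = project([0.10, 0.05, 1.0], [0, 0, 0], np.eye(3), 3.63e-3)
```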

2.1.2 Interior orientation
Modelling the interior orientation of a camera is essential to achieve high reconstruction accuracies. We use the parameterization of (Technet GmbH, 2010). $(x_1, y_1)$ are the ideal image coordinates, considering an error-free lens system (pinhole camera model). These coordinates are corrected by the radial-symmetric correction

$\Delta x_{rad} = x_1 \left( A_1 (r^2 - r_0^2) + A_2 (r^4 - r_0^4) + A_3 (r^6 - r_0^6) \right)$, analogously for $\Delta y_{rad}$,

with $r^2 = x_1^2 + y_1^2$, by the radial-asymmetric correction

$\Delta x_{asy} = B_1 (r^2 + 2 x_1^2) + 2 B_2 x_1 y_1, \quad \Delta y_{asy} = B_2 (r^2 + 2 y_1^2) + 2 B_1 x_1 y_1,$

by the principal point $(x_0, y_0)$ and by the affine correction $\Delta x_{aff} = C_1 x_1 + C_2 y_1$.

2.1.3 Bundle adjustment. All parameters of the exterior orientation and all parameters of the interior orientation are estimated in a bundle adjustment. The parameter estimation is done by least-squares adjustment (LSA) including an outlier detection. If $\mathbf{y}$ is the vector of observations (e.g. image coordinates) with an a priori known covariance matrix $D(\mathbf{y}) = \mathbf{P}^{-1}$, if $\boldsymbol{\beta}$ is the vector of unknown parameters and if $\mathbf{X}$ is the matrix of partial derivatives of the observation equations (Schwidefsky and Ackermann, 1976), the estimated vector of unknowns follows as†

$\hat{\boldsymbol{\beta}} = (\mathbf{X}^T \mathbf{P} \mathbf{X})^{-1} \mathbf{X}^T \mathbf{P} \mathbf{y}$

with its covariance matrix $D(\hat{\boldsymbol{\beta}}) = \hat{\sigma}_o^2 (\mathbf{X}^T \mathbf{P} \mathbf{X})^{-1}$ and the estimated variance factor $\hat{\sigma}_o^2 = \mathbf{v}^T \mathbf{P} \mathbf{v} / r$, with $\mathbf{v}$ the residuals and $r$ the redundancy. We assume the same a priori standard deviation $\sigma_o$ for all measured image coordinates. The estimated $\hat{\sigma}_o$ is then an important value showing the overall system accuracy: all estimated $\hat{\sigma}_o$ have to be multiplied by the a priori $\sigma_o$ to get the image point measurement accuracy. The high accuracies are achieved thanks to the automatic detection and measurement of the coded targets, so the accuracy of the point measuring process can be neglected.

† Vectors and matrices are written in bold letters.
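The interior-orientation corrections can be sketched as follows (a sketch assuming the standard balanced parameterization; the exact Technet model may differ in detail, and the parameter names A1..A3, B1, B2, C1, C2 are ours):

```python
def correct_image_point(x1, y1, A, B, C, pp, r0):
    """Apply interior-orientation corrections to an ideal image point
    (x1, y1). A = (A1, A2, A3): radial-symmetric terms balanced at
    radius r0; B = (B1, B2): radial-asymmetric (decentering) terms;
    C = (C1, C2): affine terms; pp = (x0, y0): principal point."""
    A1, A2, A3 = A
    B1, B2 = B
    C1, C2 = C
    x0, y0 = pp
    r2 = x1 * x1 + y1 * y1
    # Radial-symmetric correction (balanced form with r0)
    k = A1 * (r2 - r0**2) + A2 * (r2**2 - r0**4) + A3 * (r2**3 - r0**6)
    dx, dy = x1 * k, y1 * k
    # Radial-asymmetric (decentering) correction
    dx += B1 * (r2 + 2 * x1 * x1) + 2 * B2 * x1 * y1
    dy += B2 * (r2 + 2 * y1 * y1) + 2 * B1 * x1 * y1
    # Affine correction (applied to x) and principal point shift
    dx += C1 * x1 + C2 * y1
    return x1 + dx + x0, y1 + dy + y0
```

With all parameters set to zero the function returns the input point unchanged, which is a quick sanity check of the model.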
To study the impact of a correction on a pixel coordinate, the correction itself is calculated. The impact of the standard deviation of the parameters follows from the law of error propagation, $\sigma_\Delta^2 = \mathbf{J}\, D(\hat{\boldsymbol{\beta}})\, \mathbf{J}^T$ with $\mathbf{J}$ the Jacobian of the correction with respect to the parameters. This value is scaled by the image scale into object space; the scale depends on the object distance.
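The least-squares step can be sketched with NumPy (illustrative only; symbols follow the text: X the design matrix, P the weight matrix, y the observations):

```python
import numpy as np

def lsa(X, P, y):
    """Weighted least-squares adjustment:
    beta = (X^T P X)^-1 X^T P y, covariance D(beta) = sigma0^2 * Q
    with Q = (X^T P X)^-1 and sigma0^2 = v^T P v / (n - u)."""
    Q = np.linalg.inv(X.T @ P @ X)
    beta = Q @ (X.T @ P @ y)
    v = X @ beta - y                    # residuals
    n, u = X.shape
    sigma0_sq = (v @ P @ v) / (n - u)   # a posteriori variance factor
    return beta, sigma0_sq * Q, np.sqrt(sigma0_sq)

# Toy example: fit a line y = b0 + b1*t through three points
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
beta, D_beta, s0 = lsa(X, np.eye(3), np.array([0.1, 1.0, 2.1]))
```

The diagonal of the returned covariance matrix gives the parameter standard deviations used in the error-propagation step above.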

2.2 Calibration
2.2.1 Relative orientation of the cameras. The bundle adjustment provides exterior orientation parameters for all camera positions. Let us take two of them of the same moment, say $a$ and $b$. Both camera positions define camera coordinate systems $K_a$ and $K_b$. The exterior orientations of all cameras include the rotation matrices $R_o^a$ and $R_o^b$, which are the rotations from the object coordinate system into the specific camera coordinate system. The rotation from $K_a$ into $K_b$ follows from

$R_a^b = R_o^b \, (R_o^a)^T$.    (14)

Assuming only vertical or horizontal rotations, the rotation angles can be derived easily: the column $(\mathbf{r}_3)_a^b$ of the rotation matrix is the rotated z-axis, which is the main viewing direction of the camera. The rotation angle results from the scalar product with the unit vector $\mathbf{e}_2$ for the horizontally viewing cameras "cam 1" to "cam 7".
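The relative rotation between two cameras and the viewing-direction angle can be computed as follows (a sketch; function names are ours):

```python
import numpy as np

def relative_rotation(R_a, R_b):
    """Rotation from camera system K_a into K_b, given the rotations
    R_a and R_b from the object system into the two camera systems.
    Rotation matrices are orthogonal, so the inverse is the transpose."""
    return R_b @ R_a.T

def viewing_angle_to_e2(R_b_a):
    """Angle (deg) between the rotated z-axis (third column, i.e. the
    main viewing direction) and the unit vector e2, via the scalar
    product."""
    r3 = R_b_a[:, 2]
    e2 = np.array([0.0, 1.0, 0.0])
    return np.degrees(np.arccos(np.clip(r3 @ e2, -1.0, 1.0)))
```

For example, with R_a the identity and R_b a rotation about the z-axis, relative_rotation returns exactly that rotation, and the viewing direction stays perpendicular to e2.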
2.2.2 Offset of a mounted target to the cameras. To determine the offset between the camera projection centers and a trackable position, we add a mounted target on top of the V10. This target is measured by several NIKON D800 images; in this case, the target is realised as a coded photogrammetric target. The offsets follow from the Euclidean distance

$d_i = \sqrt{(X_{oi} - X_t)^2 + (Y_{oi} - Y_t)^2 + (Z_{oi} - Z_t)^2}$

where $(X_{oi}, Y_{oi}, Z_{oi})$ are the coordinates of the projection center of camera "cam i" and $(X_t, Y_t, Z_t)$ are the coordinates of the center of the mounted target.

$R = \begin{pmatrix} \cos\varphi\cos\kappa & -\cos\varphi\sin\kappa & \sin\varphi \\ \cos\omega\sin\kappa + \sin\omega\sin\varphi\cos\kappa & \cos\omega\cos\kappa - \sin\omega\sin\varphi\sin\kappa & -\sin\omega\cos\varphi \\ \sin\omega\sin\kappa - \cos\omega\sin\varphi\cos\kappa & \sin\omega\cos\kappa + \cos\omega\sin\varphi\sin\kappa & \cos\omega\cos\varphi \end{pmatrix}$    (15)

Table 2: Rotation matrix with $\omega$ the rotation about the x-axis, $\varphi$ the rotation about the y-axis and $\kappa$ the rotation about the z-axis, from the local camera system to the object system.

To calculate the interior orientation parameters, some values have to be fixed (see tab. 3):

Sensor size (h × w, see eq. 1): 2.752 mm × 3.669 mm
$r_0$ (see eq. 5): 1.529
A priori $\sigma_o$: 5 µm

Table 3: Fixed values for the calculation of the interior orientation parameters.
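The offset determination of sec. 2.2.2 reduces to one Euclidean distance per camera (a sketch with made-up coordinates, not the calibrated V10 values):

```python
import numpy as np

def target_offset(Xo_i, X_t):
    """Euclidean distance between the projection center Xo_i of
    camera "cam i" and the center X_t of the mounted target."""
    return float(np.linalg.norm(np.asarray(Xo_i, float) -
                                np.asarray(X_t, float)))

# Illustrative coordinates only (units: mm)
d = target_offset([0.0, 0.0, 0.0], [30.0, 40.0, 120.0])
print(f"offset: {d:.1f} mm")  # 130.0 mm
```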

RESULTS
The V10 has twelve cameras; the numbering of the cameras is given in fig. 3. A typical test image is shown in fig. 4.

Figure 3: Camera numbering of the V10: "cam 1" to "cam 7" are looking horizontally, "cam 8" to "cam 12" to the ground.

In the next section we go through all experiments.

Test 1: Stability of each camera itself
In the first test (series A with 10 images, no movement of the camera) the repeatability of the determination of the interior camera parameters is our focus. We use camera 10 (front camera) for this test. Without any correction of the pixel coordinates, σo = 4.23 results, which means the accuracy of the pixel coordinates is approximately four times worse than our a priori assumption of 5 µm (0.005 mm).
Assuming one constant camera for all ten images results in σo = 0.0470, a reduction by roughly a factor of 100. This shows the feasibility of the calibration model. Due to effects of the robust parameter estimation, no improvement can be achieved by modelling a separate camera for each image. An arbitrary set of interior calibration parameters is given in tab. 4.
A second series of 10 images (series B, no movement of the camera) verified this observation, with σo = 4.126, σo = 0.0384 and σo = 0.0550, respectively. All estimation results can be found in table 11 at the end of this article.

Test 2: Stability of a camera under movement
The second test focuses on the stability of a camera of the V10 under movement. Again we use camera 10 (front camera). Without any correction of the pixel coordinates, σo = 4.687 results. Assuming one constant camera for all images leads to σo = 0.0436. In the case of a separate interior orientation for each image, the estimated value is σo = 0.0565. A parameter set of the interior orientation of an arbitrary image is shown in table 5. All estimation results can be found in table 12 (upper part) at the end of this article. The results confirm those of the tests in section 3.1.
In summary, both tests show that the front camera can be treated as a fixed camera model with a constant interior orientation. In addition, precise knowledge of the interior parameters of the camera is important to achieve precise geometric results in photogrammetric reconstruction. In this paper, we transfer this knowledge on the front camera to the remaining 11 cameras of the V10.

Table 6: Results of test 3: spatial forward intersection using one image pair of each camera "cam 1" to "cam 12". Point 66 is the true value; the points 66ij result from camera "cam ij" (all coordinates in mm).

Test 3: 3d-reconstruction accuracy
In this section we address the question how the different cameras of the V10 should be handled for photogrammetric reconstruction. A series of image pairs of all cameras has been taken. All exterior parameters have been calculated by parameter estimation, with different variants of modelling the interior orientation parameters. Finally, an arbitrary point (no. 66) has been reconstructed.
Again, the results of this test show an improvement of the accuracy. σo decreases from 4.7049 to 0.0932 when one single camera correction model is introduced for all cameras. When each camera has a separate set of interior orientation parameters, it goes down to σo = 0.0549, an improvement by a factor of approx. 2. All estimation results can be found in table 12 (lower part) at the end of this article.
To give an impression of the achievable accuracy, point 66 has been reconstructed from all image pairs. The results are listed in table 6; here the interior orientation of each camera is modelled separately. For object distances of approx. 1 m, a reconstruction accuracy far below one millimeter can be achieved with simultaneous calibration. Up to this point we focused on single cameras; the next tests take the combination of distinct cameras into account.

Table 7: Results of the relative orientation (ω, φ, κ between neighboring cameras) with simultaneous estimation of the image corrections (see sec. 3.4).

Figure 5: Typical image to get the offset between the camera projection center and a mounted target.

Test 4: Relative orientation between cameras
In test 4, simultaneous photos of the test field were taken with multiple cameras. The exterior and interior orientation parameters are calculated. From the exterior orientation parameters the relative rotation from a camera system to its neighboring camera system is calculated (s. eq. (14) and (15)). Table 7 shows some results. These values differ from the approximate values given by the manufacturer. Therefore, again, for precise photogrammetric reconstructions using several cameras simultaneously, precise knowledge of the relative orientation is necessary.

Test 5: Offset between mounted target and the cameras
In some cases the user wants to directly determine the position of the V10. For further calculations, the 3D offsets between the mounted target and the projection centers of the cameras are necessary. They can be calculated from V10 test field images (s. fig. 5). Three example offsets are given in table 10, computed from the target coordinates (s. tab. 8) and the coordinates of the projection centers (s. tab. 9).

Point 420: X = 2606.521448 ± 0.1, Y = 1023.863979 ± 0.1, Z = 1238.880403 ± 0.1

Table 8: Resulting coordinates of the center of the mounted target from photogrammetric observations.

CONCLUSIONS
In this article a method for the calibration of a multi-camera rover and for the determination of the interior orientation of its cameras has been described and tested on several series of V10 photos. All images were taken in the test field of the FHWS. The investigation showed that a multi-camera rover can be used for precise surveying purposes independently of the special software solution of the provider. Depending on the object distance, even high-accuracy measuring tasks can be handled. This makes the new surveying technique interesting for a larger group of users.
Sub-millimeter accuracies were achieved in the indoor environment. For larger distances the shown accuracies have to be scaled.

Table 10: Resulting distances of the offset calibration between two projection centers and the center of the mounted target (see sec. 3.5).
Nevertheless, with appropriate camera modelling, including the interior orientation parameters of all cameras, acceptable accuracies for a wide range of applications can be achieved (e.g. cadastral surveying).
The tests showed that the effort to get inside the technique of an MCR is quite high. Specialized software is needed to handle the necessary parameters (lens distortion, relative orientation and offsets). Trimble provides this for its users as an almost "black box". System checks are still quite superficial; the future will show the necessity of deeper system checks. Free software to handle MCR data would be interesting for independent results from the V10 and from other MCRs.
These tests of an MCR are just the starting point of a series of research: for a complete description of an MCR, a calibration cage for panorama sensors and a combination of panorama photogrammetry with terrestrial laser scanning are envisaged.
The planning of a complete surveying campaign is still challenging, although Trimble gives its users a lot of hints for their work. Starting on known survey points for taking panoramas might be a way to approach this new measuring technique. Another example of a starting project with fixed tacheometric control points is given in fig. 6 for the reconstruction of a small object (a birdhouse).
The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XLI-B5, 2016 XXIII ISPRS Congress, 12-19 July 2016, Prague, Czech Republic