FAST REGISTRATION OF TERRESTRIAL LIDAR POINT CLOUD AND SEQUENCE IMAGES

Images carry rich color information that can support recognition and classification of point clouds, and registration is an important step in the joint application of images and point clouds. In order to give rich texture and color information to a LiDAR point cloud, this paper investigates a fast registration method for a point cloud and sequence images based on a terrestrial LiDAR system. First, the transformation matrix between one of the sequence images and the LiDAR point cloud is calculated from 2D-3D correspondences; second, the position and attitude relationships among the multi-angle sequence images are used to calculate all remaining transformation matrixes in the horizontal direction; last, the registration of the point cloud and sequence images is completed based on the collinearity of image point, projective center and LiDAR point. The experimental results show that the method is simple and fast, the stitching error between adjacent images is small, and the overall registration accuracy is high, so the method can be used in engineering applications.


INTRODUCTION
At present, most terrestrial LiDAR systems consist of a laser scanner and a camera. The laser scanner acquires three-dimensional (3D) spatial geometry and intensity information, while the camera acquires a sequence of color images by revolving around a fixed axis. Registration of the LiDAR point cloud with two-dimensional (2D) color images can enhance the visualization and identifiability of the 3D point cloud, and helps object recognition and extraction (Alex et al., 2013).
Sequence images are composed of a set of individual images, and seamless registration of a LiDAR point cloud with sequence images is a difficult problem.
To solve the problem of 2D-3D registration, many methods have been developed (Mishra and Zhang, 2012). Because a stereo pair of optical images can be used for 3D reconstruction by photogrammetry techniques (Liu et al., 2006) or stereo vision (Sirmacek et al., 2013), the image-to-point cloud registration problem can be converted into a 3D-3D registration problem (Zhao et al., 2005). In this research direction, the SIFT algorithm (Lowe, 2004; Böhm and Becker, 2007) is usually used to extract corresponding points; 3D reconstruction is then applied to the corresponding point pairs; finally, ICP (Chen and Medioni, 1991; Besl and McKay, 1992) is used to register the dense 3D point cloud reconstructed from a pair of adjacent images with the 3D LiDAR point cloud (Li and Low, 2009). However, these methods are complicated, and the accuracy of the 3D reconstruction is easily affected by incorrect correspondences. In order to simplify the computational process, this paper proposes a simple and fast algorithm that calculates all transformation matrixes from the inherent geometric relations among the sequence images.

METHODS
The goal of this paper is to register a terrestrial LiDAR point cloud with sequence images. In essence, this means calculating all rigid transformation matrixes between the 3D LiDAR point cloud and the 2D optical images. First, the paper calculates an accurate transformation matrix between one of the sequence images and the point cloud, based on the collinearity of laser point, image point and projective center, to complete the 2D-3D registration; then, the paper calculates the other transformation matrixes from the fixed location relationships between the sequence images.
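Each such rigid transformation maps a LiDAR point into one image. A minimal sketch of that mapping (Python with NumPy; the intrinsic values and the test point are hypothetical, not from the paper's experiment):

```python
import numpy as np

# Hypothetical camera intrinsics: focal lengths (f_x, f_y) in pixels and
# principal point (u_0, v_0), arranged as the intrinsic matrix.
fx, fy, u0, v0 = 1000.0, 1000.0, 640.0, 480.0
K = np.array([[fx, 0.0, u0],
              [0.0, fy, v0],
              [0.0, 0.0, 1.0]])

def project(K, R, T, X_lidar):
    """Map a 3D LiDAR point to pixel coordinates:
    s * [u, v, 1]^T = K * (R * X_lidar + T)."""
    Xc = R @ X_lidar + T        # rigid transform into the camera frame
    uvw = K @ Xc                # apply the intrinsic matrix
    return uvw[:2] / uvw[2]     # perspective division by the depth s

# Identity pose and a point 2 m in front of the camera, 0.1 m to the right:
uv = project(K, np.eye(3), np.zeros(3), np.array([0.1, 0.0, 2.0]))
# uv is (690.0, 480.0): 50 px right of the principal point, since fx * 0.1/2 = 50
```

The perspective division by the depth is what makes the relation projective rather than a plain matrix product.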

2D-3D registration
The collinearity relationship between the 2D pixel coordinates of an optical image and the 3D coordinates of a LiDAR point cloud is physically meaningful and rigorous (Zhang et al., 2015). It is usually expressed as a transformation matrix [R | T] that consists of a rotation matrix and a translation vector: the translation vector of x, y, z coordinates represents the location relation between the LiDAR and camera coordinate systems, while the rotation matrix, calculated from the roll, pitch and yaw angles, represents the pose of the 2D optical image in the LiDAR coordinate system. The collinearity equation between 2D pixel coordinates and 3D point cloud coordinates is expressed by Eq. (1):

    X_pixel = R_camera · [R | T] · X_LiDAR                                  (1)

where
    X_pixel : homogeneous pixel coordinates on the optical image, X_pixel = [u  v  1]^T
    R_camera : camera intrinsic matrix
    [R | T] : transformation matrix between 2D pixel coordinates and 3D point cloud coordinates (R is the rotation matrix, expressed using the Rodrigues matrix; T is the translation vector)

Eq. (1) can be expanded into Eq. (2):

                      [ f_x   0    u_0 ]
    s · [u  v  1]^T = [ 0     f_y  v_0 ] · (R · X_LiDAR + T)                (2)
                      [ 0     0    1   ]

where
    u_0, v_0 : image coordinates of the camera's principal point
    f_x, f_y : focal lengths along the horizontal and vertical axes of the optical image
    s : scale factor (depth of the point in the camera frame)

In Eq. (2), the camera intrinsic parameters can be obtained by camera calibration. X_pixel and X_LiDAR are the coordinates of corresponding feature points and can be treated as known parameters. If there are at least three pairs of corresponding feature points, the unknown rotation matrix and translation vector can be calculated.
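The paper's solver works from as few as three corresponding pairs; purely as an illustration, the sketch below instead uses the classical Direct Linear Transform (DLT), which needs six or more non-coplanar pairs but recovers R and T from Eq. (2) with nothing beyond NumPy (function name and structure are this sketch's own, not the paper's):

```python
import numpy as np

def pose_from_correspondences(K, X3d, uv):
    """Estimate R and T from 2D-3D point pairs with the classical Direct
    Linear Transform: solve for the 3x4 projection matrix P = s * K [R | T]
    from >= 6 non-coplanar pairs, then factor out the known intrinsic
    matrix K. (Illustrative stand-in for the paper's three-point solution.)"""
    rows = []
    for (X, Y, Z), (u, v) in zip(X3d, uv):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    P = Vt[-1].reshape(3, 4)             # null vector -> projection matrix
    M = np.linalg.inv(K) @ P             # M = s [R | T], scale s unknown
    M /= np.linalg.norm(M[2, :3])        # third row of R has unit norm
    if np.linalg.det(M[:, :3]) < 0:      # resolve the overall sign
        M = -M
    return M[:, :3], M[:, 3]             # rotation matrix, translation vector
```

With exact, non-degenerate correspondences the SVD null vector reproduces the projection matrix up to scale and sign, which the last two steps normalize away.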

Registration of LiDAR point cloud and sequence images
The essence of registration between a LiDAR point cloud and sequence optical images is to calculate all transformation matrixes and then complete the 2D-3D projective transforms. In practice, the camera is mounted on the terrestrial LiDAR system and acquires a set of 2D color images by rotating around a fixed axis (see Figure 1) (Barnea and Filin, 2007). Because the camera revolves around a fixed axis, which in general is the z axis of the LiDAR coordinate system, the image angles related to the x and y axes of the LiDAR coordinate system are invariant.

In Eq. (3), the rotation vector of the rotation matrix between adjacent images can therefore be expressed as V = [0  0  θ]^T, where θ is the rotation angle between adjacent images.
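Under this fixed-axis assumption, the pose of every image follows from the first pose and the angle θ alone. A sketch (function names are this sketch's own; it additionally assumes the projective center lies on the rotation axis, so the translation vector stays fixed, which the paper does not state explicitly):

```python
import numpy as np

def rot_z(theta):
    """Rotation about the z axis by theta radians, i.e. the Rodrigues
    form of the rotation vector V = [0, 0, theta]^T."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def sequence_poses(R0, T0, theta, n_images):
    """Derive the pose of every image from the first pose (R0, T0) and the
    fixed angle theta between adjacent images: image k sees the point
    cloud pre-rotated by k*theta about the LiDAR z axis."""
    return [(R0 @ rot_z(k * theta), T0) for k in range(n_images)]
```

Because each pose is composed directly from the first one rather than chained image-to-image, errors in the adjacent-image angle do not accumulate, matching the paper's observation that the overall error is set by the first image.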

RESULTS
The experiment used a terrestrial LiDAR system that includes a camera and a laser scanner to acquire color and 3D information of a scene. Before use, the camera must be calibrated to obtain the camera intrinsic matrix R_camera (see Section 2.1). The LiDAR scans the scene in the horizontal direction over 0-360 degrees (see Figure 2).
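Once all poses are known, the registration can be used to transfer image color onto the point cloud. The helper below is a hypothetical sketch of that final step, not code from the paper: it picks a sequence image by azimuth (valid only under the fixed z-axis rotation assumed above), projects the point via the collinearity equation, and samples the pixel color:

```python
import numpy as np

def colorize(points, poses, K, images, theta):
    """Assign each LiDAR point a color from the sequence image selected by
    its azimuth (index = azimuth // theta), projected with that image's
    pose (R, T) and the intrinsic matrix K."""
    h, w = images[0].shape[:2]
    colors = np.zeros((len(points), 3), dtype=np.uint8)
    for i, X in enumerate(points):
        az = np.arctan2(X[1], X[0]) % (2 * np.pi)
        k = int(az // theta) % len(images)   # pick the covering image
        R, T = poses[k]
        Xc = R @ X + T
        if Xc[2] <= 0:
            continue                         # point behind the camera
        u, v = (K @ Xc)[:2] / Xc[2]          # collinearity projection
        if 0 <= u < w and 0 <= v < h:
            colors[i] = images[k][int(v), int(u)]
    return colors
```

Points that fall outside every image, or behind the chosen camera, simply keep the zero color here; a production version would need occlusion handling as well.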

CONCLUSIONS AND DISCUSSION
The paper proposed a registration method for LiDAR point clouds and sequence images based on the inherent geometric relation between sequence images in a terrestrial LiDAR system. First, the collinearity relation between 3D point, image point and projective center was used to calculate the rigid transformation matrix between one image and the LiDAR point cloud; then, all other transformation matrixes were calculated from the fixed rotation angle between sequence images, completing the registration of the LiDAR point cloud and sequence images. The registration result is accurate, and the overlaps of adjacent images are seamless.

The research basis of the paper is a fixed rotation angle of the camera. When the rotation angle is known, the proposed method can quickly calculate all transformation matrixes between the images and the LiDAR point cloud using the inherent geometric relation. Meanwhile, the overall error is determined by the first image, and no accumulated error can occur. The method is simple, and its efficiency is high. However, if the rotation angle of the camera is not constant, the method becomes unstable and large errors appear. Therefore, future research will focus on the case of an unfixed rotation angle.

Figure 4. Registration of LiDAR point cloud and sequence images

The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XLII-2/W7, 2017 ISPRS Geospatial Week 2017, 18-22 September 2017, Wuhan, China