ONE-TO-MULTIPLE AUTOMATIC HIGH-ACCURACY REGISTRATION OF TERRESTRIAL LIDAR AND OPTICAL IMAGES

The registration of terrestrial laser point clouds and close-range images is a key step in the high-precision 3D reconstruction of cultural relics. Given the current demand for high texture resolution in the cultural-relic field, registering point cloud and image data during object reconstruction leads to a one-point-cloud-to-multiple-images problem. In current commercial software, the pairwise registration of the two kinds of data is carried out by manually partitioning the point cloud data, manually matching point cloud and image data, and manually selecting corresponding 2D points between image and point cloud; this process not only greatly reduces working efficiency, but also degrades registration accuracy and causes texture seams in the colored point cloud. To solve these problems, this paper takes the whole-object image as intermediate data and uses image matching to establish an automatic one-to-one correspondence between the point cloud and multiple images. Matching the center-projection reflection-intensity image of the point cloud against the optical image yields corresponding feature points automatically, and a spatial similarity transformation model based on the Rodrigues matrix, together with iterative weight selection, achieves automatic high-accuracy registration of the two kinds of data. This method is expected to serve the high-precision, high-efficiency automatic 3D reconstruction of cultural relics, and has both scientific research value and practical significance.

* Corresponding author: huchunmei@buca.edu.cn


INTRODUCTION
The automatic registration of terrestrial 3D laser point clouds and close-range images is a difficult problem in the registration of non-homologous data. At present, automatic registration of these two data sources is realized mainly by matching the terrestrial 3D laser reflection-intensity image against the close-range image. It comprises two parts: the automatic correspondence between the high-resolution images and the point cloud, and the matching and registration of corresponding points between point cloud and image. There has been considerable progress in automatic registration methods. For the matching primitives, work has focused on the automatic extraction of corresponding feature points and corresponding lines, mostly for airborne images and airborne LiDAR data with POS data. Registration methods include the collinearity equation method, the angular cone method, the direct linear transformation method, and the direct solution based on the Rodrigues matrix. Terrestrial LiDAR data of cultural-relic objects carries no auxiliary information such as POS, and its image data lacks both the topological information of aerial images and a one-to-one correspondence between each image and its point cloud, which makes the extraction of corresponding points between the two data sets difficult. The large volume of point cloud data and the large number of high-resolution images are the main factors limiting automation. To address these problems, this paper proposes to use the center-projection reflection-intensity image and the corresponding global image as mediators: image matching establishes the one-to-one correspondence between the high-resolution images and the point cloud, and after corresponding features are extracted automatically, the Rodrigues-matrix mathematical model and the iterative weight-selection method are used to obtain high-precision registration parameters. The specific method is as follows:
1. Generation of the point cloud reflection-intensity center-projection image and RGB image: First, the object point cloud is acquired with the terrestrial laser scanner and its minimum bounding box is computed. Then the datum plane of the reflection-intensity image is determined from the bounding box, the image extent from the intersection of the laser rays with the datum plane, and the grid size from the point cloud resolution. Finally, the center-projection reflection-intensity image and the RGB image of the point cloud are generated by nearest-neighbor interpolation.
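The rasterization step above, assigning each projected point's reflection intensity to its nearest grid cell, can be sketched as follows. This is a minimal numpy sketch, not the paper's implementation; the point layout, intensity values, and cell size are illustrative assumptions.

```python
import numpy as np

def rasterize_intensity(uv, intensity, cell):
    """Rasterize projected 2D points with per-point reflection intensity
    onto a regular grid by nearest-neighbour assignment (each point is
    written into the cell whose centre is closest to it)."""
    lo = uv.min(axis=0)
    idx = np.round((uv - lo) / cell).astype(int)   # nearest grid cell of each point
    w, h = idx.max(axis=0) + 1
    img = np.zeros((h, w))
    img[idx[:, 1], idx[:, 0]] = intensity          # later points overwrite earlier ones
    return img

# four projected points on a 2 m grid (illustrative)
uv = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 2.0]])
img = rasterize_intensity(uv, np.array([10.0, 20.0, 30.0, 40.0]), cell=2.0)
```

A real pipeline would also fill empty cells (the paper uses nearest-neighbor interpolation) and choose the cell size from the point cloud resolution.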

2. One-to-multiple matching of point cloud and images:
For each point cloud block, the whole image of the region is captured, together with local high-resolution images at the required resolution. First, the point cloud RGB image is matched against the whole image, so that the whole image corresponds to the point cloud one-to-one. Then the whole image is matched against each local image to determine the location of the local image within the whole image. From these results, the local images and the point cloud are automatically put into one-to-one correspondence, with the whole image as intermediary.
3. Automatic high-accuracy registration of point cloud and images: For the point cloud data corresponding to a local high-resolution image, the center-projection reflection-intensity image and RGB image are generated by the method of step 1, and the point cloud RGB image is matched against the corresponding optical image. Coarse registration of the two kinds of data at arbitrary angles is achieved using the corresponding feature points and a registration model based on the Rodrigues matrix. In the high-precision iterative least-squares registration based on the collinearity equation, an improved iterative weight-selection scheme, the Danish method applied to standardized residuals, progressively down-weights gross errors: as the iteration proceeds, the weights of outlying observations shrink toward zero, the iteration terminates, and the adjustment result is no longer affected by gross errors. Finally, with the collinearity equation as the mathematical model, texture mapping generates a seamless colored point cloud model.
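The iterative weight-selection idea can be illustrated on a toy linear problem. This is a minimal sketch of Danish-style reweighting, not the paper's collinearity-equation adjustment; the weight function w = exp(-(v̄ - k)) beyond threshold k, the constants, and the line-fitting example are illustrative assumptions.

```python
import numpy as np

def danish_fit(A, l, k=2.0, iters=20):
    """Robust least squares A x ~ l with Danish-style reweighting:
    observations whose standardized residual exceeds k are progressively
    down-weighted, so gross errors end up with (near-)zero weight."""
    w = np.ones(len(l))
    x = None
    for _ in range(iters):
        W = np.diag(w)
        x = np.linalg.solve(A.T @ W @ A, A.T @ W @ l)   # weighted normal equations
        v = l - A @ x                                    # residuals
        s = np.sqrt((w * v**2).sum() / max(len(l) - A.shape[1], 1))
        s = max(s, 1e-6)                                 # numerical floor
        vbar = np.abs(v) / s                             # standardized residuals
        w = np.where(vbar <= k, 1.0, np.exp(-(vbar - k)))
    return x, w

# toy observations on the line y = 2t + 1, with one gross error
t = np.arange(10.0)
l = 2 * t + 1
l[3] += 50.0                       # simulated gross error
A = np.c_[t, np.ones_like(t)]
x, w = danish_fit(A, l)
```

After a few iterations the contaminated observation's weight collapses toward zero and the estimate is driven by the clean observations alone.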

METHOD
For an object, photographs are taken block by block in a prescribed way; a building, for example, can be divided into east, west, south, north, interior and exterior blocks, and the point cloud data is partitioned in the same way. To match the global optical images of each object, each block point cloud is used to generate a center-plane-projection reflection-intensity image and an RGB color image, which establishes the relation between the block point cloud and its optical images.

Point cloud center plane projection image generation
Central projection is the projection mode in which the projection rays converge at a point. The main ideas and steps are as follows. First, determine the projection plane of the point cloud: the center of the 3D laser scanner is the projection center, and the projection plane is the plane perpendicular to the line from the projection center to the point cloud centroid. In point-normal form the projection plane is A(x - x0) + B(y - y0) + C(z - z0) = 0, where (A, B, C) is the direction of that line and (x0, y0, z0) is a point on the plane. Then, for each point of the point cloud, compute the point where the ray from the projection center through that point intersects the plane; this intersection is the projection of the point onto the projection plane, as shown in Figure 1.
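The ray-plane intersection above can be sketched in a few lines of numpy. The projection center, plane, and sample points below are illustrative assumptions, not the paper's data.

```python
import numpy as np

def central_project(points, center, plane_point, normal):
    """Centrally project 3D points onto the plane through `plane_point`
    with normal `normal`, using rays from the projection `center`."""
    n = normal / np.linalg.norm(normal)
    d = points - center                      # ray direction for each point
    # ray: center + t*d ; plane: (x - plane_point) . n = 0
    t = ((plane_point - center) @ n) / (d @ n)
    return center + t[:, None] * d

# scanner at the origin, projection plane z = 1 (illustrative)
pts = np.array([[0.0, 0.0, 4.0], [1.0, 1.0, 2.0]])
proj = central_project(pts, center=np.zeros(3),
                       plane_point=np.array([0.0, 0.0, 1.0]),
                       normal=np.array([0.0, 0.0, 1.0]))
```

Points whose ray is parallel to the plane (d . n = 0) have no intersection; a full implementation would filter them out.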

One-to-one correspondence between local images and their corresponding point clouds
After the reflection-intensity image and RGB image of each object block's center-projected point cloud are generated, the whole image of the object and the local high-resolution image data are acquired with a digital camera. The pixel resolution of the whole image is adjusted so that it is essentially the same as that of the point cloud RGB image. The correspondence between each block's point cloud and the whole image is then established from corresponding points obtained by SIFT matching, and the local high-resolution images are likewise matched to the whole image by SIFT to establish the corresponding transformation.
With the one-to-one correspondence between local image and global image established, the affine transformation between them is computed from the matched corresponding points. Using the whole image as intermediary, the relation between the local high-resolution image and the point cloud reflection-intensity image is then obtained. From the affine transformation parameters, the minimum bounding rectangle of the local image within the point cloud reflection-intensity image can be derived, and from the 3D coordinates of the points in that region, the minimum-bounding-box algorithm yields the point cloud region corresponding to the high-resolution image. The point cloud and image are then registered with high accuracy; the basic process is shown in Figure 4. First, stable features are extracted and described in scale space, and then the generated feature vectors are matched.
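The affine relation between local and global image can be estimated from matched point pairs by least squares. A minimal numpy sketch with synthetic correspondences (the transform and points are illustrative assumptions):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine transform mapping src -> dst,
    i.e. dst ~ src @ M.T + t, solved via a homogeneous design matrix."""
    A = np.c_[src, np.ones(len(src))]                 # (n, 3) design matrix
    params, *_ = np.linalg.lstsq(A, dst, rcond=None)  # (3, 2) solution
    M, t = params[:2].T, params[2]
    return M, t

# synthetic matched pairs generated by a known affine (illustrative)
rng = np.random.default_rng(0)
src = rng.random((6, 2))
M_true = np.array([[1.2, 0.1], [-0.2, 0.9]])
t_true = np.array([3.0, -1.0])
dst = src @ M_true.T + t_true
M, t = fit_affine(src, dst)
```

With real SIFT matches the pairs contain outliers, so in practice the estimate would be wrapped in a robust scheme (e.g. RANSAC) rather than plain least squares.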
SIFT matching consists of five steps: extreme-value detection in scale space, precise localization of key points, determination of the main direction of key points, key point description, and key point matching.
Extreme-value detection in scale space obtains, by scale transformation of the original image, a sequence of representations at multiple scales; the main contours of this sequence are extracted in scale space and used as feature vectors, supporting key point extraction such as edge and corner detection at different resolutions. The scale space is constructed on a difference-of-Gaussians (DOG) pyramid. To find the extrema of the DOG function, each pixel is compared with all of its adjacent points to check whether it is larger or smaller than all of them in both the image domain and the scale domain.
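The neighbor comparison above amounts to testing a sample against the 26 neighbors in its 3x3x3 scale-space cube. A minimal numpy sketch (the toy DOG stack is an illustrative assumption):

```python
import numpy as np

def is_scale_space_extremum(dog, s, y, x):
    """True if dog[s, y, x] is strictly larger or strictly smaller than
    all 26 neighbours in the 3x3x3 cube spanning image and scale space."""
    cube = dog[s-1:s+2, y-1:y+2, x-1:x+2]
    v = dog[s, y, x]
    others = np.delete(cube.ravel(), 13)   # drop the centre sample itself
    return bool(v > others.max() or v < others.min())

# toy 3-scale DOG stack with a single bright response (illustrative)
dog = np.zeros((3, 5, 5))
dog[1, 2, 2] = 1.0
```

The test is only valid for interior samples; border pixels and the first and last scale of each octave are skipped in a full implementation.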
Precise localization of key points: the DOG response is sensitive to noise and edges, so the local extrema detected in the previous step must be further screened to remove unstable and falsely detected extreme points. This enhances matching performance and makes the result more stable and noise-resistant. Determination of the main direction of key points: extracting stable extreme points at different scales guarantees the scale invariance of the key points; assigning direction information to each key point makes it invariant to image rotation as well. The direction assignment is realized by computing the gradient at each extreme point. Key point description is a key step for the subsequent matching; the description is in effect a mathematical characterization of the key point, and the descriptor covers not only the key point itself but also the neighborhood points around it.
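The gradient-based direction assignment can be sketched as an orientation histogram over the key point's neighborhood, with the peak bin giving the dominant direction. This is a simplified illustration (no Gaussian weighting or peak interpolation, which real SIFT uses); the ramp patch is an illustrative assumption.

```python
import numpy as np

def dominant_orientation(patch, bins=36):
    """Magnitude-weighted histogram of gradient orientations over a patch;
    returns the centre (in degrees) of the peak bin."""
    gy, gx = np.gradient(patch)                      # gradients along y, then x
    mag = np.hypot(gx, gy)                           # gradient magnitude
    ang = np.degrees(np.arctan2(gy, gx)) % 360.0     # orientation in [0, 360)
    hist, edges = np.histogram(ang, bins=bins, range=(0, 360), weights=mag)
    k = np.argmax(hist)
    return 0.5 * (edges[k] + edges[k + 1])

# intensity ramp increasing along +x: all gradients point along +x
patch = np.tile(np.arange(8.0), (8, 1))
theta = dominant_orientation(patch)   # centre of the first 10-degree bin
```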
Key point matching: feature points are matched by computing the Euclidean distance between the 128-dimensional descriptors of the two sets of key points; the smaller the distance, the higher the similarity, and when the distance falls below a set threshold the pair is judged a successful match, as shown in Figure 5. Registration methods include the collinearity equation method, the angular cone method, the direct linear transformation method, and the direct solution based on the Rodrigues matrix. Analysis shows that the collinearity equation and angular cone solutions require good initial values and complete the calculation by iteration; the direct linear transformation method places requirements on the control points (when the control points lie, or nearly lie, in one plane the result is wrong) and needs more than six pairs of corresponding points; the direct solution based on the Rodrigues matrix needs only three pairs of control points, and although the transformation model is not strict, it computes the exterior orientation elements by matrix addition, subtraction, and multiplication alone. This paper uses a coarse-to-fine registration method, with the matched feature points obtained in Section 2.2.4 as matching primitives. First, barycentric coordinates are used to simplify the spatial similarity transformation model, formula (3); then the improved iterative weight-selection scheme of the standardized-residual Danish method down-weights the small gross errors until the registration meets the accuracy requirement.
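The appeal of the Rodrigues-matrix solution is that a valid rotation matrix is built from three parameters by matrix arithmetic alone, with no trigonometric functions and no initial values. A minimal numpy sketch of the construction R = (I - S)^(-1)(I + S) from an antisymmetric matrix S (the sign convention and ordering of the factors vary between texts; this is one common form, and the parameter values are illustrative):

```python
import numpy as np

def rodrigues_matrix(a, b, c):
    """Build a rotation matrix from three parameters via the Rodrigues
    (Cayley) construction R = (I - S)^(-1) (I + S), where S is the
    antisymmetric matrix formed from a, b, c. R is orthogonal with
    det(R) = 1 for any real a, b, c."""
    S = np.array([[0.0, -c,   b],
                  [c,   0.0, -a],
                  [-b,  a,   0.0]])
    I = np.eye(3)
    return np.linalg.solve(I - S, I + S)   # (I - S)^(-1) (I + S)

R = rodrigues_matrix(0.1, -0.2, 0.3)
```

Because R is orthogonal by construction, the three parameters can be solved for directly from corresponding points, which is why only three pairs of control points are needed.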

Experimental data
In this paper, a single-sided relief in ancient Egyptian sandstone slate is taken as an example. The specimen is 1 m long, 0.8 m wide and 0.25 m thick. A FARO T330 is used to acquire the terrestrial 3D laser data, and a Canon 600D is used to acquire the close-range images. The point cloud sampling interval is 2 mm, for a total of 157,528 scanned points. The advantages and reliability of the method are verified with these experimental data.

Analysis
Using the above experimental data, the method is implemented on the VS2013 platform with OpenCV3.

Figure 1 .
Figure 1. The point cloud is projected to the plane by the central projection

Figure 4 .
Figure 4. One-to-one flow chart of point cloud and image

Figure 5 .
Figure 5. Matching results of optical images with point cloud RGB images

Figure 7 .
Figure 7. Point clouds corresponding to local images: a) local image, b) global image, c) RGB point cloud image, d) point cloud gray image, e) point cloud region corresponding to the local image

Figure 8 .
Figure 8. Acquisition of corresponding points from local images and point cloud RGB images

From the antisymmetric matrix, formula (4), and the orthogonal matrix, formula (5), the model formula for the angular elements of the registration parameters is obtained, formula (6).

Figure 9 .
Figure 9. Point cloud reflection intensity image