3D model of buildings based on multi-source data fusion

The construction of a digital city is, at its core, the construction of a three-dimensional city model, and building a 3D data model is a process of collecting and fusing multi-source spatial information. Current means of spatial information collection fall mainly into two categories: three-dimensional laser scanning and UAV photogrammetry. Three-dimensional laser scanning is mostly ground-based; it acquires the lower part of a building with high accuracy, but its information collection at the top of a building is insufficient. UAV photogrammetry, by contrast, collects spatial information over large areas with high efficiency and can be used to construct a large-scale three-dimensional city model; however, owing to practical constraints such as flight height and ground occlusion, the models it produces have well-captured tops but poor quality at the bottom. Combining the two methods therefore yields a more complete point cloud model. Aiming at the above problems, this paper proposes a three-dimensional building modeling method based on multi-source data fusion. First, the point cloud data and image data obtained by the 3D laser scanner and the UAV are preprocessed, and both data forms are converted into high-precision point cloud data. Then the two point cloud data sets are fused to generate a whole point cloud. Finally, based on the patches generated from the whole point cloud, the point cloud model is reverse modeled to obtain the corresponding building solid model and, in turn, a more complete 3D city model.


INTRODUCTION
With the rapid development of the 'digital earth' and 'smart city' in modern society, the two-dimensional geographic information produced by traditional surveying and mapping can no longer meet the needs of rapidly developing industries, and high-precision three-dimensional city models with real texture are receiving more and more attention. A three-dimensional digital model expresses terrain and spatial information more directly and completely; it is an indispensable data source in the construction of the '3D city' and has broad application prospects. Owing to limitations of the three-dimensional laser scanner's measurement process, the registered point cloud model inevitably suffers from distortion, holes, gaps, and tilt, so the resulting three-dimensional model lacks integrity and detail. Researchers have therefore proposed re-measuring the object with three-dimensional laser scanning, but because of the instrument's volume, weight, and other costs, a second measurement is in most cases undesirable. UAV oblique photogrammetry, by contrast, is flexible, efficient, and low-cost, and many researchers have built on it; the registration and fusion of UAV images and 3D laser scans has become a research hotspot in recent years. Tang Xuehai used UAV photogrammetry combined with three-dimensional laser scanning to reconstruct single-tree spatial entities for the accurate inventory of forest assets. Balcavia discussed the advantages and disadvantages of laser scanning and photogrammetry and verified through practice the feasibility and application prospects of combining the two. Habib used an iterative Hough transform and a voting algorithm to register magnetic resonance images and laser point clouds using surfaces as primitives, obtaining higher accuracy. Guarnieri combined laser scanning with photogrammetry for the reconstruction of historical buildings. Roux combined 3D laser scanning and aerial photogrammetry, registering point cloud and image with a simplex matching strategy to reconstruct a 3D model.

METHOD
The registration between the oblique-image point cloud, in coordinate system o-xyz, and the ground laser point cloud, in coordinate system O-XYZ, is modeled as a seven-parameter similarity transformation:

    [X, Y, Z]^T = μ · R(α, β, γ) · [x, y, z]^T + T

In the formula, μ is the scale factor between the two point-set coordinate systems, R is the rotation matrix, and T = [x0, y0, z0]^T is the translation matrix, where x0, y0, z0 are the translation values along each coordinate axis. The registration model therefore contains three rotation parameters α, β, γ, three translation parameters x0, y0, z0, and one scale parameter μ; with these, the oblique-image point cloud in o-xyz can be converted into the ground laser point cloud coordinate system O-XYZ. Assuming successive rotations about the Z, Y, and X axes, R(α, β, γ) can be expressed as

    R(α, β, γ) = R_Z(α) · R_Y(β) · R_X(γ)    (3)

Once these seven transformation parameters are solved, the mapping relationship between the oblique-image point cloud and the ground laser point cloud is established, and the two point clouds can be registered and fused.
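As an illustrative sketch only (not the software pipeline used in this paper), the seven-parameter model above can be estimated in closed form from a few homonymous point pairs. The following Python/NumPy function, whose names are our own, recovers μ, R, and T by the standard SVD-based least-squares method:

```python
import numpy as np

def estimate_similarity(src, dst):
    """Estimate scale mu, rotation R, translation t so that dst ≈ mu * R @ src + t.

    src, dst: (n, 3) arrays of corresponding points (n >= 3, non-degenerate).
    Closed-form least-squares solution via SVD of the cross-covariance.
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    # Cross-covariance of the centred point sets; its SVD yields the rotation.
    H = src_c.T @ dst_c
    U, S, Vt = np.linalg.svd(H)
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against a reflection
    R = Vt.T @ D @ U.T
    mu = np.trace(np.diag(S) @ D) / (src_c ** 2).sum()  # optimal scale factor
    t = dst.mean(axis=0) - mu * R @ src.mean(axis=0)
    return mu, R, t
```

With four or more well-distributed homonymous point pairs, applying the returned μ, R, t maps the oblique-image point cloud into the laser coordinate system, which is exactly the coarse registration the seven-parameter model describes.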

DATA SOURCE
This paper selects an experimental building as the research object. The building occupies a medium-sized site, is regular in shape and free-standing, and its surroundings are relatively open, without trees or other occlusions. This makes it easy to set up 3D laser scanner stations for data acquisition, and its low height reduces the data errors caused by the scanner's limited accuracy when scanning the top of a building. At the same time, because the surroundings are unobstructed, the UAV's flight height for oblique photogrammetry can be relatively low, helping to ensure that the UAV captures the top of the building accurately and completely. Point cloud data were acquired with a FARO Focus3D laser scanner, and image data were acquired with a Pegasus D2000 UAV. The raw point cloud and image data must be preprocessed into the multi-source heterogeneous point cloud data used for registration: the raw point cloud data are preprocessed in SCENE, and the raw image data are preprocessed in ContextCapture. The data acquisition and processing flow is shown in Figure 1.

Point cloud data preprocessing: Three-dimensional scanning of a building entity yields a huge volume of high-density point cloud data. Owing to the field environment and other factors during scanning, these data inevitably contain noise points. Therefore, to facilitate the subsequent point cloud registration and to ensure model accuracy during 3D reconstruction, the high-density point cloud must be preprocessed, including point cloud denoising and point cloud data compression. Point cloud denoising is the process of separating the target entity from the noise points; the most obvious noise includes the green vegetation next to the building and temporary obstacles such as parked vehicles. Filtering methods such as the Laplace-operator filter and least-squares-fitting filtering are used in combination in practice, since a single denoising method can rarely handle all of the noise data.
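Alongside the operator-based filters just mentioned, one widely used automated filter is statistical outlier removal, in which a point is dropped when its mean distance to its nearest neighbours is unusually large. The following is a hedged NumPy sketch of that idea (not the SCENE implementation; names and thresholds are our own):

```python
import numpy as np

def remove_statistical_outliers(points, k=8, std_ratio=2.0):
    """Drop points whose mean distance to their k nearest neighbours exceeds
    the global mean of that statistic by std_ratio standard deviations.

    points: (n, 3) array.  Returns the filtered (m, 3) array.
    Brute-force O(n^2) for clarity; real pipelines use a KD-tree.
    """
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    # mean distance to the k nearest neighbours (column 0 is the point itself)
    knn_mean = np.sort(d, axis=1)[:, 1:k + 1].mean(axis=1)
    thresh = knn_mean.mean() + std_ratio * knn_mean.std()
    return points[knn_mean <= thresh]
```

Points belonging to the dense building surface survive the threshold, while isolated returns from passing vehicles or vegetation fringes tend to be removed.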

4.1.2
Scan processing: In SCENE, a new project is created for data processing, and the preprocessed data are imported into it. Image scan processing: the imported raw images are black and white, and processing converts them into color images, as shown in Figures 2 and 3.

4.1.3
Registration: Registration is performed after scan processing is completed. Manual registration is used here: homonymous points are labeled on the photo data, with two to three homonymous feature points taken on each of the eight photos to ensure registration accuracy, as shown in Figure 4. On the selected image, points are picked with a cross mark, and each pair of adjacent scan-station images is registered and verified as a group to obtain a cluster.

UAV image data processing
As the core work of a UAV aerial survey system, image data processing includes UAV image preprocessing, high-precision camera calibration, UAV image matching, aerial triangulation, DOM generation, and seamless-splicing 3D model reconstruction. Among these, aerial triangulation is the core of UAV image processing; the quality of its results directly affects the accuracy of the later DEM, DOM, and 3D modeling. It mainly includes the following three aspects:
(1) Feature point extraction and matching: the SURF algorithm is used to select the same feature points in different UAV images for matching, and the RANSAC algorithm is used to refine the matching results and improve accuracy.
(2) Relative and absolute orientation: relative orientation is performed using the focal length of each photo and the matched feature points, restoring the spatial attitude and position of each photograph; absolute orientation then uses field-measured control points to give each photo absolute spatial coordinates.
(3) Bundle adjustment: the projection bundles are recovered from the extracted feature points and tie points, and the bundle adjustment model is solved over the whole study area to obtain the image point coordinates and the interior and exterior orientation elements of the photos.
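The RANSAC refinement in step (1) can be illustrated with a deliberately simplified motion model, a pure 2D translation between matched keypoints instead of the full image geometry. This is only a sketch of the hypothesize-and-verify loop; function names and parameters are our own:

```python
import numpy as np

def ransac_translation(src, dst, n_iter=200, tol=1.0, seed=0):
    """Robustly estimate a 2D translation mapping src -> dst when some of
    the putative matches are wrong (the role RANSAC plays after SURF
    matching).  src, dst: (n, 2) arrays of matched keypoint coordinates.
    Returns (translation, inlier_mask)."""
    rng = np.random.default_rng(seed)
    best_t, best_mask = None, np.zeros(len(src), dtype=bool)
    for _ in range(n_iter):
        i = rng.integers(len(src))          # minimal sample: a single match
        t = dst[i] - src[i]                 # candidate translation
        mask = np.linalg.norm(src + t - dst, axis=1) < tol
        if mask.sum() > best_mask.sum():    # keep the largest consensus set
            best_t, best_mask = t, mask
    # refit on all inliers for the final estimate
    best_t = (dst[best_mask] - src[best_mask]).mean(axis=0)
    return best_t, best_mask
```

The consensus-set voting is what rejects the wrong SURF matches; in a real aerial triangulation the sampled model would be a fundamental matrix or homography rather than a translation.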

4.2.1
Image data import: create a new project, and in the new block import the image files into ContextCapture by selecting individual images or adding an entire directory. Confirm that the sensor size and the focal length are filled in correctly, as shown in Figure 6.

4.2.3
Aerial triangulation: after confirming that the marked tie points and homonymous points are correct, click Submit Aerotriangulation on the summary page to run the aerial triangulation. In the pop-up aerial triangulation settings window there are no special computation requirements, so the parameters are left at their defaults, and the calculation is then performed.

4.2.4
Submitting the reconstruction task: after the aerial triangulation finishes, view the report to confirm that the computed parameters are acceptable. If the report shows accuracy problems, re-select points and recompute; the data accuracy must be sound, or the reconstruction will suffer from modeling problems such as holes. After checking the report, create a new reconstruction project and divide the project area to facilitate re-modeling. When the reconstruction is complete, the required UAV image data model is obtained, as shown in Figure 8. Check the model for holes, tilt, distortion, and similar problems; if they appear, the reconstruction must be repeated.

When selecting feature points, choose features that are distinct and easy to label and that also span the whole model, such as the building's frame points; here the four top house corners are selected. To ensure the accuracy of manual visual selection, the model should be enlarged and rotated to a good viewing angle before picking. ICP fine registration is an automatic iterative registration: the two point cloud models are designated the reference model and the registration model, and the number of iterations, the iteration error threshold, the point cloud overlap, and other parameters are set so that the two point clouds register well. If the registration result is poor, the registration can be repeated until an accurate result is reached. Finally, the two registered point cloud models are merged into a whole point cloud model, as shown in Figure 10.
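The ICP fine registration described above can be sketched as follows: a minimal point-to-point variant in NumPy (not the implementation of the software used in this paper), alternating nearest-neighbour correspondence with a closed-form rigid alignment:

```python
import numpy as np

def icp(src, ref, n_iter=50, tol=1e-8):
    """Minimal point-to-point ICP: match each source point to its nearest
    reference point, then solve for the rigid motion (Kabsch/SVD) that best
    aligns the pairs; repeat until the mean residual stops improving.

    src: (n, 3) registration model, ref: (m, 3) reference model.
    Returns the transformed source points and the final mean residual.
    """
    cur = src.copy()
    prev_err = np.inf
    for _ in range(n_iter):
        # nearest-neighbour correspondences (brute force for clarity)
        d = np.linalg.norm(cur[:, None, :] - ref[None, :, :], axis=2)
        nn = ref[d.argmin(axis=1)]
        # best rigid transform for the current correspondences
        pc, qc = cur - cur.mean(axis=0), nn - nn.mean(axis=0)
        U, _, Vt = np.linalg.svd(pc.T @ qc)
        D = np.eye(3)
        D[2, 2] = np.sign(np.linalg.det(Vt.T @ U.T))  # avoid reflections
        R = Vt.T @ D @ U.T
        t = nn.mean(axis=0) - R @ cur.mean(axis=0)
        cur = cur @ R.T + t
        err = np.linalg.norm(cur - nn, axis=1).mean()
        if abs(prev_err - err) < tol:   # iteration error threshold
            break
        prev_err = err
    return cur, err
```

The iteration count and error threshold here play the same role as the corresponding settings mentioned in the text; ICP only converges to the correct alignment when the coarse registration has already brought the two clouds close together.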

Point cloud generation of whole patches
The point cloud is processed with the point-processing functions in Geomagic Design X: the scanner model and accuracy are entered in the patch wizard to generate an initial whole patch. Triangular-patch processing is then performed to delete duplicated points and stray points, improving the accuracy of the patch. Finally, the patch is smoothed, optimized, and segmented into regions, producing a high-accuracy patch model, as shown in Figure 11.

Reverse modeling based on facet
First, the alignment function is used to establish a coordinate system so that the x, y, and z axes align with the building's lines, simplifying the subsequent drawing. Next, the fitted planes of the building are established by snapping points on the patch, and sketches are drawn on these planes. The surface-sketch function is then applied in the sketch window, selecting the projection plane and reference plane to draw the projection sketch; the sketch is drawn automatically and corrected manually. Finally, the distance-measurement function is used to measure each part of the solid model, and the measured values are entered when extruding the sketches into solids, as shown in Figure 12.

Model Optimization
The solid model is exported from Design X in STP format and opened in 3ds Max for optimization. Using the material editor's pick-material-from-object function, the original image material and color are picked up from the patch, and areas where the color pick is inaccurate are color-matched manually so that the model better matches the actual building. The top and side views of the building are then obtained, and patch sketches are created from the model to produce the side and top views. The sketch files are exported in DXF format, in which line-segment distances can be measured conveniently for accuracy evaluation, as shown in Figure 13.

PRECISION ANALYSIS
Distance information between pairs of points on the two groups of solid models is collected with the distance-measurement tool in Design X; the numbered measurement points are shown in Figure 14. The difference Δ between each model distance and the corresponding actual distance is obtained, and the error σ is calculated by formula (4):

    σ = √( Σ Δᵢ² / n )    (4)

Measurements are taken at 8 vertices on the surface of each of the two solid models, and the error calculations for the two groups of models are shown in Table 1. The advantage of this scheme is that the point cloud data are fused first and the fused point cloud is then reverse modeled: only one reverse-modeling file needs to be created, and the patch generated from the fused point cloud is a single whole patch, so the final solid model from the reverse modeling is cleaner. Moreover, during the fusion of the top and bottom point cloud data and the generation of the whole patch, the software optimizes the data as a whole, which helps ensure modeling accuracy.
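Formula (4) is simply the root-mean-square of the distance differences. A small stdlib-only sketch (function and variable names are ours) makes the computation explicit:

```python
import math

def model_error(model_d, actual_d):
    """Per-edge differences Δ and overall error σ = sqrt(ΣΔ²/n), as in
    formula (4).  model_d, actual_d: equal-length sequences of distances
    measured on the solid model and on the real building."""
    deltas = [m - a for m, a in zip(model_d, actual_d)]
    sigma = math.sqrt(sum(d * d for d in deltas) / len(deltas))
    return deltas, sigma
```

For example, model distances of 10.02, 9.98, 5.01, 4.99 m against actual distances of 10, 10, 5, 5 m give differences of ±0.02 and ±0.01 m and an overall error of about 0.016 m.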

SUMMARY
In this paper, a three-dimensional laser point cloud model and UAV images are fused to carry out reverse modeling. UAV oblique image data and ground laser point clouds can be fused by selecting feature points; this fusion method achieves higher registration completeness, compensating for the ground laser scanner's poor coverage of building tops and the UAV's insufficient detail at the bottoms of buildings. Combining the two builds a more complete model, and reverse modeling the fused point cloud then yields a three-dimensional solid model, making the building model more fully three-dimensional. The combination of UAV oblique photography and three-dimensional laser scanning has positive significance in many fields: the measured area is covered more comprehensively and in more detail, helping staff grasp the situation of the surveyed area more accurately, while the combined technique saves much of the labor, material, and subsequent data-processing time formerly required, reducing costs, improving production efficiency, and improving product quality. With time, the 'air-ground' data-fusion approach to model construction will gradually become the mainstream mode of model production. This paper is limited to the reverse modeling of buildings; future research should extend to the reverse modeling of roads, bridges, tunnels, and other structures and analyze its feasibility. The combined approach also has limitations: factors such as the spacing between buildings and the effective scanning range of the three-dimensional laser scanner can seriously affect the completeness of the point cloud, especially for high-rise buildings.

Figure 1 .
Figure 1.Flow of data acquisition and processing.

Figure 2
Figure 2 Black and white image before scanning.

Figure 3
Figure 3 Color image formed after scanning.

Figure 4
Figure 4 Registration point selection.

4.1.4
Point cloud model generation: SCENE automatically models the registered and verified data to obtain the required point cloud data model. Check the data model for problems such as gaps and offsets, and review the accuracy assessment report to verify that the model is accurate and complete. If the accuracy is not up to standard because of registration point-selection problems, re-register. After registration, the processed point cloud model is obtained, as shown in Figure 5.

Figure 5
Figure 5 Point cloud model.

Figure 6
Figure 6 Image import and parameter filling.
Of these three aspects, feature point extraction and matching is the most important step. Feature point selection is a computer image-processing procedure: the computer extracts the image coordinates of homonymous points, determining the same feature across different images. Image feature extraction generally depends on the location of the building, the shape of the building, and the scale of the points. Point features mainly comprise the distinct point and line features in the image, formed by the edges of linear or planar objects. The selection of feature points is shown in Figure 7.

Figure 7
Figure 7 Selection of feature points with the same name.

Figure 8
Figure 8 UAV image model.

5. DATA FUSION
The fusion of the ground 3D laser point cloud data and the UAV image data is the fusion of two point clouds. The fusion is performed as two registrations: coarse registration by feature points, followed by ICP dense-point fine registration. In the coarse registration, feature points are selected on the point cloud models in the two coordinate systems, and the models are rotated and translated into one coordinate system; ICP closest-point iteration then makes the registration of the two point cloud models more accurate. Coarse registration uses captured feature points for point cloud registration: in this experiment, corresponding feature points are manually captured on the three-dimensional laser point cloud model and the oblique-image point cloud model. The more accurately the feature points are selected visually, the higher the completeness and accuracy of the final registered model. This fusion extracts four pairs of homonymous feature points, as shown in Figure 9.

Figure 9
Figure 9 Rough registration feature points.

Figure 10
Figure 10 Fused point cloud model.

6. REVERSE MODELING

Figure 11
Figure 11 Overall patch diagram.

Figure 13
Figure 13 Model diagram after optimization.

Figure 14
Figure 14 Schematic diagram of distance measurement.

Table 1
Comparison between actual data and model data.

ACKNOWLEDGMENT
This work was supported by the Scientific Research Project of the Education Department of Liaoning Province: Research on Key Issues of Building 3D Reconstruction Based on Multi-source Data Fusion (Project No. lnjc202015).