POINT CLOUD TRANSFORMATION USING SENSOR CALIBRATION INFORMATION FOR MAP DATA ADJUSTMENT

In order to operate autonomous vehicles and unmanned delivery vehicles, it is important to accurately acquire the location of the device itself. However, since these devices are mainly operated in urban areas, there is a limit to obtaining location information based on GNSS alone. Therefore, it is necessary to calibrate the device's own location information by measuring reference points provided by an existing high-precision map of the region. Point-cloud-based multi-dimensional high-precision maps of infrastructure such as roads are acquired in advance using high-performance LiDAR and GNSS devices, and provide reference points for autonomous driving or map updating. Since such high-performance surveying equipment is costly, it is difficult to attach to autonomous vehicles or unmanned vehicles for commercialization. Autonomous vehicles and unmanned delivery vehicles are therefore operated with relatively low-performance LiDAR and GNSS, so it is often impossible to measure the reference points accurately, which directly reduces the accuracy of the device's location information. To compensate for this, this study proposes a point interpolation method to extract GCP information from sparse point cloud maps acquired with low-performance LiDAR. The proposed method uses calibration parameters between the point data and the image data acquired from the device. In general, images provide higher resolution than point clouds, even when using low-end cameras, so the position of point coordinates relative to a reference point can be measured relatively accurately from the image and the projected point cloud. The data acquisition vehicle is an MMS vehicle that provides a panoramic image using four DSLRs and a point cloud with a Velodyne VLP-16. The researchers first conducted a reference point survey on features such as road signs.
The panoramic image including the road sign was transformed into a bird's-eye view, and point projection was performed on the bird's-eye-view image. The reference point coordinates, which were not captured by the point cloud, were obtained from the shape of the road sign in the bird's-eye-view image, and the accuracy was compared with the measured data.


INTRODUCTION
Construction of multi-channel spatial information has become an essential element in the construction of smart cities (Gruen, 2013; Roche, 2014). This is because accurate spatial information is essential to implement infrastructure management, a core element of smart cities, and to operate unmanned moving objects (Um, 2017; Xie et al., 2019). However, urban spatial information such as roads changes from moment to moment. Because dynamic city changes cannot be controlled, mapping update vehicles must be continuously operated in the city to keep the spatial information up to date at all times. These mapping update vehicles measure changes in the city and reflect them in reference data that were constructed very precisely with terrestrial LiDAR. A number of mapping update vehicles therefore operate at all times to detect changes in the city (Anjomshoaa et al., 2018). The mapping update vehicle is equipped with a LiDAR for point clouds, an image sensor, and GNSS/INS equipment. The core data is the point cloud that captures the spatial information, which has a much lower point density than the initial reference data due to the characteristics of a mobile mapping update vehicle.
Occasional occlusion of GNSS signals occurs in urban areas. In particular, GNSS occlusion is severe in sections such as under bridges and overpasses, so the mapping accuracy of a vehicle updating the map based on GNSS positions decreases (Groves et al., 2012; Wang et al., 2013; Zhu et al., 2018). Therefore, in preparation for such cases, the mapping update vehicle measures, during operation, the same points as the reference points previously surveyed in the reference data, and corrects its location information by comparing the measured values with the reference values. These reference points are constructed as points, and the correction amount is accurate when they are surveyed very precisely point by point. However, because a mobile mapping system acquires data while driving, the limited point density of the equipment and its rapid movement often prevent accurate point-by-point measurement of reference point locations. Therefore, if point-by-point measurement of a reference point is not possible, a process of supplementing the reference point through interpolation from the surrounding coordinates is necessary. By acquiring reference points through interpolation, the accuracy of the map constructed by the mapping update vehicle is expected to improve.

Data acquisition
Data acquisition was performed along bicycle roads in Seoul, Korea, using our own MMS (Hong et al., 2017). Among the acquired MMS point clouds, data from areas where the GNSS accuracy of the point cloud decreases while passing under a bridge were extracted. As shown below, 44 GNSS points were surveyed in an area where GNSS is unstable due to occlusion under piers and similar structures, and compared with the coordinates obtained by the MMS.

Methods
In this study, calibration information between the MMS devices and previously acquired reference point coordinates was used as a supplementary method of reference point surveying for map update vehicles. The MMS acquires various data such as images, GNSS positions, and point clouds from the sensors on board, and these data can be fused using the MMS bore-sight and lever-arm calibration. The body frame (B) and the map frame (L) can be related mathematically through rotations and translations among the coordinate systems. The coordinate systems of the image and the LiDAR are integrated based on the INS coordinate system, which is then projected onto the map coordinate system. The mathematical model expressing this geometric relationship can be defined by Equation (1). All the sensors in the MMS were time-synchronized, so that they acquired data almost simultaneously as the MMS moved. These simultaneous data acquisitions occur at a frequency of 10 Hz and form a set that records the spatial data concretely. A set of GNSS/INS, camera image, and LiDAR point cloud data acquired at the same moment can be integrated within the same frame. The point cloud is linked to the GNSS data and measured in absolute coordinates, which are projected onto image pixels. Conversely, therefore, if the absolute coordinate value of an arbitrary position acquired in the image is known, it is possible to interpolate the error of the point cloud by comparison with the coordinate value of the point cloud acquired by the device's LiDAR.
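Equation (1) is referenced but not reproduced in this excerpt. A standard form of the direct-georeferencing model it describes, with symbol names assumed here rather than taken from the paper, is:

r^L_P = r^L_B(t) + R^L_B(t) ( a^B + R^B_s r^s_P )     (1)

where r^s_P is a point measured in the sensor (camera or LiDAR) frame, R^B_s and a^B are the bore-sight rotation and lever-arm offset from the sensor frame to the body frame B, and R^L_B(t) and r^L_B(t) are the GNSS/INS-derived rotation and position of the body frame in the map frame L at acquisition time t.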
Therefore, even if the mounted LiDAR does not measure the reference point accurately and thus fails to acquire it, its point coordinates can be estimated by interpolation from the location of the previously acquired reference point visible in a high-resolution image. Each individually acquired data value is integrated and expressed as shown in Figure 3. By projecting the point cloud with absolute coordinates onto the images, the absolute coordinates of the desired objects can be obtained. The workflow is shown in Figure 4 below. The LiDAR and the camera attached to the MMS move along the road and proceed with the mapping. In the meantime, the coordinates of target points already surveyed with a precise GNSS device are also acquired. The error between the coordinates acquired by the LiDAR and the previously acquired GNSS references is confirmed through a visual error check using point projection between the image and the point cloud. After assuming that the coordinate values of the reference points acquired by the LiDAR contain errors, the transformation coefficients between the two sets of reference point data were calculated through least-squares (LESS) analysis against the values previously acquired by the precise GNSS device, and then applied to correct the remaining mapped point values. The transformation is modelled as a 3D transformation in a homogeneous coordinate system. Of the 44 target points, 33 were used as GCPs and the remaining 11 points (point numbers 5, 9, 13, 17, 21, 25, 29, 33, 37, 41) were used as checkpoints to assess the improvement in data accuracy.
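The paper estimates the 3D transformation between the GCP pairs by least squares, but the exact parameterization (e.g. whether a scale factor is included in the homogeneous transformation) is not given in this excerpt. The following is a minimal sketch assuming a rigid (rotation + translation) model fitted with the SVD-based least-squares solution; all function names are hypothetical:

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Least-squares rigid fit mapping src points onto dst points.

    src, dst: (N, 3) arrays of matched reference-point coordinates
    (e.g. MMS-measured GCPs and their precise GNSS counterparts).
    Returns (R, t) such that dst ~= src @ R.T + t.
    """
    c_src = src.mean(axis=0)
    c_dst = dst.mean(axis=0)
    # Cross-covariance of the centred point sets
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the least-squares solution
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t

def apply_transform(R, t, pts):
    """Apply the estimated correction to remaining mapped points."""
    return pts @ R.T + t
```

In the workflow above, `src` would hold the 33 GCP coordinates measured by the MMS LiDAR, `dst` the corresponding precise GNSS coordinates, and `apply_transform` would then correct the 11 checkpoints and the rest of the mapped cloud.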

RESULT
Before data fusion and coordinate adjustment through image projection, the measurement error of the MMS for the 33 reference points was 0.0967 m; after correction of the point cloud coordinates through coordinate adjustment, the error for the 11 checkpoints was reduced to 0.053 m. The data used are shown in the tables below. Table 1 and Table 2 show the coordinates of the GCPs obtained by GNSS and MMS, respectively.
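The exact error metric behind the 0.0967 m and 0.053 m figures is not stated in this excerpt; a common choice is the mean 3D Euclidean distance between matched coordinate pairs, sketched below with hypothetical names:

```python
import numpy as np

def mean_3d_error(measured, reference):
    """Mean Euclidean distance (metres) between matched coordinate pairs.

    measured, reference: (N, 3) arrays of E, N, H coordinates, e.g.
    MMS-derived checkpoint coordinates vs. precise GNSS coordinates.
    """
    return float(np.mean(np.linalg.norm(measured - reference, axis=1)))
```

The before/after comparison in the text corresponds to evaluating this metric once on the uncorrected GCP coordinates and once on the checkpoint coordinates after the transformation is applied.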

CONCLUSION
It is very important to accurately determine the current location of the device in order to ensure the safe operation of autonomous vehicles and the high positioning accuracy of mapping vehicles, such as an MMS that creates a high-definition base map. However, it is difficult to obtain accurate location information when a device passes through a GNSS-occluded area, such as under a bridge or beside a building. In this study, sensor calibration information was used to reduce the data error when operating an MMS that acquires data while driving. The proposed method uses calibration parameters between the point data and the image data acquired from the device. In general, images provide higher resolution than point clouds, even when using low-end cameras, so the position of point coordinates relative to a reference point can be measured relatively accurately from the image and the projected point cloud. The point cloud coordinate error that occurs when acquiring data with the MMS was corrected by applying the transformation estimated between pairs of reference point coordinates acquired by a high-precision GNSS and by the MMS-mounted LiDAR. The pairing of the two coordinate data sets was performed through projection of the image and point cloud using the sensor calibration information, namely the bore-sight and lever-arm calibration. As a result, the data acquisition error of 0.0967 m before correction was reduced to 0.053 m.
However, the proposed method has the limitation that the reference coordinates of the Ground Control Points must be acquired in advance and properly managed so that the method can utilize this information. If automation of reference point detection is achieved through further research, such as image matching, it is expected to contribute to automatic map updating using MMS.