SENSOR EVALUATION FOR CRACK DETECTION IN CONCRETE BRIDGES

Bridges are among the most critical traffic infrastructure objects and therefore need to be monitored at regular intervals. Nowadays, this monitoring is performed manually by visual inspection. In recent projects, the authors are developing automated crack detection systems to support the inspector. In this pre-study, different sensors, namely several camera systems for photogrammetry, a laser scanner, and a laser triangulation system, are evaluated for crack detection based on a defined required minimum crack width of 0.2 mm. The test object is a blasted concrete plate, sized 70 cm × 70 cm × 5 cm and placed in an outdoor environment. The results of the data acquisition with the different sensors are point clouds, which makes the results comparable. The point cloud from the chosen laser scanner is not sufficient for the required crack width even at a low speed of 1 m/s. The RGB or intensity information of the photogrammetric point clouds, even those based on a low-cost smartphone camera, contains the targeted cracks. The authors advise against using only the 3D information of the photogrammetric point clouds for crack detection due to noise. The laser triangulation system delivers the best results in both intensity and 3D information. The low weight of camera systems makes photogrammetry the preferred method for an unmanned aerial vehicle (UAV). In the future, the authors aim for crack detection based on the 2D images, automated by using machine learning, and crack localisation by using structure from motion (SfM) or a positioning system.


INTRODUCTION
In Germany, there are more than 39,720 bridges with a total surface of more than 30.8 million m² (Bast, 2019). More than 87 % are concrete or pre-stressed concrete bridges (Bast, 2019). For all bridges, their safety has to be guaranteed. Trained and skilled inspectors document the condition of the bridges by hand according to the inspection procedure for bridges based on DIN 1076, cf. (DIN 1076:1999-11, 1999). As defined there, the main inspection and the smaller intermediate inspection must be conducted at intervals of 6 years and 3 years, respectively. One important part is crack inspection. The inspectors have to identify cracks and measure the crack width and length by using a crack scale card. Furthermore, it is suggested to outline suspicious crack patterns in a sketch.
Manual crack inspection is time-consuming and cost-intensive, especially for bridges with difficult accessibility. Moreover, it is based on subjective decisions. This can lead to mistakes and risks and decreases comparability. Therefore, it would be beneficial to support the inspectors with an automatized approach.
The authors are developing, together with their project partners, a demonstrator to increase the degree of automation for the evaluation of the stability of bridges. Our goal is the automatized sensor-based data acquisition of the bridge's surface with subsequent crack detection and the creation of a 3D crack map. This data will be used for finite element method (FEM) simulations and evaluations to determine the bridge's stability. Especially for building information modeling (BIM) in the field of civil engineering, it is still not clear which sensor data format is best suited to include inspection information.
The first crucial part of the authors' work is finding an applicable sensor system that can be integrated on a mobile (kinematic) system to acquire 2D- or 3D-data of the bridge. Based on DIN EN 1992-2 (DIN EN 1992-2:2010-12, 2010), the inspection of cracks with a minimum width of 0.2 mm is required. Therefore, the selected sensor system must fulfil this requirement. The authors decided to compare different camera systems, a laser scanner, and a laser triangulation system. Representative of a reinforced concrete bridge with cracks, a reinforced concrete plate with diverse cracks is used as a test object. Both the 2D-part (if applicable) and the 3D-part of the sensor data are analyzed concerning crack acquisition and detection.

RELATED WORK
Due to the high relevance of crack detection on concrete structures, diverse research has been conducted in this field. However, most of the work focuses on crack detection algorithms for sensor data and not on the choice of and trade-off between possible sensor systems. Prasanna et al. (2016) use a crack detection algorithm based on curve fitting and machine learning classification. Their sensors for the data acquisition are two Canon Rebel T3i digital single-lens reflex (DSLR) cameras with Canon EFS 18-55 mm f/3.5-5.6 lenses mounted on a robot. Mohan and Poobal (2018) review 50 papers related to crack detection in general. They present the processing techniques used for camera, video, IR-based, ultrasonic, laser, time-of-flight diffraction, microwave, and radar images. However, they indicate that there is no fully automated algorithm to detect all kinds of cracks. Furthermore, Sham et al. (2008) use flash thermography (FT), a short-duration pulsed thermography, for crack detection on concrete surfaces. They use a test plate with cracks as a test object.
However, they state that cracks with a width below 0.5 mm require water as a stimulus. Popescu et al. (2019) analyze the use of terrestrial laser scanning (TLS), close-range photogrammetry (CRP), and infrared scanning (IS) for the 3D-reconstruction of concrete bridges. For photogrammetry, they take photos by hand and with an unmanned aerial vehicle (UAV). They see a benefit in RGB over intensity data since RGB data are more true-to-life. Moreover, they note that both TLS and CRP deliver denser point clouds than IS. Giri and Kharkovsky (2016) use a laser triangulation method for detecting a 0.7 mm crack in a concrete test cylinder by measuring the displacement of a laser spot. Lastly, Truong-Hong and Laefer (2019) present the state of the art in laser scanning for bridge inspection. They refer to Laefer et al. (2014) who state that experiments with a Trimble GS200 terrestrial laser scanner showed a dependency of the minimum detectable crack width on scanning angle, sampling step, distance, and laser spot size. Moreover, they stated that the sampling step must be half of the crack width. For a sampling step of 0.8 mm, short-distance scanning was required. Hence, based on these findings, a crack width of 0.2 mm would require a sampling step of 0.1 mm.
In previous works, single sensor systems were used or the results of multiple papers were compared. However, a visual comparison of different sensor data under the same conditions with regard to crack detection is missing. In this work, the authors concentrate on optical remote sensor systems, which have the potential to automatize the data acquisition of bridges by being mounted on a UAV, a robot, or a car. Laser scanners and laser triangulation systems are already used for road pavement inspection, while camera systems are a low-weight option already used for bridge inspection. In this paper, the focus is more on the sensor data than on possible post-processing algorithms, helping to choose the right sensor system. The authors use point clouds due to the difficulty of comparing the different outputs of the chosen systems. However, the authors also test whether point clouds are the right format for crack inspection using photogrammetry.

Camera systems
One possible system to detect cracks in concrete is a camera. The authors use the low-cost ISOCELL S5K2L2 camera built into a Samsung Galaxy S8. Besides, a digital single-lens mirrorless (DSLM) camera is used for this test. Due to their low weight, DSLM cameras are favored over DSLR cameras, especially when the camera is used on a UAV platform. In this case, the authors used a Fujifilm XT-10 with a Fujinon XC 16-50 mm lens. Furthermore, an industrial camera, the monochrome Manta G-419B from Allied Vision, is used.

Laser scanner
As a laser scanner, the PPS-Plus from Fraunhofer IPM is used, which works on the phase-shift principle and was originally developed for pavement profile scanning. It is mounted on a car at a minimum object distance of 1.3 m. The PPS-Plus has a 3D-resolution of 4.5 mm × 28 mm and a 2D-resolution of 1.2 mm × 1.7 mm at a distance of 3 m and a speed of 80 km/h by measuring 1800 points per profile with a frequency of 1 MHz, cf. (Fraunhofer IPM, 2019) and (Reiterer et al., 2016). The point cloud is created and georeferenced by using a navigation system consisting of a GNSS receiver, an inertial navigation system (INS), and odometry.

Laser triangulation system
In this work, the authors use a laser triangulation system consisting of the Sick Ranger3, cf. (Sick AG, 2020), an eye-safe Class 1 red laser, a 25 mm lens, and an f/1.8 aperture. The Sick Ranger3 is equipped with a CMOS sensor. This sensor has a pixel size of 6 µm × 6 µm, and the camera can acquire 46,000 3D-profiles per second. The resolution is given as 0.1 mm/pixel in x- and y-direction and 1.7 mm/pixel in z-direction. The scanning rate can be chosen depending on the expected scanning velocity or triggered based on distance.

Test scenario
The test object is a reinforced concrete plate with a size of 70 cm × 70 cm × 5 cm as shown in Figure 1. Some cracks appear to be extreme. However, due to its crack pattern, it includes a variety of crack widths ranging from 0.1 mm to a few centimeters. Moreover, it contains multiple cracks with the required width of 0.2 mm, making it a useful reference to compare the different sensor systems. Except for the laser triangulation measurement, which took place indoors but with daylight, the plate is placed outdoors for realistic bridge conditions.

Data acquisition
4.2.1 Camera system For capturing cracks with a passive camera system, an external light source like daylight or extra illumination is required. For both dimensions of the image, at least two pixels per smallest feature size are required as given by Andor (2019). Therefore, the minimum resolution r is defined as r = 2 · FOV / s with the field of view (FOV) and the size of the smallest feature s. The FOV is defined as FOV = w · d / f with the sensor width w, the working distance d, and the focal length f. For a crack width of 0.2 mm as the smallest feature, the maximum FOV is approximately 30 × 30 cm for a resolution of 3024 × 3024 pixels and a ground sampling distance (GSD) of 0.1 mm.
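The two relations above can be checked numerically. The following minimal sketch uses illustrative sensor parameters (the sensor width, working distance, and focal length are assumptions, not the exact test setup):

```python
def fov_mm(sensor_width_mm, distance_mm, focal_mm):
    """Field of view: FOV = w * d / f."""
    return sensor_width_mm * distance_mm / focal_mm

def min_resolution_px(fov, feature_mm):
    """Minimum resolution r = 2 * FOV / s (two pixels per smallest feature)."""
    return 2 * fov / feature_mm

# Illustrative values only: a 0.2 mm crack at 0.1 mm GSD with 3024 pixels
# across the field of view gives FOV = 0.1 mm * 3024 ~ 302 mm, i.e. ~30 cm.
fov = fov_mm(7.0, 432.0, 10.0)        # ~302.4 mm (assumed w, d, f)
r = min_resolution_px(fov, 0.2)       # ~3024 px, matching the text
```

This confirms that a 3024 × 3024 pixel image can cover at most roughly 30 × 30 cm while keeping two pixels on a 0.2 mm crack.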
The point clouds are created by using a photogrammetric pipeline with structure from motion (SfM) of the open-source program Regard3D, cf. (Hiestand, 2019). In Table 1, the investigated setups are listed. The authors collected data with the three cameras named in chapter 3.1; with the Fujifilm, images were collected twice, once in the sun (called Fuji sun) and once in the shadow (Fuji shadow). The images were taken at a distance between 300 and 1000 mm. Complete and dense photogrammetric results require many images with different camera poses. A main drawback of SfM is that it requires further sensors or reference points to obtain an absolute scale. For this work, the absolute dimensions of the plate were known, allowing the point cloud to be scaled to 70 cm width. For real bridges, either distances between placed reference points, known structural dimensions, or absolute camera poses obtained by using a sensor platform with positioning sensors can be used.
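Scaling an SfM point cloud by a known reference distance, as done here with the plate width, amounts to a single scalar multiplication. A minimal sketch (the corner coordinates are hypothetical):

```python
import numpy as np

def scale_point_cloud(points, measured_extent, true_extent_mm=700.0):
    """Scale an unscaled SfM point cloud (N x 3, arbitrary model units)
    so that a known reference distance (here: the 700 mm plate width)
    becomes metrically correct."""
    return points * (true_extent_mm / measured_extent)

# Hypothetical example: two picked plate corners are 1.37 model units apart.
pts = np.array([[0.0, 0.0, 0.0], [1.37, 0.0, 0.0]])
measured = np.linalg.norm(pts[1] - pts[0])
pts_mm = scale_point_cloud(pts, measured)  # corners now 700 mm apart
```

For a real bridge, `measured_extent` would come from reference targets or known structural dimensions instead of the plate width.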

Laser scanner
In contrast to cameras, laser scanners are independent of an external light source. The quality of the point cloud created with a mobile laser scanner depends on the scanner's resolution, scanning velocity, the accuracy of the INS, and object distance. The smallest detectable feature is also dependent on laser beamwidth and power. The used laser scanner has a beam power of approximately 200 mW with a beamwidth of about 1 mm in the scanning direction. Therefore, only part of the beam enters a 0.2 mm crack. To measure small crack widths of 0.2 mm, the authors scanned the concrete plate with the PPS-Plus at 3.2 m distance, a car speed below 1 m/s, and 800 profiles/s for a GSD of 1.25 mm in 3D and 0.078 mm in 2D. With the trajectory, recorded with the positioning system from Applanix, and the calibration parameters, the georeferenced point cloud, and the 2D-intensity image are processed.
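The along-track GSD of a profile scanner follows directly from vehicle speed and profile rate; a small sketch reproducing the 1.25 mm value from the text (the in-profile helper is a generic relation, not a PPS-Plus specification):

```python
def along_track_gsd_mm(speed_m_s, profiles_per_s):
    """Spacing between consecutive scan profiles in mm:
    distance travelled per profile."""
    return speed_m_s * 1000.0 / profiles_per_s

def in_profile_gsd_mm(swath_mm, points_per_profile):
    """Generic point spacing within one profile (swath / points)."""
    return swath_mm / points_per_profile

# Values from the text: 1 m/s at 800 profiles/s -> 1.25 mm along track.
gsd_3d = along_track_gsd_mm(1.0, 800)
```

This also shows why the car speed had to be kept below 1 m/s: at highway speed the profile spacing grows proportionally and far exceeds the crack width.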

Laser triangulation system
The laser triangulation system is partly independent of external light. If the ambient light is diffuse, the laser triangulation system works without difficulty. The system consists of a camera and a laser line generator. The camera and the laser unit are mounted at a predefined base distance. This base and the angle between the laser beam and the center axis of the camera define the maximum image field height. With the help of the CMOS sensor, the projected laser line is recorded. From the position of the reflected laser line on the CMOS sensor, 3D-points can be calculated. The measuring setup is shown in Figure 2.
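The triangulation principle can be sketched to first order: a depth change of the surface shifts the imaged laser line laterally on the sensor, and the shift scales with the triangulation angle. This is a simplified geometric sketch with illustrative parameters, not the Ranger3's calibrated Hi3D model:

```python
import math

def depth_change_mm(pixel_shift, pixel_size_mm, focal_mm,
                    distance_mm, angle_deg):
    """First-order laser triangulation: convert a lateral shift of the
    imaged laser line (in pixels) into a depth change of the surface.
    Assumes the laser is perpendicular to the surface and the camera
    views it at angle_deg; small-angle sketch only."""
    shift_on_sensor_mm = pixel_shift * pixel_size_mm
    lateral_on_object_mm = shift_on_sensor_mm * distance_mm / focal_mm
    return lateral_on_object_mm / math.tan(math.radians(angle_deg))

# Illustrative: 6 um pixels, 25 mm lens, 600 mm distance, 33 deg angle.
dz = depth_change_mm(1, 0.006, 25.0, 600.0, 33.0)  # depth per pixel shift
```

The larger the triangulation angle and the longer the focal length, the smaller the depth change one pixel of line shift corresponds to, i.e. the finer the depth resolution.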
For scanning a multidimensional object, the scanner has to be moved along the object. In our setup, the authors moved the concrete plate at ca. 500 mm/s on a lift truck. The laser is perpendicular to the plate at a distance of ca. 500 mm. The camera is positioned at a distance of ca. 600 mm to the object with an angle of ca. 33° and a baseline of ca. 450 mm to the laser line. The laser beamwidth of 1 mm is similar to that of the phase-shift laser scanner. However, the Ranger3 system calculates the center of the laser line per scan in each column using a sub-pixel method called Hi3D. This means that the depth information is given for each center pixel with a GSD of 0.1 mm, resulting from the scanning speed of ca. 500 mm/s and the scanning frequency of 5000 Hz. The acquired results are shown in Figure 7 and Figure 8. The result of the laser triangulation system is depicted in Figure 9.

Test results and discussion
The first impression shows that the same concrete plate appears in different colors or intensities. For the RGB cameras, this is due to the different lighting conditions in sun and shadow or can be due to the different sensors' quantum efficiency. The photogrammetric point clouds have gaps which depend on the feature matching settings in Regard3D. Even the point cloud from the S8 smartphone camera contains the overall crack pattern and delivers, considering its low cost, a detailed result. The point clouds obtained by laser scanning and laser triangulation have the benefit of constant scanning without gaps.
For a more detailed view of the acquired data concerning crack detection, the authors measured multiple cracks with 0.2 mm width with a crack scale card. For each crack, the sections from the different sensor data are depicted in comparison in Figure 10 and Figure 11. The results are sorted with ascending quality. The 0.2 mm cracks in the PPS-Plus 3D-data are slightly visible but not sufficient for detection. On the contrary, all cracks are visible in the photogrammetric point clouds, even with the smartphone camera. This leads to the assumption that an image-based crack detection algorithm will fulfil the requirements for crack detection. The images in shadow still allow a human to identify all cracks. This will be important in case bridge inspection takes place on a cloudy day or in case the bridge or the mobile inspection system casts a shadow on the bridge surface. Despite the bigger gaps in the Fuji sun data, some cracks are better visible, which can be due to shadows created within the cracks. The shadow depends on the crack's orientation and the direction of the sun. The results of the Manta G-419B are good in spite of the lower resolution and the manual focus. The best result is achieved by the Sick Ranger3 system. The crack pattern is sharp, contains no gaps, and the contrast of the intensity is high.
The next comparison is based on the density properties of the created point clouds. The results are depicted in Figure 12. The authors notice that the local point density for photogrammetry depends on the camera poses and their distribution. Furthermore, the laser scanner and the laser triangulation system deliver constant densities as expected. Photogrammetry could offer similar results if the camera system is moved parallel to the plate's surface with a constant distance between the images and sufficient overlap. Based on a required resolution of 0.1 mm to contain cracks with 0.2 mm width, the minimum point density should be 100 points/cm². The laser scanner with a mean point density of 100 points/cm² is followed by photogrammetry with 300 to 500 points/cm² and the laser triangulation system with 3600 points/cm².
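One simple way to obtain such mean density figures is to bin the XY coordinates of a point cloud into 1 cm × 1 cm cells and average the counts of the occupied cells. A sketch of this approach (the synthetic grid is an assumption for illustration, not the measured data):

```python
import numpy as np

def mean_density_per_cm2(points_xy_mm, cell_mm=10.0):
    """Estimate mean point density (points/cm^2) by binning XY
    coordinates (in mm) into 1 cm x 1 cm cells and averaging the
    counts of occupied cells."""
    cells = np.floor(np.asarray(points_xy_mm) / cell_mm).astype(int)
    _, counts = np.unique(cells, axis=0, return_counts=True)
    return counts.mean()

# Synthetic check: a regular 1 mm grid over 20 mm x 20 mm
# should yield 100 points per cm^2 cell.
xx, yy = np.meshgrid(np.arange(0, 20, 1.0), np.arange(0, 20, 1.0))
density = mean_density_per_cm2(np.column_stack([xx.ravel(), yy.ravel()]))
```

Averaging only over occupied cells avoids biasing the estimate by empty regions outside the scanned plate; a local (per-cell) view of the same counts would reveal the pose-dependent density variations of photogrammetry.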
Besides the point cloud density and intensity, the quality of the point cloud with respect to crack detection also depends on the pure 3D-quality. The authors perform a first test by setting a threshold for the depth gradient, where the depth is defined perpendicular to the plate surface. The goal is not to find the best threshold value but to compare the results from the different sensor systems as shown in Figure 13. The threshold value was set manually to 10 %. The user must consider that noise in point clouds can also result in higher gradients. The authors notice that the laser triangulation system delivers the best result, followed by the different camera systems. The PPS-Plus still measures the 3D-data of the major cracks but not the cracks with a width of 0.2 mm. In both laser triangulation and photogrammetric data, cracks with a width of 0.2 mm are visible in the thresholded data, but the signal-to-noise ratio is low. This makes crack detection based on the 3D-data difficult; however, a fusion with intensity or RGB data can be possible.
Figure 10. Comparison of intensity point clouds for three different cracks (columns) in the following order from top to bottom: crack location, crack detail, PPS-Plus, S8, Fuji shadow, Fuji sun, G-419B, Ranger3.
Figure 11. Comparison of intensity point clouds for three different cracks (columns) in the following order from top to bottom: crack location, crack detail, PPS-Plus, S8, Fuji shadow, Fuji sun, G-419B, Ranger3.
The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XLIII-B2-2020, 2020 XXIV ISPRS Congress (2020 edition). This contribution has been peer-reviewed. https://doi.org/10.5194/isprs-archives-XLIII-B2-2020-1107-2020 | © Authors 2020. CC BY 4.0 License.
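The depth-gradient test described above can be sketched on a gridded depth map. This is a simplified illustration of the idea (gridded data, relative threshold on the depth range), not the exact processing applied to the point clouds:

```python
import numpy as np

def crack_candidates(depth_map_mm, rel_threshold=0.10):
    """Flag pixels whose depth gradient magnitude exceeds a fraction
    of the total depth range, mirroring the manual 10 % threshold.
    Note: sensor noise also produces high gradients, so this mask
    contains false positives on noisy data."""
    gy, gx = np.gradient(depth_map_mm)
    grad = np.hypot(gx, gy)
    depth_range = depth_map_mm.max() - depth_map_mm.min()
    return grad > rel_threshold * depth_range

# Toy example: a flat 10 x 10 surface with one 1 mm deep groove.
depth = np.zeros((10, 10))
depth[:, 5] = -1.0
mask = crack_candidates(depth)  # True along the groove edges
```

In practice the point cloud would first be resampled onto such a grid perpendicular to the plate surface; fusing this mask with the intensity or RGB channel is the combination suggested in the text.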
Lastly, besides the evaluation based on the test data, the authors compare the different sensor options based on cost, weight, acquisition speed, processing time, need and effect of external light, and recording distance. The overview is given in Table 2.

FUTURE WORK
To facilitate the inspection process, mobile platforms could be used. One very popular kind of vehicle is a UAV. So in a first test, the authors used a DJI Phantom 4 UAV with its integrated camera to capture images of the concrete plate, as shown in Figure 14. Hereby, 27 images were recorded, framed to contain no meadow so that the well-textured grass could not aid the reconstruction. To determine and output the camera poses as illustrated in Figure 15, the authors used the freeware Meshroom, cf. (AliceVision, 2019), instead of Regard3D. As a benefit, the calculation of the dense point cloud, which can take a long time, is not required for camera pose estimation. The results show that a concrete surface with cracks is sufficient to determine the camera poses via SfM. Moreover, despite the high wind conditions during the testing leading to a restless UAV, the images have low motion blur and still represent cracks with 0.2 mm width sharply enough. It can be shown that most of the features are exactly the points of the cracks to be detected. Since the rest of the concrete plate has low texture, the probability of a crack being excluded from the point cloud is low. The result of the textured mesh created with Meshroom is illustrated in Figure 16.
To get from the test plate back to a real bridge, the authors recorded a small railway bridge as shown in Figure 17 with the S8 smartphone camera. Figure 18 presents the photogrammetric results using Regard3D. The first findings are similar to the outcomes of (Popescu et al., 2019). The quality of the point cloud and the computation time depend on the settings for feature matching, densification, and the number of images.
Furthermore, for this small bridge, there are already areas with limited accessibility. As suggested in (Popescu et al., 2019), the combination of ground-based and aerial images can solve this problem unless areas are too narrow to allow e.g. a UAV to access them. Especially for the underside of the bridge, a UAV with a camera or laser scanning system will improve accessibility.
Figure 17. Small railway bridge (left) and its underside (right).
Figure 18. Point cloud of a small railway bridge created with Regard3D using Samsung S8 camera images.
For the chosen bridge, the point cloud contains fewer points at the underside, as one can see through the bridge deck in Figure 18. The authors assume this is due to the low texture resulting from the good condition of the concrete between the green structures visible in the right image of Figure 17. A big concrete surface with no cracks and very low texture could lead to problems when using SfM, as images that do not contain enough features could be excluded. This can be critical if there is only one crack in a surface that is otherwise in good condition.
In future work, the authors will develop an "autonomous mobile robotic monitoring system for large-scale structures" (Amy), cf. (Reiterer, 2019). It will be used for the inspection and monitoring of infrastructure elements. Amy will consist of a mobile platform (robot vehicle), various sensors (including cameras and laser scanners), and the corresponding data analysis software. The system will be modular and equipped with open interfaces. In the medium term, Amy will be interconnected with other systems. The collaboration with other robots, especially a UAV, will be in the focus of the development.
Moreover, the crack detection shall be automatized using image processing, classification, semantic segmentation based on neural networks, or an effective combination. Both existing and new annotated data sets shall be used. Subsequently, the detected cracks shall be transformed into a georeferenced point cloud or mesh representing the position and width of the cracks. By using a projection of the crack map onto an existing bridge model, used for BIM, further processing or simulation can be conducted.
Based on the photogrammetric results, containing gaps in point clouds and insufficient point cloud density depending on texture features, the authors suggest that crack detection based on images is better than detection based on point cloud data. However, the 2D-data do not contain absolute measures to determine the crack width. Here, the scaled 3D-data can help to determine absolute distance measures for pixels representing a crack in the 2D-data. The 2D-data could be projected onto a 3D-mesh which was previously registered with the point cloud. This could be helpful if there is an already existing model of the bridge. An alternative would be using two cameras in a stereo configuration. The known baseline distance would result in the correct scale of the 3D-data and absolute measures for the point cloud.
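The stereo alternative mentioned above rests on the classical relation Z = f · b / d: with a known baseline, disparity yields metric depth and hence an absolute scale. A minimal sketch with illustrative camera parameters (pixel size, focal length, and baseline are assumptions):

```python
def stereo_depth_mm(disparity_px, pixel_size_mm, focal_mm, baseline_mm):
    """Classical rectified-stereo relation Z = f * b / d.
    A known baseline b turns disparity into metric depth,
    providing the absolute scale the SfM result lacks."""
    disparity_mm = disparity_px * pixel_size_mm
    return focal_mm * baseline_mm / disparity_mm

# Illustrative: 5 um pixels, 25 mm lens, 100 mm baseline,
# 100 px disparity -> 5 m depth.
z = stereo_depth_mm(100, 0.005, 25.0, 100.0)
```

With metric depth per pixel, the GSD at that depth follows directly, so crack widths measured in pixels can be converted to millimeters.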
In future work, the authors will work on creating a 3D crack map for bridges. When using a camera system, it will be crucial to know the camera poses of the images. The poses can be either derived by using SfM or by using additional information from the positioning system on the sensor platform. When thinking of combining images taken manually by inspectors and images taken by a UAV, the poses obtained by using both SfM and a positioning system can be combined or even fused.

CONCLUSION
In this work, the authors presented various sensor systems, which could be used for crack detection on concrete surfaces.