SENSOR- AND SCENE-GUIDED INTEGRATION OF TLS AND PHOTOGRAMMETRIC POINT CLOUDS FOR LANDSLIDE MONITORING

Terrestrial and airborne 3D imaging sensors are well-suited data acquisition systems for the area-wide monitoring of landslide activity. State-of-the-art surveying techniques, such as terrestrial laser scanning (TLS) and photogrammetry based on unmanned aerial vehicle (UAV) imagery or terrestrial acquisitions, have advantages and limitations associated with their individual measurement principles. In this study we present an integration approach for 3D point clouds derived from these techniques, aiming to improve the topographic representation of landslide features while enabling a more accurate assessment of landslide-induced changes. Four expert-based rules, involving local morphometric features computed from eigenvectors, elevation and the agreement of the individual point clouds, are used to choose, within voxels of selectable size, which sensor's data to keep. Based on the integrated point clouds, digital surface models and shaded reliefs are computed. Using an image correlation technique, displacement vectors are finally derived from the multi-temporal shaded reliefs. All results show comparable patterns of landslide movement rates and directions. However, depending on the applied integration rule, differences in spatial coverage and correlation strength emerge.


INTRODUCTION
Landslide monitoring is an important means to assess landslide activity and to prevent potential impacts on residential structures. For this task, various techniques have been proposed that are suitable for quantifying landslide displacements at characteristic points, along profiles and area-wide. In particular, terrestrial and airborne 3D imaging sensors can provide area-wide information about surface characteristics and their changes over time. However, optical sensors and their various measurement principles come with specific characteristics (such as spatial, temporal, spectral and radiometric resolution) and with pros and cons that make them more or less suitable for a given monitoring task. A combination of different platform and sensor configurations may overcome the limitations of individual devices, making the integration of the resulting 3D point clouds a current research topic.
Approaches for the integration of geospatial data have been developed in various disciplines for different purposes, such as the guidance of autonomous vehicles, intelligent robotics and heritage documentation (Hall and Llinas, 1997; Luo et al., 2011; Ramos and Remondino, 2015). What they have in common is the aim to exploit the advantages of different sensors to overcome the limitations of single data sources. Also in the field of Earth observation and geoinformatics, data integration (or sensor integration) is understood as the combination of data acquired with various sensors to derive information which cannot be deduced from one sensor alone (Pohl and Genderen, 1998; Zhang, 2010; Petrasova et al., 2017). Previous studies presented approaches for integrating data from different active or passive sensors, or a combination of both, mainly to enhance the accuracy of classifiers (e.g. Solberg et al., 1994; Hu et al., 2014).
* Corresponding author: thomas.zieher@oeaw.ac.at
Data integration can be performed on three levels: (i) the raw data level (pixels in case of raster data or points in case of 3D point clouds), (ii) the feature level and (iii) the decision level. The preferred integration level depends on the purpose of the data acquisition and the considered sensors. To integrate data on the raw data level, the considered sensors must provide similar types of data (i.e. measurements of the same physical entity). Otherwise, the data must be integrated based on either extracted features (feature level) or decisions inferred from the single sensors (decision level) (Solberg et al., 1994; Hall and Llinas, 1997). The integration of 3D point clouds derived from different sensors could be accomplished on each of these levels. However, for the detection of landslide-induced changes based on multi-temporal 3D point clouds, the raw data level should be preferred, because extractable features or inferred decisions may change over time and may not be comparable. This work presents a multi-sensor and multi-temporal data integration approach, performed on a common primary data level (i.e. the 3D coordinates derived from different sensors), suitable for the area-wide assessment of landslide displacements. In this study, point clouds are derived from (i) terrestrial laser scanning (TLS) and photogrammetry using imagery acquired from (ii) an unmanned aerial vehicle (UAV) platform and (iii) terrestrial acquisitions, i.e. terrestrial photogrammetry (TerrPh). Particular attention is paid to the representation of topography by the individual sensors and the associated potential for assessing landslide-induced changes. Further attributes provided by the considered 3D imaging techniques (e.g. RGB colour-coding and target reflectance) were not exploited in this experiment. Spatial patterns of landslide displacements are assessed area-wide based on derivatives of the integrated point clouds which exploit the inherent properties of the applied acquisition techniques.
The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XLII-2, 2018. ISPRS TC II Mid-term Symposium "Towards Photogrammetry 2020", 4-7 June 2018, Riva del Garda, Italy.

STUDY AREA
The investigated landslide is located near the town of Corvara in Badia, in the Autonomous Province of Bolzano-South Tyrol (Italy; Fig. 1). It repeatedly caused damage to the national road located on top of the landslide body and potentially threatens buildings near its toe. Following the landslide classification scheme of Cruden and Varnes (1996), the Corvara landslide is a rotational earth slide–earth flow. It has been the subject of several scientific publications in the past (Soldati et al., 2004; Corsini et al., 2005; Schädler et al., 2015; Thiebes et al., 2016; Schlögel et al., 2017b). The landslide extends from approximately 1,550 to 2,100 m, affecting an area of about 1.7 km². It involves Triassic marls, clayey shales and sandstones of the La Valle and San Cassiano units. Results of periodical and permanent measurements using a differential global navigation satellite system (DGNSS) indicate movement rates of more than 20 m per year in the most active part of the landslide (Thiebes et al., 2016; Schlögel et al., 2017a). Phases of enhanced landslide activity have been related to periods of intense snow melt and/or prolonged rainfall (Corsini et al., 2005; Schädler et al., 2015). The most prominent landslide-induced changes between the data acquisitions were encountered in the north-western part of the area of interest of the data acquisition (A-AOI, approx. 500 × 300 m; Fig. 1). Therefore, the data integration and subsequent displacement analysis focus on this part of the landslide (I-AOI, approx. 130 × 100 m; Fig. 1 and 2).

MATERIALS AND METHODS
To explore the potential of integrating data of different sensors and platforms for quantifying landslide displacements, an experiment was carried out within a selected area (A-AOI) in the active upper part of the landslide (Fig. 1 and 2). The data were collected in June 2016 and June 2017 with TLS, UAV-based photogrammetry and TerrPh (Table 1). Each sensor considered in the experiment comes with individual acquisition principles and associated advantages and disadvantages (Fig. 3). A schematic diagram showing the main processing steps adopted in this study is visualized in Figure 4.

Photogrammetric point clouds
Photogrammetrically derived point clouds based on imagery acquired with a UAV typically provide area-wide coverage but are influenced by several effects that impact the accuracy of the triangulated 3D points, e.g. the geometry of the camera network configuration and errors in image orientation and multi-stereo matching. Given a set of overlapping images and a set of homologous points, automatically or manually identified in the different views, the exterior orientation parameters of the images, the interior parameters of the camera and the 3D object coordinates of the feature points are computed within a robust bundle block adjustment. Dense image matching methods are then applied to determine pixel-based correspondences, thereby deriving dense point clouds. The same holds true for point clouds derived by TerrPh, although this technique usually provides an increased spatial resolution on the one hand and a reduced spatial coverage on the other, compared to airborne/UAV photogrammetry. As a proxy for the geometric quality of the resulting points, the number of images a point is seen in and triangulated from (i.e. the number of stereo models that observe each point) has been used in previous studies (e.g. Mandlburger et al., 2017). For our experiments, after an image orientation performed in Pix4Dmapper (Pix4D, 2018), photogrammetric point clouds were generated based on the SURE workflow (Wenzel et al., 2013; nFrames, 2018). The resulting point clouds feature a mean spatial resolution of about 2.0 cm in case of the UAV-based datasets and approximately 0.5 cm for the terrestrial datasets.
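As an illustration, the stereo-model proxy described above can be used as a simple point filter. This is a minimal sketch, not the SURE implementation; the array layout (x, y, z, number of stereo models) and the function name are hypothetical.

```python
import numpy as np

# Illustrative filter using the stereo-model count as a geometric quality
# proxy. The record layout (x, y, z, n_models) is assumed for this sketch.
points = np.array([
    [0.00, 0.00, 1.20, 6],   # seen by 6 stereo models
    [0.10, 0.00, 1.30, 2],   # weakly supported point
    [0.20, 0.10, 1.10, 4],
])

def filter_by_stereo_models(pts, min_models=4):
    """Keep points triangulated from at least min_models stereo models."""
    return pts[pts[:, 3] >= min_models]

filtered = filter_by_stereo_models(points)
print(len(filtered))  # 2: the weakly supported point is rejected
```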

TLS point clouds
In contrast to photogrammetric techniques, active 3D imaging techniques such as TLS generate point clouds by time-of-flight measurements of emitted electromagnetic pulses. The resulting coverage depends on the characteristics of the study area (e.g. topography, land cover) and the field setup (e.g. number and position of viewpoints). However, TLS can penetrate vegetation and even deliver information about the ground below. TLS point clouds were acquired using a Riegl VZ-6000 laser scanner. The scanner operates with a wavelength of 1064 nm, and the scanning frequency was set to 300 kHz in June 2016 and 150 kHz in June 2017. The laser's beam divergence of 0.12 mrad leads to a theoretical footprint size of 1.2 cm at a distance of 100 m. The column and line resolutions were kept between 0.002-0.005 deg and 0.003-0.01 deg respectively, resulting in individual point clouds with up to 300 million points and an average point spacing of approximately 2.0 cm.
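The stated footprint size follows directly from the beam divergence and the measurement range. A quick sketch under the small-angle approximation (the diameter of the exit beam is neglected):

```python
# Footprint diameter of a laser beam with small divergence: the beam
# spreads roughly linearly with range (exit aperture neglected).
def footprint_m(divergence_mrad, range_m):
    """Approximate footprint diameter in metres at the given range."""
    return divergence_mrad * 1e-3 * range_m

# 0.12 mrad divergence at 100 m range gives about 0.012 m (1.2 cm),
# matching the theoretical footprint size stated above.
print(footprint_m(0.12, 100.0))
```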

Georeferencing and accuracy assessment
For georeferencing the photogrammetrically derived point clouds, ground control points (GCPs) were placed evenly in and around the study area and were measured with a differential global navigation satellite system (DGNSS). A GCP-supported bundle adjustment was then performed, followed by an accuracy assessment based on the RMSE computed on check points (CPs). In case of the point clouds acquired by TLS, five spherical targets with known radius were distributed within the study area. Their positions were also measured with a DGNSS and subsequently used to transform the point clouds. The accuracy of the TLS point clouds was assessed by computing the RMSE of point-to-plane distances on planar areas where the single scans overlap. Furthermore, the RMSE for fitting the reconstructed spheres' centres to the respective coordinates measured by DGNSS was computed. To minimize multi-sensor and multi-epoch data integration errors, a fine registration using the iterative closest point (ICP) algorithm (Lichti and Skaloud, 2010) was performed on well-distributed stable areas (roofs and bare soil surfaces), with the UAV-based point cloud acquired in 2016 acting as the master dataset. Again, the RMSE of the point cloud registration was adopted as accuracy measure. Results of the accuracy estimation for the data acquisitions in June 2016 and 2017 are listed in Table 2. All datasets were transformed to the ETRS89/UTM zone 32N projection (EPSG: 25832) with heights above the GRS80 ellipsoid.
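The point-to-plane RMSE used in the TLS accuracy check can be sketched as follows. The coordinates here are toy values; a real assessment would use points from overlapping scans against a plane fitted to a planar patch.

```python
import numpy as np

# RMSE of signed point-to-plane distances; the plane is given by a unit
# normal and a point on the plane (a minimal sketch with made-up data).
def point_to_plane_rmse(points, normal, origin):
    """RMSE of point-to-plane distances (normal must be unit length)."""
    d = (points - origin) @ normal  # signed distances to the plane
    return float(np.sqrt(np.mean(d ** 2)))

patch = np.array([[0.0, 0.0, 0.01],
                  [1.0, 0.0, -0.01],
                  [0.0, 1.0, 0.01],
                  [1.0, 1.0, -0.01]])
rmse = point_to_plane_rmse(patch, np.array([0.0, 0.0, 1.0]), np.zeros(3))
print(rmse)  # 0.01 for this synthetic patch around the plane z = 0
```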

Quality control
Sensor-specific quality measures were introduced to reject single points which could feature low positional accuracy. In case of the photogrammetrically derived point clouds, a minimum number of 4 stereo models (i.e. 5 images) was required for a point to be considered valid during triangulation. The resulting point clouds were then filtered based on the SURE approach presented in Rothermel et al. (2016). For the point clouds derived from TLS, a maximum pulse deviation threshold of 100 was introduced to reduce the positional uncertainty associated with echoes generated from large footprints. The pulse deviation is the amplitude difference of a digitized full-waveform echo compared to a device-specific, statistically fitted echo form (Pfennigbauer and Ullrich, 2010). Applying the selected threshold rejected 10% of the TLS points.
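The pulse-deviation criterion amounts to a simple threshold filter on a per-echo attribute. In this sketch, only the threshold of 100 is taken from the text; the record layout (x, y, z, pulse deviation) and the function name are hypothetical.

```python
import numpy as np

# Hypothetical TLS record layout: x, y, z, pulse_deviation. Echoes whose
# digitized waveform deviates strongly from the reference pulse shape
# (e.g. mixed or large-footprint echoes) are rejected.
tls_points = np.array([
    [0.0, 0.0, 1.0,  12.0],
    [0.1, 0.0, 1.1, 140.0],  # deviation above the threshold of 100
    [0.2, 0.1, 0.9,  80.0],
])

def filter_by_pulse_deviation(points, max_deviation=100.0):
    """Keep echoes whose pulse deviation does not exceed max_deviation."""
    return points[points[:, 3] <= max_deviation]

kept = filter_by_pulse_deviation(tls_points)
print(len(kept))  # 2
```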

Data management and visualization
For data management, a virtual point cloud based on the Extensible Markup Language (XML) was set up, allowing the individual datasets to be extracted by addressing a sensor and an epoch identifier.
To guarantee an efficient integration and management of the data, tiles of 50 × 50 m with an additional overlap of 5 m (60 × 60 m final tile size) were generated from the point clouds of each sensor and the two epochs (Fig. 5). The data management was performed with SAGA GIS (Conrad et al., 2015).
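The tiling scheme can be sketched as a generator of buffered tile bounds: 50 m core tiles with a 5 m buffer on each side, giving the 60 × 60 m final extent. The origin and extent below are illustrative only.

```python
# Sketch of the tiling scheme: 50 x 50 m core tiles buffered by 5 m on
# each side (60 x 60 m final extent). Origin and extent are made up.
def tile_bounds(xmin, ymin, xmax, ymax, size=50.0, overlap=5.0):
    """Yield (x0, y0, x1, y1) bounds of buffered square tiles."""
    x = xmin
    while x < xmax:
        y = ymin
        while y < ymax:
            yield (x - overlap, y - overlap,
                   x + size + overlap, y + size + overlap)
            y += size
        x += size

tiles = list(tile_bounds(0.0, 0.0, 100.0, 100.0))
print(len(tiles))  # 4 tiles for a 100 x 100 m extent
```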

Data integration
With regard to the data integration step, a voxel-based approach was chosen (Fig. 6a). Given a sufficient number of points (n = 20) of the different sensors within a voxel of adjustable size sv, points were evaluated according to defined rules. These rules include local morphometric features based on eigenvectors (i.e. geometric curvature and planarity), which were computed for each point within a spherical neighbourhood of a defined radius r of 0.2 m (Fig. 6b). The geometric curvature ci at point i represents the deviation of the normal vectors (estimated from the eigenvector with the smallest eigenvalue) within a defined neighbourhood and is defined as

ci = (1/k) * sum_{j=1..k} arccos(ni · nj)

where k is the number of points within the considered neighbourhood, ni is the normal vector derived for point i and nj is the normal vector at each point j within the considered neighbourhood.
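Following the verbal definition (deviation of a point's normal from its neighbours' normals), one plausible implementation averages the angles between unit normal vectors. The formula and the function name are our reading of the description, not the authors' code.

```python
import numpy as np

# One plausible implementation of the curvature measure: the mean angle
# between a point's unit normal and the unit normals of its neighbours.
def geometric_curvature(normal_i, neighbour_normals):
    """Mean angular deviation [rad] between normal_i and its neighbours."""
    dots = np.clip(neighbour_normals @ normal_i, -1.0, 1.0)
    return float(np.mean(np.arccos(dots)))

n_i = np.array([0.0, 0.0, 1.0])
flat = np.tile(n_i, (5, 1))            # all neighbour normals agree
print(geometric_curvature(n_i, flat))  # 0.0 on a perfectly flat patch
```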
The planarity pi at point i is a measure of surface roughness and is defined as

pi = |ei2| / |ei1|

where ei1 is the longest and ei2 is the second-longest eigenvector in the considered neighbourhood of point i. Four rules were tested for the data integration:
1. 'Maximum geometric curvature': data of the sensor with significantly higher geometric curvature are kept;
2. 'Maximum planarity': data of the sensor with significantly higher planarity are kept;
3. 'Minimum elevation': data of the sensor located significantly below the other sensors' data are kept;
4. 'Sensor correspondence': only data within voxels are kept where two or more sensors are represented by at least 20 points.
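The planarity feature can be sketched from the eigenvalues of the local covariance matrix; here the ratio of the second-largest to the largest eigenvalue serves as a stand-in for the eigenvector-length ratio described above (an assumption on our part). The ratio approaches 1 for planar patches and drops towards 0 for linear features.

```python
import numpy as np

# Planarity of a local neighbourhood from covariance eigenvalues:
# ratio of the second-largest to the largest eigenvalue (our stand-in
# for the eigenvector-based definition in the text).
def planarity(neighbourhood):
    """Planarity of an (n, 3) point neighbourhood."""
    eigvals = np.sort(np.linalg.eigvalsh(np.cov(neighbourhood.T)))[::-1]
    return float(eigvals[1] / eigvals[0])

rng = np.random.default_rng(0)
patch = rng.uniform(0.0, 1.0, (100, 3))
patch[:, 2] *= 0.001     # flatten z: a nearly planar patch
print(planarity(patch))  # close to 1 for this planar patch
```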
Each rule is based on a hypothesis aiming to improve the quantification of landslide-induced changes of topography. The rule for keeping points featuring a higher geometric curvature (rule 1) aims at enhancing the geometric representation of morphological features characterized by abrupt changes of the slope angle (e.g. edges of soil clods, geomorphological break lines). Such features can be used to estimate the landslide's activity by quantifying their displacement over time. By exploiting the planarity feature (rule 2), points that represent smooth surfaces are preferred. Thus, noise may be reduced and changes in such areas could be quantified more accurately. By considering the respectively lowermost points in each voxel (rule 3), the actual terrain may be approximated better, particularly in areas covered by low vegetation. Detected changes may then better reflect changes of the topography instead of changes within the vegetation. By only considering spatially close points acquired by two or more sensors (rule 4), the resulting point cloud comprises data confirmed by different sources. Outliers of single sensors are omitted and the changes derived from the integrated datasets can be considered more robust compared to those of the single sensors. To decide which data to keep in each voxel in case of rules 1, 2 and 3, a non-parametric median test (Kruskal-Wallis H test) was performed, assuming a significance level of 5%. If there are no significant differences between the sensors' data within a voxel, no integration decision is taken and all points are kept. The data integration tool was implemented in the Python 2.7 programming language and tested with four voxel sizes (sv = 0.1, 0.2, 0.5, 1.0 m) for each of the four integration rules. The results of the four rules are then used to assess changes between the data acquisitions in 2016 and 2017. For this purpose, digital surface models (DSMs) with a spatial resolution of 5 cm are derived from the integrated point clouds. From the DSMs, shaded reliefs are computed using the ambient occlusion algorithm proposed by Tarini et al. (2006). In this algorithm, multiple illumination directions from the northern sectors are considered to optimize the interpretability of the shaded relief by omitting cast shadows and enhancing the contrast. Based on the shaded reliefs, landslide-induced changes are finally assessed using an image correlation technique (IMCORR; Scambos et al., 1992) implemented in SAGA GIS.
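The per-voxel decision for rules 1-3 can be sketched as follows. This is a simplified stand-in, not the authors' Python 2.7 tool: one feature (e.g. planarity) is compared across sensors with SciPy's Kruskal-Wallis H test, and the sensor with the highest median is kept only if the difference is significant; function and variable names are ours.

```python
import numpy as np
from scipy.stats import kruskal

def decide_voxel(feature_by_sensor, alpha=0.05, min_points=20):
    """Return the sensors to keep in one voxel for a 'maximum feature' rule.

    feature_by_sensor maps a sensor name to the feature values (e.g.
    planarity) of its points inside the voxel.
    """
    groups = {s: v for s, v in feature_by_sensor.items() if len(v) >= min_points}
    if len(groups) < 2:
        # fewer than two sufficiently populated sensors: no decision taken
        return sorted(feature_by_sensor)
    _, p_value = kruskal(*groups.values())
    if p_value >= alpha:
        return sorted(feature_by_sensor)  # no significant difference: keep all
    medians = {s: float(np.median(v)) for s, v in groups.items()}
    return [max(medians, key=medians.get)]

rng = np.random.default_rng(1)
voxel = {"TLS": rng.normal(0.9, 0.01, 30), "UAV": rng.normal(0.5, 0.01, 30)}
print(decide_voxel(voxel))  # ['TLS'] wins on the clearly higher median
```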

RESULTS
The total number of voxels depends on the chosen voxel size and the extent of the data. For the 2016 acquisitions, 598.6 million points occupy 2.3 million voxels (sv = 0.2 m) within the I-AOI (Table 3).
For the 2017 acquisitions, a total of 604.[…] million points were recorded (Table 3). Owing to their area-wide coverage, UAV points are involved in many integration decisions, partly without overlapping data of the other sensors.
Figures 9 (data of 2016) and 10 (data of 2017) show the spatial distribution of the decisions on which sensor's points to keep in the voxels of size 0.2 m for the four applied integration rules. In case of the 'maximum geometric curvature' rule (Fig. 9a, Fig. 10a), the TLS points are preferred over UAV data on convex landforms. Generally, TerrPh points are preferred over the other sensors within its smaller acquisition area (see Fig. 5). Considering the 'maximum planarity' rule, mainly photogrammetric points are kept within planar areas (e.g. bare soil). Compared to the results of 2016 (Fig. 9b), distinctly more integration decisions are taken in favour of the UAV points in the data of 2017 (Fig. 10b). In the point cloud resulting from the 'minimum elevation' rule, UAV points are generally preferred over TLS and TerrPh data (Fig. 9c and 10c). Applying the 'sensor correspondence' rule leads to data gaps, particularly where only UAV points are present without coverage of TLS (Fig. 9d and 10d). Within the area covered by TerrPh, most points are kept, partly with all three sensors in agreement.
An exemplary point cloud profile (location shown in Fig. 9) demonstrates the effects of the applied integration rules (Fig. 11).
The profile shows vegetated soil clods which are gradually moving down along the concave sliding surface. In the uppermost profile (a), all acquired points from TLS and UAV are included. Profile (b) shows the remaining points after applying the 'maximum geometric curvature' rule; along convex forms, mainly TLS points are kept. Profile (c) shows the points resulting from the 'maximum planarity' rule; on planar areas, mostly UAV points are kept. Profile (d) shows the points resulting from the 'minimum elevation' rule; in case of significant differences in elevation, the lowermost points are kept. The last profile (e) shows the results of the 'sensor correspondence' rule; data gaps emerge where the sensors' point clouds disagree or fall below the defined minimum density of 20 points per voxel and sensor.
Figure 12 shows classified 2D displacement vectors derived from the image correlation technique. All results indicate a movement with a median of 0.42 m in south-west direction within one year. The area around the active part does not show consistent displacement patterns considering a minimum displacement rate of 0.25 m. In the south-eastern part of the affected area, enhanced movement of up to 1.5 m in north-west direction is evident. Probably erroneous correlations are found particularly in the north-eastern part inside and outside the I-AOI. Minor differences emerge between the spatial coverage of the displacement vectors when comparing the results of the integration rules 'maximum geometric curvature' (Fig. 12a), 'maximum planarity' (Fig. 12b) and 'minimum elevation' (Fig. 12c). The results of the 'sensor correspondence' rule (Fig. 12d) show data gaps and erroneous correlations in those areas where the applied imaging techniques did not support each other.
A detailed view of the area with distinctly higher displacement rates is presented in Figure 13. All results show similar patterns of direction and magnitude of the displacement. However, minor differences between the spatial coverage of the derived displacement vectors are noticeable. The results based on the 'maximum geometric curvature' (Fig. 13a), 'maximum planarity' (Fig. 13b) and 'minimum elevation' (Fig. 13c) rules feature slightly different vector patterns which are likely related to the different topographic representation of the landslide features. The most complete spatial coverage is provided by the results of the 'sensor correspondence' rule (Fig. 13d), at the cost of more potentially erroneous correlations in the south-eastern part, outside the active area of the landslide.

DISCUSSION
The proposed integration approach strives to improve the topographic representation of landslide features in order to enhance the quality of the assessment of landslide-induced changes. Since no reference data were available, only a qualitative comparison of the results was possible. Hence, the performance of the applied integration rules cannot be quantified with the current setup.
The results after applying the 'maximum geometric curvature' rule suggest that convex features are better described by TLS. This may also indicate that in photogrammetric point clouds such features tend to be represented more smoothly. This behaviour is also confirmed by the results of the 'maximum planarity' rule, where on planar areas mostly photogrammetric points are kept. Applying the 'minimum elevation' rule, mostly UAV points are kept. This could result from the different viewing angles of the acquisition techniques: the lateral viewing angle of the terrestrial photographs and the low incidence angle of TLS may not provide measurements of the terrain itself or close to the terrain. This could be improved using airborne or UAV-based laser scanning systems. The point cloud integrated using the 'sensor correspondence' rule contains holes where the applied imaging techniques do not support each other. However, the remaining data may be of higher quality since they are confirmed by the different techniques.
The general patterns of landslide displacement magnitude and direction derived from the integrated point clouds are in agreement. However, each integrated point cloud based on the individually tested rules focusses on different characteristics. Resulting variations regarding the spatial coverage of the derived displacement vectors may be related to the different topographic representation of the landslide features. However, further investigations are necessary to quantify and validate these effects.
The implemented integration approach considers points within a voxel when applying expert-based rules for the data selection, without considering an adaptive local neighbourhood. Such 'hard borders' could lead to data artefacts and inconsistencies. These effects could be resolved by extending the neighbourhood and introducing a distance-dependent weighting factor when making the decision of which data to keep. Further improvement of the integration approach could involve individual integration rules for different land cover types, which could be classified beforehand and exploited to optimize the respective representation in the data.
Beyond that, a succession of integration rules could be applied to further improve the topographic properties of the integrated point clouds.

CONCLUSIONS
The integration approach presented in this study applies expert-based rules to improve the topographic representation of point clouds acquired with different 3D imaging techniques. Based on the resulting integrated point clouds, shaded reliefs were computed to assess landslide-induced changes using an image correlation technique.
It was shown that, depending on the applied integration rule, the assessment of landslide displacements based on an image correlation technique can vary and may have different implications.
Depending on the predominant topographic features of a landslide, a different integration rule may lead to better spatial coverage of the resulting landslide displacement vectors or provide more accurate results in terms of displacement magnitude and direction.
The proposed integration technique was tested in a small area of interest of a highly active deep-seated landslide. The approach could also be scaled to the whole landslide, given suitable 3D imaging platforms (e.g. airborne/terrestrial laser scanning and airborne photogrammetry). Depending on the form and scale of the targeted landslide features, the voxel size and the neighbourhood size for the eigenvector calculation should be adapted accordingly.
If reference data are collected in the field (e.g. by means of systematic DGNSS or tachymeter measurements), the integration rules can be trained and validated in the future. Further improvement of the integration approach and a quantitative validation of the derived displacement vectors may help to increase the efficiency of multi-sensor landslide monitoring.

Figure 1. Location of the landslide near Corvara (South Tyrol, Italy) and the two areas of interest for data acquisition (A-AOI; red) and data integration (I-AOI; green).

Figure 2. Photo of the area of interest (I-AOI) showing morphological features of the landslide.

Figure 4. Workflow showing the main processing steps.

Figure 5. Setup of the two field campaigns, showing the coverage of the UAV flights in 2016 (a) and 2017 (d), TLS acquisitions in 2016 (b) and 2017 (e) and TerrPh acquisitions in 2016 (c) and 2017 (f). Superimposed, the tiling system is shown.

Figure 6. Sketches of the voxel-based integration approach with cubic voxels of size sv (a) and the computation of eigenvectors within a defined spherical neighbourhood of radius r (b).

Figure 9. Spatial distribution of the integration decisions for the data of 2016 (sv = 0.2 m) for the applied integration rules 'maximum geometric curvature' (a), 'maximum planarity' (b), 'minimum elevation' (c) and 'sensor correspondence' (d). The marked point cloud profile is shown in Fig. 11.

Figure 11. Point cloud profile of the data acquired in 2016 (3 m wide; location indicated in Fig. 9) showing examples of the merged point cloud (a) and results after applying the integration rules 'maximum geometric curvature' (b), 'maximum planarity' (c), 'minimum elevation' (d) and 'sensor correspondence' (e).

Table 1. Sensor and dataset specifications for the UAV and terrestrial data acquisitions in June 2016 and June 2017.

Table 2. Accuracy assessment for the data acquisitions in June 2016 and 2017, expressed as: single standard deviation of the DGNSS measurements; RMSE (XY/Z) [m] on CPs after image triangulation; RMSE [m] of point-to-plane distances of the single laser scans / RMSE [m] for fitting the reconstructed spheres' centres to the respective coordinates measured by DGNSS; and RMSE [m] of the multi-epoch point cloud registration.

Table 3. Number of points (millions) acquired by the employed 3D acquisition techniques within the two areas of interest.