AN INTEGRATED RAPID MAPPING SYSTEM FOR DISASTER MANAGEMENT

Natural disasters as well as major man-made incidents are an increasingly serious threat to civil society. Effective, fast and coordinated disaster management crucially depends on the availability of a real-time situation picture of the affected area. However, in situ situation assessment from the ground is usually time-consuming and of limited scope, especially when dealing with large or inaccessible areas. A rapid mapping system based on aerial images can enable fast and effective assessment and analysis of medium- to large-scale disaster situations. This paper presents an integrated rapid mapping system that is particularly designed for real-time applications where comparatively large areas have to be recorded in a short time. The system includes a lightweight camera system suitable for UAV applications and a software tool for generating aerial maps from recorded sensor data within minutes after landing. In particular, the paper describes which sensors are applied and how they are operated, and outlines the procedure by which the aerial map is generated from image data and additional gathered sensor data.


INTRODUCTION AND MOTIVATION
Humanitarian relief in case of major disasters is a global task. Several international organizations and institutes take specific actions for its coordination and execution.

GLOBAL ORGANIZATION
The Office for the Coordination of Humanitarian Affairs (OCHA) is part of the United Nations (UN) Secretariat and is responsible for bringing together humanitarian actors to ensure a coherent response to emergencies such as earthquakes, typhoons or conflicts. OCHA's mission is to mobilize and coordinate effective humanitarian action in partnership with national and international actors in order to alleviate human suffering in disasters and emergencies. In the event of a major disaster, OCHA ascertains the humanitarian needs of the affected country and coordinates international relief actions. It also provides rapid-response tools such as the United Nations Disaster Assessment and Coordination (UNDAC) system and the International Search and Rescue Advisory Group (INSARAG).
INSARAG deals with urban search and rescue (USAR) related issues, aiming to establish minimum international standards for USAR teams and a methodology for international coordination in earthquake response. UNDAC is part of the international emergency response system for sudden-onset emergencies, and its assessment teams can be deployed at short notice (12-48 hours) anywhere in the world. They are provided free of charge to disaster-affected countries. In case of a sudden-onset emergency, UNDAC installs an On-Site Operations Coordination Centre (OSOCC) and a Reception/Departure Centre (RDC) for coordinating all subsequent relief actions. Through communication between international responders and the Local Emergency Management Authorities (LEMA), all available information is gathered to assess the current situation of the disaster-affected areas. The Global Disaster Alert and Coordination System (GDACS) is used to share this information. GDACS is a cooperation framework that includes disaster managers and disaster information systems worldwide and aims at filling the information and coordination gap in the first phase after major disasters. GDACS provides real-time access to web-based disaster information systems and related coordination tools. I.S.A.R. Germany Stiftung GmbH is an INSARAG-certified non-profit organization which can deploy first-responder, search-and-rescue (SAR) and medical teams to disaster-affected areas at short notice. Furthermore, it is able to set up an OSOCC and RDC and is experienced in coordinating relief actions in the first phase after major disasters.

DISASTER ASSESSMENT
Effective coordination requires reliable and up-to-date information based on the assessment of the disaster-affected areas. In addition to the information given by LEMA, further technologies are used. Freely accessible maps (e.g. Google Maps, OpenStreetMap, MapAction) enable the identification of access routes and populated areas (Center for Satellite Based Crisis Information (ZKI), 2015). Those maps give a pre-disaster overview and are also used for navigation. The usage of up-to-date satellite data depends on the availability of appropriate satellite imagery services, current weather conditions and the infrastructure available on site for accessing those maps (Voigt et al., 2007). In major scenarios helicopter systems are commonly used, although their operation is expensive and relies on existing infrastructure (Römer et al., 2013).
The assessment is often done by car, which is usually time-consuming due to degraded infrastructure. In the last few years, small UAV camera systems have been increasingly used for mapping and monitoring disaster-affected areas (Swiss Foundation for Mine Action (FSD), 2016a). Monitoring is often done by capturing single aerial images or video streams and transmitting the live view directly to the ground; it is mainly used for assessing damage to roads and buildings (DuPlessis, 2016b, Alschner et al., 2016, Swiss Foundation for Mine Action (FSD), 2016b). Lightweight vertical take-off and landing (VTOL) UAVs (e.g. MikroKopter, AscTec Falcon8, DJI Phantom, DJI Inspire) provide a mean flight time of 20 minutes per battery and can be used within a range of up to 5 km. This makes them ideal tools for assessing the close surroundings from a bird's-eye view (Meier, 2015, McFarland, 2015).

UAV-BASED MAPPING - STATE OF PLAY
Several software tools enable the generation of maps or even 3D point clouds out of (more or less arbitrary) aerial imagery (e.g. Agisoft Photoscan, Pix4D, Capturing Reality). Their underlying structure from motion (SFM) process is relatively time- and resource-consuming due to extensive image analysis and matching (Tomasi and Kanade, 1992, Pollefeys et al., 1999). With small fixed-wing UAVs (e.g. SenseFly eBee, Trimble UX5) large areas can be covered, but post-processing times of several hours have to be taken into account (Swiss Foundation for Mine Action (FSD), 2016a).
Current assessment operations commonly use small rotorcraft UAVs due to their vertical take-off and landing capabilities. On the other hand, such UAVs have a rather limited operation time and cruise speed; as a result, these systems are not capable of capturing larger areas within a short time. Fixed-wing UAVs need flat space for landing, which may be scarce in e.g. destroyed urban or mountainous areas, but, in contrast to rotorcraft UAVs, they allow for mapping and monitoring larger surroundings. In recent years, VTOL fixed-wing UAVs have become available (e.g. Quantum Systems, Germandrones, Wingcopter) which combine the strengths of both concepts.
UAVs are an ideal tool for accessing remote areas, but the current solutions for generating maps are not fast enough to add value to tactical decision-making in the first stage of emergency response. This paper presents a camera system and post-processing tool chain that enables both capturing large areas and the subsequent fast and automated generation of high-resolution interactive maps. It is a joint development of the German Aerospace Center (DLR) and I.S.A.R. Germany. By using a VTOL fixed-wing UAV with a maximum take-off weight (MTOW) of less than 10 kg, the complete setup forms a compact and portable system with the ability to capture large areas and deliver maps within an hour. Our aim is to provide scaled maps on a daily basis to enable the assessment of remote areas for identifying damage to populated areas and remaining access routes. By providing those maps to GDACS we hope to substantially improve and support international relief actions.

UAV CAMERA SYSTEM
The DLR MACS (modular airborne camera system; www.dlr.de/MACS) UAV camera system basically consists of a geometrically and radiometrically calibrated industrial 16 MPx camera, a high-end GNSS (global navigation satellite system) L1/L2 receiver in combination with an industrial-grade inertial measurement unit (IMU), and an embedded computing board with an exchangeable CFast storage module. The modular design is based on preceding aerial camera systems developed by the German Aerospace Center (DLR) (Lehmann et al., 2011).
The GNSS is laid out as a dual-antenna system to improve the orientation accuracy and allow for very fast attitude initialization. Superior light sensitivity, fast readout electronics and data handling allow for short integration times and high frame rates, which enables the system to be used on fast-flying carriers. The complete camera system weighs about 1.4 kg and can be used even in small UAVs with an MTOW of less than 10 kg. The camera system is able to operate fully autonomously. All configuration settings are defined prior to the flight and contain, among other things, trigger frequency, exposure times and, optionally, recording areas. After initial configuration the camera system operates without any further human interaction. Within the defined target area the system triggers the camera sensor and records all captured image data to the storage module. All subsidiary sensor data such as GNSS/IMU measurements are recorded as well. The current set-up with a 64 GB storage module allows recording up to about 2,900 images or, in terms of duration and depending on trigger frequency, a net recording time from 20 minutes up to about 60 minutes. The system is capable of using CFast storage modules of up to 512 GB.
Camera and lens are designed for an operational flight altitude between 100 and 500 metres above ground level (AGL) and a flight speed of up to 120 km/h. The corresponding ground sampling distance is then between about 1.5 cm (swath width ∼72 m) and 7.5 cm (swath width ∼360 m) per pixel. The maximum area that can be captured within a single flight depends on the storage capacity and the overlap between adjacent footprints, which in turn depends on flight altitude, flight speed, trigger frequency and the distance between adjacent flight strips. Assuming an average image overlap of 60%, a 64 GB storage module allows for capturing areas of up to 3.8 km² at 100 m AGL, or up to about 92 km² at 500 m AGL.
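The relationship between these quantities can be illustrated with a back-of-the-envelope estimate. This is a hypothetical sketch, not the authors' calculation: the 72 m × 50 m footprint at 100 m AGL assumes roughly 4,800 × 3,300 pixels at 1.5 cm ground sampling distance, and the image count and overlap figures are taken from the text.

```python
def capture_area_km2(n_images, footprint_m2, avg_overlap=0.6):
    """Rough capture-area estimate: each image contributes only the
    non-overlapping fraction of its ground footprint."""
    return n_images * footprint_m2 * (1.0 - avg_overlap) / 1e6

# e.g. 2,900 images with an assumed 72 m x 50 m footprint at 60% average overlap
area = capture_area_km2(2900, 72 * 50)  # roughly 4 square kilometres
```

This simple model lands in the same ballpark as the ~3.8 km² quoted for 100 m AGL; the exact figure additionally depends on how the overlap is distributed between the along-track and cross-track directions.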
A core prerequisite for the rapid mapping method presented here is the real-time tagging of each recorded aerial image with the camera's position and orientation exactly at the time of exposure. For this purpose, a hardware synchronization between camera sensor and GNSS/IMU enables the accurate capture and fusion of the current GPS time, location and orientation of the camera system for every single aerial image. The real-time GNSS/IMU data in combination with the images serve as the basis for immediate map generation directly after landing.

RAPID MAPPING TECHNOLOGY
After landing, the recorded images and sensor data can immediately be used for generating a quick mosaic of the recorded area.

Intersection algorithm
The following prerequisites are required for the presented rapid mapping procedure:
• elevation model of the recorded target area (i.e. R² ⇒ R, specifying an elevation for every two-dimensional geographic coordinate within the target area)
• interior orientation of the applied camera (i.e. primarily focal length and sensor size)
• aerial images with position and orientation for every single image
• optionally: boresight axis and angle, specifying the spatial relationship between camera sensor and IMU
• optionally: radiometric correction model of the applied camera
• optionally: geometric correction model of the applied camera

The basic principle is the spatial intersection of image rays (i.e. rays of the corresponding pinhole camera in space) with the elevation model. For every single aerial image, its interior and exterior orientation and, optionally, the boresight alignment exactly determine each pixel's ray in space (see figure 2). The intersection between the rays and the elevation model can then be calculated either within a (geo-)spherical or a Cartesian coordinate system; the latter is given e.g. by the aerial images' corresponding UTM (universal transverse Mercator) zone (Snyder, 1987). In general, derivatives cannot be determined for a given elevation model. Hence analytic solutions or numerical approaches like Newton's method cannot be applied for determining a ray's intersection point with the elevation model. We therefore propose an iterative approach that works with arbitrary models: Assume a ray R starting at point S (i.e. the position of the camera at the time of exposure) with a particular direction resulting from the aforementioned conditions (i.e. the line of sight of a particular pixel). Ray R is divided into equidistant sections defining sampling points {r0, r1, ...}, where r0 = S. The section length depends on a reasonable sampling distance, given by the elevation model's spatial resolution.
Starting at r0 = S, for each sampling point ri ∈ {r0, r1, ...} do:
• compute the vertical distance ∆z(ri) = z(ri) − h(ri), where z(ri) is the z component (i.e. the height) of ri and h(ri) is the elevation model's height at ri
• stop iterating if ∆z(ri) < 0 (i.e. ri lies below the height model, thus the ray has intersected the elevation model)

Assume iteration n fulfils the abort criterion. The intersection point IR between ray R and the elevation model is then given by ∆z-inverse-weighted linear interpolation between the last two sampling points rn−1 and rn.
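The iterative intersection can be sketched as follows. This is an illustrative implementation, not the authors' code; the elevation model is assumed to be available as a callable h(x, y):

```python
import numpy as np

def intersect_ray_with_dem(origin, direction, dem_height, step, max_steps=100000):
    """Sample along a ray in equidistant steps until it drops below the
    elevation model, then locate the crossing point by dz-inverse-weighted
    linear interpolation between the last two samples."""
    origin = np.asarray(origin, dtype=float)
    direction = np.asarray(direction, dtype=float)
    direction = direction / np.linalg.norm(direction)

    prev_point = origin
    prev_dz = origin[2] - dem_height(origin[0], origin[1])   # dz(r0)
    for n in range(1, max_steps):
        point = origin + n * step * direction                # sampling point rn
        dz = point[2] - dem_height(point[0], point[1])       # dz(rn) = z(rn) - h(rn)
        if dz < 0:
            # Ray crossed the terrain between rn-1 and rn:
            # interpolate, weighted by the inverse vertical distances.
            w = prev_dz / (prev_dz - dz)
            return prev_point + w * (point - prev_point)
        prev_point, prev_dz = point, dz
    return None  # no intersection found within max_steps
```

For a flat elevation model h ≡ 0 and a camera at 100 m height looking 45° downward, the sketch returns an intersection point 100 m ahead at ground level, as expected.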

PROJECTIVE MAPPING
In order to draw an aerial image onto a map (i.e. a geo-referenced coordinate system), we determine a projective mapping for every single image. This mapping consists of a homogeneous transformation matrix that specifies an R² ⇒ R² mapping from the image's pixel coordinate system to the four-sided footprint polygon and thus to the targeted geo-referenced coordinate system (i.e. the map). The calculation of this 3x3 matrix basically derives from the solution of a linear system of equations formed by the relationship between the four corner coordinates in both coordinate systems (Heckbert, 1989). It finally defines where to draw source pixels coming from the source image into the map (Hartley and Zisserman, 2004). Such a projective mapping sample is shown in figure 4. The mapping quality can additionally be enhanced by radiometric and geometric correction of the aerial images (Weng et al., 1992, Kannala and Brandt, 2006, Kelcey and Lucieer, 2012). This requires appropriate calibration procedures for the applied camera. Whilst geometric correction increases the image-interior projection accuracy, a radiometric correction may in particular reduce vignetting effects and thus smooth the radiometric characteristics of the mosaic.
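As an illustrative sketch (not the authors' implementation), the 3x3 matrix can be obtained by solving the 8x8 linear system formed by the four corner correspondences, with the ninth matrix entry fixed to 1:

```python
import numpy as np

def projective_mapping(src, dst):
    """3x3 homography mapping four pixel corners (src) onto the four
    geo-referenced footprint corners (dst), via the linear system formed
    by the corner correspondences (cf. Heckbert, 1989)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        # u = (h0 x + h1 y + h2) / (h6 x + h7 y + 1), analogously for v
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.asarray(A, float), np.asarray(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp(H, x, y):
    """Apply the homography to a single pixel coordinate."""
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w
```

Mapping the four image corners through the resulting matrix reproduces the four footprint corners exactly; all interior pixels are interpolated projectively.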

Accuracy
Despite the highly simplified projection, which is based on interpolation between just the four projected polygon corners, the resulting mosaic provides a quite accurate aerial map in smooth terrain (see figure 6).
However, the resulting aerial map may contain projection errors. The following four points mainly influence the overall quality and positional accuracy of the resulting mosaic:
• accuracy and resolution of the applied elevation model
• projective errors and masking effects caused by raised objects; this primarily affects the projection of man-made objects like buildings or towers, especially if they are not covered by the elevation model
• inaccuracies of the position and attitude sensors; roll and pitch angle errors in particular may cause significant positional projection inaccuracies
• projection errors resulting from the linear interpolation within the projection of each single aerial image
These sources of error may lead to positional errors (horizontal shifts), masking effects and visual discontinuities (breaking edges) between adjacent projected images. The positional accuracy of the proposed approach has been evaluated in (Brauchle et al., 2017), which states a horizontal position error of less than 2 metres for a set-up with a standard utility aircraft, a flight altitude of 2,600 ft (approx. 780 m) above ground and comparable camera and sensors. The accuracy assessment for the set-up with a small UAV platform will be carried out in further tests and evaluations.

APPLICATION AND RESULTS
The presented rapid mapping system is going to be evaluated for search-and-rescue missions within the international rescue exercise ACHILLES 2017, led by the UN in Switzerland (NRZ, 2017). The VTOL fixed-wing UAV Songbird of Germandrones was chosen as the carrier platform (see figure 5). Since it provides a net operation time of about 60 minutes and supports payloads of up to 1.4 kg at an average cruise speed of 80 km/h, it fulfils all essential requirements of the rescue exercise. It provides fully autonomous operation and thus flights even beyond the line of sight. In combination with the proposed DLR MACS aerial camera system, several square kilometres can be covered quickly. In the context of the preceding trainings and the upcoming exercise, the camera system was integrated into the UAV and several test flights were performed at the training base Weeze on 25 March 2017. The altitude was limited to 250 m due to legal restrictions (BMVI, 2016). Table 1 summarizes the key figures of two sample flights.
After each performed flight, a quick mosaic of the captured area was generated. On average, the complete mosaic was available within about 20 minutes after landing, with the main share of this time spent on copying the aerial images.
The quick mosaic is rendered within a specially developed software tool (see figure 6), which provides several GIS functionalities, e.g. metadata handling, measurement functions and the export of scaled maps for subsequent processing. The tool projects the images onto a preinstalled elevation model (…, 2000), which provides near-global elevation data, can be used freely and, with its size of approx. 16 GB, is comparatively lightweight even for mobile applications.
The mosaic of Flight 2 was exported as a GeoTIFF image in full resolution within 12 minutes. As part of the exercise by I.S.A.R. Germany, this data was used by the management for assessing the disaster-affected area. Damaged buildings, access routes, helicopter landing spots and suitable locations for the Base of Operations (BoO) were clearly identified. The mosaic was used as an additional GIS layer for coordinating relief actions. Furthermore, geo-referenced maps and single images of potential search-and-rescue sites were added to the Virtual OSOCC of GDACS for detailed on-site assessment. Finally, the maps were printed and used on tablet computers (using the PathAway app) by the assessment teams for navigating to these sites.

CONCLUSION AND OUTLOOK
The work presented here constitutes a first rapid mapping system for search-and-rescue teams. It has the potential to significantly change and improve disaster management and first-response operations.
Nonetheless, several aspects may improve the overall system performance. Current work focusses on enhancing the geometric projection onto the elevation model by using more than the four corner rays for intersection. This will in particular reduce projection errors within more complexly structured regions like mountains or steep slopes. Another subject is image matching and its application for reducing discontinuities between adjacent images, thus improving the overall positional quality of the resulting aerial map. A long-term goal is the extraction of 3D information of the captured area as fast as possible. This could avoid the need for a preliminary elevation model or even generate a higher-resolution one.
Regarding hardware, we evaluate different camera sensors providing higher light sensitivity for low-light or even night applications, or providing additional spectral bands for e.g. thermal visualization of treated areas (DuPlessis, 2016a). Finally, we examine data link options, both for remote control and telemetry data of the camera system (narrowband links) as well as for real-time image transmission (broadband links). Data links and their application for real-time aerial image transmission in standard utility aircraft operations were already surveyed in (Bucher et al., 2016). Based on these findings we will evaluate their integration into the present prototype.
Another aim is the optimization of image recording based on the actual ground speed. The targeted overlap between consecutively captured images exactly defines the optimum trigger timing. Hence, ground-speed-aware or, respectively, position-aware triggering could optimize the resulting vantage points.
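The underlying relation is straightforward; a minimal sketch (hypothetical function name and example figures, not the system's firmware) of such a ground-speed-aware trigger interval:

```python
def trigger_interval_s(footprint_along_m, target_overlap, ground_speed_ms):
    """Seconds between exposures such that consecutive images overlap
    by the targeted fraction at the current ground speed."""
    advance_m = footprint_along_m * (1.0 - target_overlap)  # new ground per image
    return advance_m / ground_speed_ms

# e.g. an assumed 50 m along-track footprint, 60% overlap, 80 km/h cruise speed
interval = trigger_interval_s(50.0, 0.6, 80 / 3.6)  # about 0.9 s between exposures
```

A position-aware variant would instead trigger whenever the integrated ground track since the last exposure exceeds the same advance distance, making the spacing robust against wind-induced speed changes.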
Within ACHILLES 2017 we will evaluate the prototype over three days in cooperation with additional actors in order to gather further requirements. Our aim is to build an appropriate solution which can be used by I.S.A.R. Germany in real disaster relief missions.

Figure 1. DLR MACS UAV camera system. Upper right: assembled prototype. Lower left: CAD model with UAV silhouette.

Figure 2. Pinhole camera in space: interior orientation (left side) and exterior orientation (right side) define the rays of the camera sensor.

Figure 3. Determination of the intersection: gradual evaluation of the vertical distance between the sampling points rn and their elevation h(rn) leads to the (approximated) intersection point IR.

Figure 4. Projective mapping: an aerial image and its perspective projection by application of the corresponding transformation matrix.

Figure 6. Quick mosaic of a captured area comprising about 800,000 m². The visualized scene is rendered out of 1,093 single aerial images (approx. 20 GB of image data) with an average ground resolution of about 3.5 cm. A detail view of the yellow rectangle section is shown in figure 7.

Table 1. Key figures of the performed test flights.