Volume XL-7/W2
Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XL-7/W2, 201-206, 2013
https://doi.org/10.5194/isprsarchives-XL-7-W2-201-2013
© Author(s) 2013. This work is distributed under
the Creative Commons Attribution 3.0 License.

29 Oct 2013

An Interactive Graphical User Interface for Maritime Security Services

T. Reize, R. Müller, and R. Kiefl
  • German Aerospace Center (DLR), Earth Observation Center (EOC), Oberpfaffenhofen, Germany

Keywords: Graphical User Interface (GUI), NASA Worldwind, Geo-Mapping, Geographical Information Systems (GIS), Earth Observation (EO) Data Visualization

Abstract. In order to analyse optical satellite images for maritime security issues in Near-Real-Time (NRT), an interactive graphical user interface (GUI) based on NASA World Wind was developed and is presented in this article. With this tool, targets or activities can be detected, measured and classified quickly and easily.

The service uses optical satellite images, currently from six sensors: WorldView-1, WorldView-2, IKONOS, QuickBird, GeoEye-1 and EROS-B. The GUI can also handle SAR images, airborne images or UAV images. Software configurations are provided in a job-order file, so all preparation tasks, such as image installation, are performed fully automatically. The imagery can be overlaid with vessels derived by an automatic detection processor. These potential vessels can be zoomed to with a single click and sorted with an adapted method. Further object properties, such as vessel type or confidence level of identification, can be added manually by the operator. The heading angle can be refined by dragging the vessel's head, or flipped by 180° with a single click. Further vessels or other relevant objects can be added. An object's length, width, heading and position are calculated automatically from three clicks: on its top, its bottom and an arbitrary point on one of its longer sides. In the case of an Activity Detection, the detected objects can be grouped into areas of interest (AOIs) and classified according to the ordered activities. After quality control and any necessary corrections, all relevant information is written to an exchange file. If required, image thumbnails can be cut around single objects or around whole areas of interest and saved as separate, geo-referenced images.
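The three-click measurement can be illustrated with a short geometric sketch. This is not the DLR implementation; the function name, the point order (top, bottom, side) and the assumption of a projected metric coordinate system are ours:

```python
import math

def measure_object(top, bottom, side):
    """Estimate length, width, heading and centre position of an
    elongated object (e.g. a vessel) from three clicked points:
    the top, the bottom, and an arbitrary point on one of the
    longer sides.  Points are (x, y) in a projected metric CRS
    with x pointing east and y pointing north."""
    dx = top[0] - bottom[0]
    dy = top[1] - bottom[1]
    # Length: distance between the top and bottom clicks.
    length = math.hypot(dx, dy)
    # Heading: clockwise angle from north of the bottom-to-top vector.
    heading = math.degrees(math.atan2(dx, dy)) % 360.0
    # Width: twice the perpendicular distance of the side point
    # from the top-bottom centre line (2D cross-product formula).
    cross = dx * (side[1] - bottom[1]) - dy * (side[0] - bottom[0])
    width = 2.0 * abs(cross) / length
    # Position: midpoint of the centre line.
    position = ((top[0] + bottom[0]) / 2.0, (top[1] + bottom[1]) / 2.0)
    return length, width, heading, position
```

For a vessel clicked at top (0, 50), bottom (0, 0) and side point (5, 25), this yields a length of 50 m, a width of 10 m, a heading of 0° (due north) and a centre position of (0, 25).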