Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLII-2, 1149-1156, 2018
https://doi.org/10.5194/isprs-archives-XLII-2-1149-2018
© Author(s) 2018. This work is distributed under
the Creative Commons Attribution 4.0 License.

  30 May 2018

FOCUSING ON OUT-OF-FOCUS: ASSESSING DEFOCUS ESTIMATION ALGORITHMS FOR THE BENEFIT OF AUTOMATED IMAGE MASKING

G. J. Verhoeven
Ludwig Boltzmann Institute for Archaeological Prospection & Virtual Archaeology, Franz-Klein-Gasse 1, 1190 Vienna, Austria

Keywords: Automated masking, Defocus estimation, Depth of field, Edge extraction, Image-based modelling, Out-of-focus blur

Abstract. Acquiring photographs as input for an image-based modelling pipeline is less trivial than often assumed. Photographs should be correctly exposed, cover the subject sufficiently from all possible angles, have the required spatial resolution, be devoid of any motion blur, exhibit accurate focus and feature an adequate depth of field. The last four characteristics all determine the “sharpness” of an image, and the photogrammetric, computer vision and hybrid photogrammetric computer vision communities all assume that the object to be modelled is depicted “acceptably” sharp throughout the whole image collection. Although none of these three fields has ever properly quantified “acceptably sharp”, it is more or less standard practice to mask those image portions that appear unsharp due to the limited depth of field around the plane of focus (whether this concerns blurry object parts or completely out-of-focus backgrounds). This paper assesses how well or ill suited defocus estimation algorithms are for automatically masking a series of photographs, since this could speed up modelling pipelines comprising many hundreds or thousands of photographs. To that end, the paper uses five different real-world datasets and compares the output of three state-of-the-art edge-based defocus estimators. Critical comments and plans for future work conclude the paper.
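To make the idea of defocus-based masking concrete, the following is a minimal, hypothetical sketch (not one of the three estimators compared in this paper): it scores each image block by the variance of a discrete Laplacian, a crude local-sharpness measure, and keeps only blocks whose score exceeds a threshold. The function name, block size and threshold are illustrative assumptions.

```python
import numpy as np

def sharpness_mask(image, block=16, threshold=5.0):
    """Build a coarse in-focus mask from per-block Laplacian variance.

    image: 2-D float array (grayscale). Returns a boolean mask on the
    image grid; True marks blocks judged to be in focus.
    """
    # Discrete 4-neighbour Laplacian as a local-contrast (focus) proxy.
    # Note: np.roll wraps at the borders, which is acceptable for a sketch.
    lap = (np.roll(image, 1, 0) + np.roll(image, -1, 0)
           + np.roll(image, 1, 1) + np.roll(image, -1, 1) - 4.0 * image)
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            tile = lap[y:y + block, x:x + block]
            # A block is kept when its Laplacian variance exceeds the
            # threshold; smooth (defocused) regions score low.
            mask[y:y + block, x:x + block] = tile.var() > threshold
    return mask
```

In a masking workflow, the resulting boolean array would be saved as a binary mask image alongside each photograph so that blurred regions are excluded from feature extraction and dense matching. Real edge-based defocus estimators, such as those evaluated here, instead measure the spread of detected edges and propagate a per-pixel defocus amount.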