The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Articles | Volume XLIII-B2-2022
30 May 2022


I. Asl Sabbaghian Hokmabadi and N. El-Sheimy

Keywords: Silhouette-based 3D Reconstruction, Probabilistic 3D Reconstruction, Dense Reconstruction, 3D Occupancy Grids, Virtual Bounding Box, Fisher’s Linear Discriminant Function

Abstract. Digital three-dimensional (3D) reconstruction of objects has many applications in computer vision, archaeology, and the entertainment industry. It can be used to preserve the appearance of valuable historical artifacts, to track the pose of an object across images, and to facilitate object modelling. In the past, 3D reconstruction of objects has been achieved using many sensors, such as cameras and laser-stripe scanners. Monocular camera-based object 3D modelling can be categorized into sparse feature detector/descriptor-based and dense silhouette-based approaches. Feature-based methods identify distinctive features on the object across many images, whereas silhouette-based methods only require a distinguishable boundary between the object and the background. Silhouette-based methods have the advantage that, in controlled setups, a special background can be designed to be distinguishable from the object of interest; uniquely identifiable textures on the object’s surface are therefore not required. Despite these advantages, silhouette-based probabilistic reconstruction remains challenging. This article proposes a new probabilistic approach using 3D occupancy grids for the silhouette-based digital reconstruction of an object. The proposed method is designed to work with monocular cameras and achieves an accurate reconstruction using only sixteen images. In contrast to similar silhouette-based volumetric approaches, voxels are not discarded immediately during reconstruction; instead, occupancy grid mapping continuously updates the occupancy probability of each voxel with every new image.
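The core idea described above, updating voxel occupancy probabilities from silhouettes instead of carving voxels away irreversibly, can be illustrated with a minimal log-odds occupancy-grid sketch. This is not the paper's actual implementation: the grid size, the orthographic `project` functions, and the sensor-model values (0.7 for "inside silhouette", 0.2 for "background") are all illustrative assumptions.

```python
import numpy as np

def logodds(p):
    # Standard log-odds representation used in occupancy grid mapping.
    return np.log(p / (1.0 - p))

def inv_logodds(l):
    # Recover occupancy probability from accumulated log-odds.
    return 1.0 / (1.0 + np.exp(-l))

def update_grid(log_odds, silhouette, project,
                l_occ=logodds(0.7), l_free=logodds(0.2)):
    """One occupancy-grid update from one image: voxels projecting
    inside the silhouette accumulate evidence of occupancy; voxels
    projecting onto the background accumulate free-space evidence
    instead of being discarded outright (illustrative sensor model)."""
    centres = np.indices(log_odds.shape).reshape(3, -1).T  # (N, 3) voxel indices
    uv = project(centres)                                  # (N, 2) pixel coords
    inside = silhouette[uv[:, 0], uv[:, 1]]
    log_odds += np.where(inside, l_occ, l_free).reshape(log_odds.shape)
    return log_odds

# Toy example: an 8^3 grid observed by two orthographic "cameras"
# (hypothetical projections standing in for calibrated monocular views).
grid = np.zeros((8, 8, 8))
sil = np.zeros((8, 8), dtype=bool)
sil[2:6, 2:6] = True  # square silhouette seen in both views

grid = update_grid(grid, sil, lambda c: c[:, :2])  # view along the z axis
grid = update_grid(grid, sil, lambda c: c[:, 1:])  # view along the x axis

prob = inv_logodds(grid)
```

After the two updates, voxels inside both silhouette cones end up with high occupancy probability, voxels outside both with low probability, and voxels consistent with only one view remain at an intermediate value rather than being removed, which is the behaviour the abstract contrasts with hard voxel carving.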