The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Articles | Volume XLIII-B2-2020
Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLIII-B2-2020, 777–783, 2020
https://doi.org/10.5194/isprs-archives-XLIII-B2-2020-777-2020

12 Aug 2020

DENSE 3D OBJECT RECONSTRUCTION USING STRUCTURED-LIGHT SCANNER AND DEEP LEARNING

V. V. Kniaz1,2, V. A. Mizginov1, L. V. Grodzitkiy1, N. A. Fomin1, and V. A. Knyaz1,2
  • 1State Res. Institute of Aviation Systems (GosNIIAS), 125319, 7, Victorenko str., Moscow, Russia
  • 2Moscow Institute of Physics and Technology (MIPT), Dolgoprudny, Russia

Keywords: 3D object reconstruction, structured light, optical flow, 3D scanner, deep neural networks

Abstract. Structured light scanners are widely used in applications such as non-destructive quality control on assembly lines, optical metrology, and cultural heritage documentation. While more than 20 companies develop commercially available structured light scanners, the accuracy of structured light technology remains limited for fast systems. Discrepancies on the model surface are often present if the object's texture exhibits severe changes in brightness or reflective properties. The primary source of such discrepancies is errors in stereo matching caused by complex surface texture. These errors result in ridge-like structures on the surface of the reconstructed 3D model. This paper focuses on the development of a deep neural network, LineMatchGAN, for error reduction in 3D models produced by a structured light scanner. We use the pix2pix model as a starting point for our research. The aim of our LineMatchGAN is the refinement of the rough optical flow A and the generation of an error-free optical flow B̂. We collected a dataset (which we term ZebraScan) consisting of 500 samples to train our LineMatchGAN model. Each sample includes image sequences (Sl, Sr), a ground-truth optical flow B, and a ground-truth 3D model. We evaluate our LineMatchGAN on a test split of our ZebraScan dataset that includes 50 samples. The evaluation shows that our LineMatchGAN improves the stereo matching accuracy (optical flow end-point error, EPE) from 0.05 pixels to 0.01 pixels.
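The end-point error (EPE) reported in the abstract is, by standard convention, the mean Euclidean distance between predicted and ground-truth flow vectors. A minimal sketch of the metric is shown below; the function name and the (H, W, 2) array layout are our own assumptions, not taken from the paper:

```python
import numpy as np

def end_point_error(flow_pred, flow_gt):
    """Mean end-point error (EPE) between two optical flow fields.

    Both inputs are arrays of shape (H, W, 2), where the last axis
    holds the (u, v) displacement of each pixel. EPE is the Euclidean
    distance between corresponding flow vectors, averaged over pixels.
    """
    diff = flow_pred - flow_gt
    # Per-pixel Euclidean norm of the flow difference, then the mean.
    return float(np.sqrt((diff ** 2).sum(axis=-1)).mean())
```

For example, a prediction that is off by a constant (3, 4)-pixel displacement everywhere yields an EPE of exactly 5 pixels.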