Volume XLII-2/W12
Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLII-2/W12, 149-154, 2019
https://doi.org/10.5194/isprs-archives-XLII-2-W12-149-2019
© Author(s) 2019. This work is distributed under
the Creative Commons Attribution 4.0 License.

  09 May 2019

SYNTHETIC THERMAL BACKGROUND AND OBJECT TEXTURE GENERATION USING GEOMETRIC INFORMATION AND GAN

V. A. Mizginov1 and S. Y. Danilov1,2
  • 1State Res. Institute of Aviation Systems (GosNIIAS), 125319, 7, Victorenko str., Moscow, Russia
  • 2Moscow Institute of Physics and Technology (MIPT), Russia

Keywords: infrared images, augmented reality, object recognition, generative adversarial network

Abstract. Methods based on deep neural networks currently show the best performance among image recognition and object detection algorithms. However, such methods require large databases of multispectral images of various objects to achieve state-of-the-art results, so dataset generation is one of the major challenges in training a deep neural network successfully. Infrared image datasets large enough for such training are not available in the public domain, and generating synthetic datasets from 3D models of various scenes is time-consuming, computationally expensive, and yields images of limited realism. This paper focuses on a method for thermal image synthesis using a generative adversarial network (GAN). The aim of the presented work is to expand and complement existing datasets of real thermal images. Deep convolutional networks are increasingly used for image synthesis, and GANs in particular have become a promising tool for synthesizing images in various spectral ranges, showing effective results for image-to-image translation. While it is possible to generate a thermal texture for a single object, generating environment textures is extremely difficult because a scene contains many objects with different emission sources. The proposed method is based on a joint approach that combines 3D modeling and deep learning: background and object textures are synthesized by a generative adversarial network conditioned on semantic and geometric information about the objects, produced by 3D modeling. The developed approach significantly improves the realism of the synthetic images, especially the quality of background textures.
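The conditioning scheme described in the abstract can be illustrated with a minimal pix2pix-style sketch: a generator maps rendered semantic and geometric channels to a thermal image, while a patch discriminator judges (condition, thermal) pairs. The layer sizes, channel counts, and the L1 weight below are illustrative assumptions, not the authors' exact architecture.

```python
# Hedged sketch of a conditional GAN for condition-maps -> thermal translation.
# Assumption: the condition tensor stacks 3 semantic-label channels + 1 depth
# channel rendered from the 3D scene; the paper's actual inputs may differ.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Small encoder-decoder translating condition maps to a 1-channel thermal image."""
    def __init__(self, in_ch=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, cond):
        return self.net(cond)

class PatchDiscriminator(nn.Module):
    """PatchGAN-style critic scoring (condition, thermal) pairs per patch."""
    def __init__(self, in_ch=5):  # 4 condition channels + 1 thermal channel
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 4, stride=2, padding=1),  # per-patch logits
        )

    def forward(self, cond, thermal):
        return self.net(torch.cat([cond, thermal], dim=1))

def train_step(G, D, g_opt, d_opt, cond, real_thermal, l1_weight=100.0):
    """One adversarial update: D on real vs. fake pairs, then G with GAN + L1 loss."""
    bce = nn.BCEWithLogitsLoss()
    # Discriminator: real pairs scored toward 1, generated pairs toward 0.
    fake = G(cond).detach()
    d_real = D(cond, real_thermal)
    d_fake = D(cond, fake)
    d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()
    # Generator: fool D while staying close to the real thermal image (L1 term).
    fake = G(cond)
    d_fake = D(cond, fake)
    g_loss = bce(d_fake, torch.ones_like(d_fake)) \
        + l1_weight * nn.functional.l1_loss(fake, real_thermal)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()
```

In use, one would render the semantic/depth stack for each view of the 3D scene, pair it with a real thermal frame, and call `train_step` in a loop; the L1 term (a common pix2pix choice, assumed here) keeps the generated texture anchored to the ground truth while the adversarial term sharpens it.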