Volume XLII-2/W15
Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLII-2/W15, 617–623, 2019
https://doi.org/10.5194/isprs-archives-XLII-2-W15-617-2019
© Author(s) 2019. This work is distributed under
the Creative Commons Attribution 4.0 License.

23 Aug 2019

CLASSIFICATION OF OIL PAINTING USING MACHINE LEARNING WITH VISUALIZED DEPTH INFORMATION

J. Kim1, J. Y. Jun1, M. Hong2, H. Shim1, and J. Ahn1
  • 1Graduate School of Culture Technology, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
  • 2Culture Technology Research Institute, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea

Keywords: Machine Learning, Visualized Depth Information, RTI, Painting Analysis, Artist Classification

Abstract. In the past few decades, a number of scholars have studied painting classification based on image processing or computer vision technologies. As machine learning technology has developed rapidly, painting classification using machine learning has also been carried out. However, because photographs lack information about brushstrokes, typical models cannot exploit more precise information about a painter's style. We hypothesized that visualized depth information of brushstrokes is effective in improving the accuracy of machine learning models for painting classification. This study proposes a new data utilization approach for machine learning with Reflectance Transformation Imaging (RTI) images, which maximize the visualization of the three-dimensional shape of brushstrokes. An artist's unique brushstrokes can be revealed in RTI images in ways that are difficult to obtain with regular photographs. If these new types of images are used as training data for a machine learning model, classification can be conducted using not only the shape and color but also the depth information. We used the Convolutional Neural Network (CNN), a model optimized for image classification, with the VGG-16, ResNet-50, and DenseNet-121 architectures. We conducted a two-stage experiment using the works of two Korean artists. In the first experiment, we captured a key part of each painting as RTI data and photographic data. In the second experiment, on the second artist's work, a larger quantity of data was acquired, and the whole artwork was captured. The results showed that the RTI-trained model achieved higher accuracy than the non-RTI-trained model. In this paper, we propose a method that uses machine learning and RTI technology to analyze and classify paintings more precisely in order to verify our hypothesis.
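The abstract describes fine-tuning standard CNN architectures (VGG-16, ResNet-50, DenseNet-121) on RTI renderings versus ordinary photographs of paintings. As a rough illustration only, the sketch below shows one plausible way such a two-artist classifier could be fine-tuned from a pretrained ResNet-50 in PyTorch; the dataset paths, hyperparameters, and training loop are assumptions for illustration, not the authors' actual experimental setup.

```python
# Minimal sketch (assumptions, not the paper's implementation): fine-tune a
# pretrained ResNet-50 to distinguish two artists from painting images.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms
from torch.utils.data import DataLoader

# Standard ImageNet preprocessing; RTI renderings or photographs are assumed to
# be stored as RGB images under class-named folders, e.g. data/rti_train/artist_a/.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

train_set = datasets.ImageFolder("data/rti_train", transform=preprocess)  # assumed path
train_loader = DataLoader(train_set, batch_size=16, shuffle=True)

# Replace the ImageNet classifier head with a two-class (two-artist) output layer.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed learning rate

model.train()
for epoch in range(10):  # assumed number of epochs
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

The same loop could be run twice, once on an RTI-rendered training set and once on a photograph-only set, to compare the accuracies of the two resulting models as the abstract describes.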