Volume XLII-2
Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLII-2, 845-852, 2018
https://doi.org/10.5194/isprs-archives-XLII-2-845-2018
© Author(s) 2018. This work is distributed under
the Creative Commons Attribution 4.0 License.

30 May 2018

A COMPARISON OF TWO STRATEGIES FOR AVOIDING NEGATIVE TRANSFER IN DOMAIN ADAPTATION BASED ON LOGISTIC REGRESSION

A. Paul1, K. Vogt2, F. Rottensteiner1, J. Ostermann2, and C. Heipke1
  • 1Institute of Photogrammetry and GeoInformation, Leibniz Universität Hannover, Germany
  • 2Institut für Informationsverarbeitung, Leibniz Universität Hannover, Germany

Keywords: Transfer learning, Domain Adaptation, Negative Transfer, Remote Sensing

Abstract. In this paper we deal with the problem of measuring the similarity between training and test datasets in the context of transfer learning (TL) for image classification. TL tries to transfer knowledge from a source domain, where labelled training samples are abundant but the data may follow a different distribution, to a target domain, where labelled training samples are scarce or even unavailable, assuming that the domains are related. Thus, the requirements w.r.t. the availability of labelled training samples in the target domain are reduced. In particular, if no labelled target data are available, it is inherently difficult to find a robust measure of relatedness between the source and target domains. This is of crucial importance for the performance of TL, because knowledge transfer between unrelated data may lead to negative transfer, i.e. to a decrease of classification performance after transfer. In this paper, we address the problem of measuring the relatedness between source and target datasets and investigate three different strategies to predict and, consequently, avoid negative transfer. The first strategy is based on circular validation. The second strategy relies on the Maximum Mean Discrepancy (MMD) similarity metric, whereas the third one is an extension of MMD which incorporates knowledge about the class labels in the source domain. Our method is evaluated using two different benchmark datasets. The experiments highlight the strengths and weaknesses of the investigated methods. We also show that these strategies can reduce the amount of negative transfer for a TL method and yield a consistent performance improvement over the whole dataset.
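To make the MMD-based strategy mentioned in the abstract concrete, the following is a minimal sketch of the empirical (biased) squared-MMD estimate between source and target feature samples, using a Gaussian RBF kernel. The kernel choice, the bandwidth parameter `gamma`, and all function names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def rbf_kernel(a, b, gamma):
    # Gaussian RBF kernel matrix between the rows of a and b (assumed kernel choice).
    sq_dist = (np.sum(a**2, axis=1)[:, None]
               + np.sum(b**2, axis=1)[None, :]
               - 2.0 * a @ b.T)
    return np.exp(-gamma * sq_dist)

def mmd2(source, target, gamma=1.0):
    """Biased empirical estimate of the squared Maximum Mean Discrepancy.

    MMD^2 = E[k(s, s')] + E[k(t, t')] - 2 E[k(s, t)],
    estimated from the two sample sets; small values indicate
    similar source and target distributions.
    """
    k_ss = rbf_kernel(source, source, gamma)
    k_tt = rbf_kernel(target, target, gamma)
    k_st = rbf_kernel(source, target, gamma)
    return k_ss.mean() + k_tt.mean() - 2.0 * k_st.mean()

# Toy check: samples from the same distribution should give a smaller
# MMD than samples from a clearly shifted distribution.
rng = np.random.default_rng(0)
mmd_same = mmd2(rng.normal(0, 1, (200, 2)), rng.normal(0, 1, (200, 2)))
mmd_diff = mmd2(rng.normal(0, 1, (200, 2)), rng.normal(3, 1, (200, 2)))
```

In a negative-transfer test along these lines, a threshold on such a score could flag source/target pairs whose distributions are too dissimilar for a safe transfer; the paper's label-aware extension additionally conditions this comparison on the source class labels.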