ABSTRACT
Domain adaptation aims to transfer knowledge from a labeled source domain to an unlabeled target domain that follows a similar but different distribution. Recently, adversarial-based methods have achieved remarkable success owing to their ability to learn domain-invariant feature representations. However, these methods gain transferability at the expense of discriminability in the feature representation, which degrades generalization to the target domain. To this end, we propose a Multi-view Feature Learning method for the Over-penalty in Adversarial Domain Adaptation. Specifically, multi-view representation learning is used to enrich the discriminative information contained in the domain-invariant feature representation, countering the over-penalty of discriminability in adversarial training. Besides, the intra-domain class distribution is used in place of the inter-domain class distribution to capture more discriminative information when learning transferable features. Extensive experiments show that our method improves discriminability while maintaining transferability and outperforms state-of-the-art methods on domain adaptation benchmark datasets.
1. INTRODUCTION
Deep learning has achieved great success in a variety of computer vision tasks [1], [2], but such success depends heavily on large amounts of labeled training data and on the i.i.d. assumption, which are often difficult to satisfy in real-world applications. For example, an image classification model trained on simulated images cannot be applied directly to real images because of the distribution divergence. Even the same object may display diverse visual features and follow a different distribution owing to the acquisition equipment, lighting, angle, and other factors. To address these challenges, Domain Adaptation (DA) has been proposed, which mitigates the distribution differences between domains and uses the knowledge learned from a related label-rich source domain to assist the learning task in an unlabeled target domain.
Deep learning models have dominated this field owing to their outstanding performance in learning transferable features. These methods broadly fall into two categories: discrepancy-based methods [1], [4], [5] and adversarial-based methods [6], [7], [8], [9], [10], [11], [12], [13]. The former mitigate the distribution discrepancy between the source and target domains by minimizing a discrepancy metric such as maximum mean discrepancy (MMD) [4]. The latter, inspired by generative adversarial networks [14], introduce a new component, the domain discriminator, to realize domain confusion. Adversarial learning is an effective mechanism for learning invariant features in domain adaptation and has become increasingly popular.
There are two key factors in domain adaptation: transferability and discriminability. Transferability depends on the similarity between the two domains and ensures that a model trained on the source domain can be used on the target domain, while discriminability indicates the ability of the learned features to separate different classes. However, existing adversarial methods focus mainly on transferability and pay little attention to the discriminability of feature representations, which leads to performance degradation on the target domain. There are two main reasons for this lower discriminability.
First, recent studies [7] point out that adversarial methods improve transferability at the expense of discriminability, and that there is a contradiction between the two. In particular, when learning domain-invariant features, the eigenvector with the largest singular value tends to carry the most transferable knowledge, while eigenvectors with small singular values may embody domain variations and are therefore suppressed. This over-penalizes eigenvectors that may be crucial for discriminability.
Second, most previous methods measure inter-class and intra-class distances across the two domains, and because the pseudo-label-based alignment is often inaccurate, the estimated distances are biased.
To address these issues, we propose a Multi-view Feature Learning method for the Over-penalty in Adversarial Domain Adaptation, which uses multi-view representations to learn more discriminative and transferable information and thus counter the over-penalty of discriminative information in adversarial learning. In addition, to learn more discriminative features, the intra-domain class distribution of the source domain is used in place of the inter-domain class distribution, so that class distances are measured more accurately.
The contributions of our work can be summarized as follows.
– A multi-view learning framework is proposed to enrich both the transferable and the discriminative feature representations. Multi-view representations contain diverse and complementary information that resists the over-penalization of discriminative information in adversarial learning.
– To further improve discriminability, the intra-domain class distribution is used to modify the discriminative loss, which measures class distances more accurately and thus yields more discriminative features.
2. RELATED WORK
In this section, we will introduce the related work in two aspects: adversarial-based domain adaptation and multi-view learning.
2.1 Adversarial-based Domain Adaptation
In recent years, adversarial-based methods have become popular. In these methods, a new component, the domain discriminator, and a two-player minimax game are introduced to realize domain confusion. DANN [9] first introduced adversarial learning into domain adaptation: domain-invariant features are obtained in an adversarial manner so that the domain discriminator cannot distinguish the domain labels of the features produced by the feature extractor. Although DANN [9] achieves impressive results, it only aligns the global distribution without further considering multi-mode structural information. Following it, MADA [12] uses multiple domain discriminators to achieve fine-grained alignment. To further consider the importance of the marginal and conditional distributions, DAAN [8] proposes a dynamic adversarial factor to evaluate their relative importance dynamically in adversarial domain adaptation. Moreover, JADA [11] matches domain-level and class-level distributions at the same time, achieving better results than matching only one of them. In addition, some approaches introduce semantic information into domain adaptation. MSTN [13] learns semantic representations for unlabeled target samples by aligning the centers of labeled source and pseudo-labeled target samples. DSR [10] uses a variational auto-encoder and a dual adversarial network to learn disentangled semantic representations in which the semantic latent variables are independent of the domain latent variables, so that labels can be classified more easily. To improve the representation, MCD [15] uses task-specific decision boundaries, and MSTN [13] uses pseudo labels to align the class distributions across domains.
Although the adversarial-based approaches mentioned above achieve impressive results, recent studies [7] have shown that they can lose discriminability to some extent while improving transferability, and both are key factors of domain adaptation. There have been several attempts to solve this problem. BSP [7] finds that the eigenvector with the largest singular value determines the transferability of the feature, and that transferability is enhanced at the expense of over-penalizing the other eigenvectors, which contain rich structure critical for discriminability. To solve this problem, it penalizes the eigenvector with the maximum singular value so that the other eigenvectors are relatively enhanced and discriminability is improved. AADA [6] proposes a new asymmetric adversarial scheme in which the traditional domain discriminator is replaced by an autoencoder and only target samples are added to adversarial training, thus avoiding the loss of discriminability in traditional domain adversarial training.
2.2 Multi-View Learning
Multi-view learning learns different descriptions and characterizations of the same object. With different data-gathering processes, different feature representations of the same object can be obtained. These representations contain complementary information and thus provide a more comprehensive description of the object. Many domain adaptation approaches use multi-view learning to address cross-language text classification, where documents in different languages can be viewed as different views of the original document. MVTL-LM [16] proposes a multi-view transfer learning framework to achieve consistency between multiple views. However, the application of multi-view learning to cross-domain image classification has not been fully explored. MRAN [17] proposes a multi-representation adaptation network that learns multiple different feature representations via an Inception Adaptation Module (IAM) and aligns their distributions. Because more information is contained in the feature, it achieves better performance than single-representation adaptation.
3. OUR METHOD
In this section, we present the details of our method for unsupervised domain adaptation. We are given a source domain with n_s labeled samples and a target domain with n_t unlabeled samples. The goal of our work is to predict the target labels by transferring the knowledge learned from the source domain.
The framework of our method is shown in Fig. 1. The first component is the multi-view feature extractor: a general convolutional neural network (CNN) first produces a low-resolution feature representation, and then different sub-networks extract multi-view feature spaces to enrich and enlarge the feature representations. The second component is multi-view discriminative adversarial training, in which the modified discriminative loss is combined with the discriminator loss to learn transferable and discriminative features for DA.
3.1 Multi-View Feature Extractor
Although adversarial-based methods improve transferability effectively, they also cause some loss of discriminability. To address this, we borrow the idea of multi-view learning to counter the loss of discriminability. Descriptions of the same object from different views contain different information, which increases the diversity of features and thus the discriminative information contained in the feature representations. In our method, we learn multiple different representations to enrich the discriminative information contained in the final feature representation.
In the usual way, multiple different feature representations could be obtained by training multiple convolutional networks, but this is very time-consuming. We therefore introduce an Inception Adaptation Module (IAM) [17] containing four different substructures, S1, S2, S3, and S4, to obtain four feature representations with different dimensions. Firstly, a general convolutional neural network preprocesses the raw data into a low-resolution representation. Secondly, the IAM extracts different features; since features with different dimensions contain more complementary information, we obtain four different feature representations.
Different networks are expected to learn different representations of the same sample. In our method, four different convolutional sub-networks learn four different representations of each image. Compared with single-view learning, the combination of the four representations contains more (and partly duplicated) information, which helps resist the over-penalty of the adversarial model.
Specifically, on the one hand, the four representations contain different information, so together they describe more about the object; on the other hand, there is inevitably redundant or duplicated information among them. In addition, learning multi-view representations with the IAM does increase the time cost to some extent, but in return it improves the performance of adversarial-based DA. In particular, the time cost of the multi-view representations is less than four times that of a single view, because the four sub-networks share the convolutional layers and differ only in the representation layers.
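To make this concrete, a minimal PyTorch sketch of an IAM-style multi-view extractor is given below. It assumes a ResNet-50-like backbone output with 2048 channels and uses illustrative channel widths and an average-pooling branch; the substructure layouts follow those listed in Section 4.1, but the class name, channel sizes, and pooling choice are our assumptions rather than the exact architecture used in the experiments.

```python
import torch.nn as nn

class InceptionAdaptationModule(nn.Module):
    """Sketch of a four-substructure IAM: S1(1x1, 5x5), S2(1x1, 3x3, 3x3),
    S3(1x1), S4(pool, 1x1). Channel widths are illustrative assumptions."""
    def __init__(self, in_ch=2048, out_ch=256):
        super().__init__()
        self.s1 = nn.Sequential(nn.Conv2d(in_ch, out_ch, 1), nn.ReLU(),
                                nn.Conv2d(out_ch, out_ch, 5, padding=2), nn.ReLU())
        self.s2 = nn.Sequential(nn.Conv2d(in_ch, out_ch, 1), nn.ReLU(),
                                nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(),
                                nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU())
        self.s3 = nn.Sequential(nn.Conv2d(in_ch, out_ch, 1), nn.ReLU())
        self.s4 = nn.Sequential(nn.AvgPool2d(3, stride=1, padding=1),
                                nn.Conv2d(in_ch, out_ch, 1), nn.ReLU())
        self.pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, feature_map):
        # One pooled feature vector per view (four views in total).
        views = [self.s1(feature_map), self.s2(feature_map),
                 self.s3(feature_map), self.s4(feature_map)]
        return [self.pool(v).flatten(1) for v in views]
```

The four branches share the backbone that produces `feature_map`, which is why the extra cost stays well below four times that of a single view.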
3.2 Multi-view Discriminative Adversarial Training
To make the multi-view representations more transferable and discriminative, our method adds a discriminative loss to the discriminator loss. Traditional domain adaptation methods only consider the discriminator loss, which reduces the domain discrepancy. In contrast, we further consider a discriminative loss, computed by simultaneously maximizing the inter-class distance and minimizing the intra-class distance among source samples. The discriminative loss is illustrated in Fig. 2.
The discriminative domain adaptation loss is formulated as follows:
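In its simplest form, assuming the two components described next are added without an extra weight (an assumption on our part), Formula 1 reads:

\[
\mathcal{L}_{DDA} = \mathcal{L}_{D} + \mathcal{L}_{dis} \tag{1}
\]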
The first part of Formula 1 is the discriminator loss L_D, the traditional domain confusion loss calculated in an adversarial manner. Different from previous works, we align multiple pairs of representations separately rather than a single representation. Thus L_D is calculated as follows.
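A standard instantiation, assuming a cross-entropy domain-confusion loss averaged over the n_r views and over all source and target samples (the per-view extractor G_f^r and the sample sets D_s, D_t are our notation), is:

\[
\mathcal{L}_{D} = \frac{1}{n_r}\sum_{r=1}^{n_r} \frac{1}{n_s + n_t} \sum_{x_j \in \mathcal{D}_s \cup \mathcal{D}_t} L\!\left(G_d\!\left(G_f^{r}(x_j)\right),\, d_j\right)
\]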
where L is the cross-entropy loss function, n_r is the number of representations, and d_j is the domain label of sample x_j (0 for the source domain and 1 for the target domain). G_f and G_d are the feature extractor and the domain discriminator, respectively.
The second part of Formula 1 is the discriminative loss L_dis, which is formulated as follows:
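One plausible form, assuming class-wise mean features m_c computed from source samples, C classes, n_s^c source samples in class c, and μ trading off the two terms (all of this notation is our assumption, chosen to be consistent with the description below), is:

\[
\mathcal{L}_{dis} = \sum_{c=1}^{C} \frac{1}{n_s^{c}} \sum_{i=1}^{n_s^{c}} \left\| G_f\!\left(x_i^{s,c}\right) - m_c \right\|_2^{2} \;-\; \mu \sum_{c \neq c'} \left\| m_c - m_{c'} \right\|_2^{2}
\]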
where x_i^{s,c} denotes the i-th source sample belonging to class c and μ is a trade-off parameter.
We maximize the inter-class distance and minimize the intra-class distance among source domain samples to improve discriminability. Since the source and target domains are closely aligned in the shared space, making the samples of the same class in the source domain more compact and separating samples of different classes also makes the aligned target domain more discriminative.
It is worth noting the difference between our discriminative loss and others [18]. Because there are no labels in the target domain, [18] has to use pseudo labels of target samples to compute class-wise distances between the source and target domains, and the accuracy of these pseudo labels strongly influences the results. In this paper, we therefore do not rely on target pseudo labels and compute the discriminative loss only on source samples. The discriminative loss makes the distribution of samples of the same class more compact and the boundaries between different classes clearer, which makes the features more discriminative. By combining the domain confusion loss with the discriminative loss, we align the feature distributions of the source and target domains while preserving the discriminative ability of the features.
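As a concrete illustration, a minimal PyTorch sketch of this source-only discriminative term is given below: class centers are estimated per batch, an intra-class term pulls samples toward their center, and an inter-class term (weighted by μ) pushes centers apart. The function name, signature, and normalization choices are ours, not the authors' exact implementation.

```python
import torch

def discriminative_loss(features, labels, num_classes, mu=0.1):
    # features: (n_s, d) source features; labels: (n_s,) source class labels.
    centers = []
    intra = features.new_tensor(0.0)
    for c in range(num_classes):
        mask = labels == c
        if mask.sum() == 0:
            continue  # class not present in this batch
        class_feats = features[mask]
        center = class_feats.mean(dim=0)
        centers.append(center)
        # Intra-class term: mean squared distance of samples to their class center.
        intra = intra + ((class_feats - center) ** 2).sum(dim=1).mean()
    centers = torch.stack(centers)
    # Inter-class term: mean squared distance between all pairs of class centers.
    inter = torch.cdist(centers, centers, p=2).pow(2).sum()
    inter = inter / max(centers.size(0) * (centers.size(0) - 1), 1)
    # Minimize intra-class distance, maximize inter-class distance (weighted by mu).
    return intra - mu * inter
```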
For each view, we obtain a representation trained with L_DDA. The multi-view feature representations obtained from the multi-view feature spaces are denoted f1, f2, f3, and f4, and the final representation is f = f1 ⊕ f2 ⊕ f3 ⊕ f4, where ⊕ denotes feature concatenation. By combining these different features, we obtain a better domain-invariant representation that includes more discriminative information; in this way, the representations are enriched.
3.3 The Overall Training Objective
Besides the losses presented above, we also use the labeled source data to train an effective classifier G_y:
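In the usual supervised form (a restatement in our notation, not the paper's exact equation), with L the cross-entropy loss:

\[
\mathcal{L}_{cls} = \frac{1}{n_s} \sum_{i=1}^{n_s} L\!\left(G_y\!\left(G_f(x_i^{s})\right),\, y_i^{s}\right)
\]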
Previous studies have also shown that entropy minimization can improve the discrimination of the model on target data:
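With K classes and ŷ_k as defined below, the entropy term can be written as follows (averaging over the n_t target samples is our assumption):

\[
\mathcal{L}_{ent} = -\frac{1}{n_t} \sum_{i=1}^{n_t} \sum_{k=1}^{K} \hat{y}_{k}^{(i)} \log \hat{y}_{k}^{(i)}
\]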
where ŷ_k is the probability of classifying sample x into class k, i.e., the softmax output of the classifier. As in prior work, entropy minimization is only used to update the feature extractor.
The overall objective function and the optimization procedure can be formulated as follows.
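One plausible way to write it, assuming λ scales the adaptation terms and the domain discriminator plays the usual minimax role as in DANN (the exact weighting is our assumption):

\[
\min_{G_f,\,G_y}\ \max_{G_d}\ \ \mathcal{L}_{cls} + \lambda\left(\mathcal{L}_{dis} - \mathcal{L}_{D}\right) + \mathcal{L}_{ent}
\]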
where λ ∈ [0, 1] is a trade-off parameter.
With the above, the pseudocode of our method is shown in Algorithm 1.
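For illustration, one training step can be sketched in PyTorch as below. The helper names (backbone, iam, classifier, discriminators, and the discriminative-loss function from the sketch in Section 3.2), the gradient reversal layer, and the relative weighting of the terms are our assumptions and simplifications, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Gradient reversal: identity forward, gradient multiplied by -lam backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def train_step(backbone, iam, classifier, discriminators, dis_loss_fn,
               x_s, y_s, x_t, lam, mu=0.1):
    # Multi-view features for source and target batches (four views from the IAM).
    views_s = iam(backbone(x_s))
    views_t = iam(backbone(x_t))
    f_s = torch.cat(views_s, dim=1)   # f = f1 (+) f2 (+) f3 (+) f4
    f_t = torch.cat(views_t, dim=1)

    # Source classification loss and target entropy loss.
    logits_s = classifier(f_s)
    cls_loss = F.cross_entropy(logits_s, y_s)
    p_t = F.softmax(classifier(f_t), dim=1)
    ent_loss = -(p_t * torch.log(p_t + 1e-6)).sum(dim=1).mean()

    # Per-view domain-confusion loss, one discriminator per view,
    # trained through the gradient reversal layer.
    adv_loss = 0.0
    for v_s, v_t, disc in zip(views_s, views_t, discriminators):
        feats = GradReverse.apply(torch.cat([v_s, v_t], dim=0), lam)
        dom = torch.cat([torch.zeros(v_s.size(0)),
                         torch.ones(v_t.size(0))]).long().to(feats.device)
        adv_loss = adv_loss + F.cross_entropy(disc(feats), dom)
    adv_loss = adv_loss / len(views_s)

    # Source-only discriminative loss, e.g. the discriminative_loss sketch above.
    dis_loss = dis_loss_fn(f_s, y_s, num_classes=logits_s.size(1), mu=mu)

    # The weighting below (lam on the adaptation terms) is an assumption.
    return cls_loss + lam * (adv_loss + dis_loss) + ent_loss
```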
4. EXPERIMENT
We conduct experiments on three benchmark datasets to evaluate the effectiveness of our method.
4.1 Setup
Office-31 is a popular benchmark dataset for domain adaptation. It contains 4,110 images from 31 classes and consists of three domains: Amazon (A), DSLR (D), and Webcam (W). We evaluate our method on all six adaptation tasks with the standard evaluation protocol.
ImageCLEF-DA is a dataset from the ImageCLEF 2014 domain adaptation challenge, including three domains: Caltech-256 (C), ImageNet ILSVRC 2012 (I), and Pascal VOC 2012 (P). The domains are of equal size, each containing 12 classes with 50 images per class. We consider all six adaptation tasks.
Office-Home is more challenging than Office-31 and ImageCLEF-DA: it includes 15,500 images from 65 classes in four visually distinct domains: Artistic (Ar), Clipart (Cl), Product (Pr), and Real-World (Rw) images. We consider all twelve adaptation tasks.
We compare our method with classical and recent domain adaptation methods, including non-adversarial methods [4], [5], [17] and adversarial methods [6], [7], [19], [9], [11], [20], [21], [12], [15]. For a fair comparison, results are copied from the original papers when available.
We implement our method in PyTorch. For all datasets, we use ResNet-50 pre-trained on ImageNet as the backbone network, and we follow the standard evaluation protocols for domain adaptation as in DANN [9]. We use mini-batch stochastic gradient descent with momentum 0.9 to update the parameters; the base learning rate of the feature extractor is 0.001, and the learning rate of the classifier is 10 times that of the feature extractor. The learning rate is annealed as in DANN [9] by η_p = η_0 / (1 + α·p)^β, where η_0 = 0.01, α = 10, β = 0.75, and p denotes the training progress changing linearly from 0 to 1. To reduce the influence of noise at the early stage, λ is gradually increased from 0 to 1 by the schedule [22] λ_p = 2 / (1 + exp(−γ·p)) − 1, with γ fixed to 10. We set μ in the discriminative loss to 0.1. The substructures used to extract different features are S1 (conv1×1, conv5×5), S2 (conv1×1, conv3×3, conv3×3), S3 (conv1×1), and S4 (pool, conv1×1), borrowed from the IAM [17], so the number of representations n_r is 4 in our method. The IAM is inspired by GoogLeNet [23], using the inception module to fuse multiple representations.
4.2 Results
The classification results on the Office-Home, ImageCLEF-DA, and Office-31 datasets are shown in Table I, Table II, and Table III, respectively. For a fair comparison, all baselines use ResNet-50 as the backbone network. The first row of each table reports the accuracies obtained by directly applying the classifier trained on the source domain to the target domain, which shows that domain adaptation methods effectively alleviate the domain shift problem.
Table I. Classification accuracy (%) on Office-Home (ResNet-50).

Methods | Ar→Cl | Ar→Pr | Ar→Rw | Cl→Ar | Cl→Pr | Cl→Rw | Pr→Ar | Pr→Cl | Pr→Rw | Rw→Ar | Rw→Cl | Rw→Pr | AVG |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ResNet [24] | 34.9 | 50.0 | 58.0 | 37.4 | 41.9 | 46.2 | 38.5 | 31.2 | 60.4 | 53.9 | 41.2 | 59.9 | 46.1 |
DAN[4] | 43.6 | 57.0 | 67.9 | 45.8 | 56.5 | 60.4 | 44.0 | 43.6 | 67.7 | 63.1 | 51.5 | 74.3 | 56.3 |
JAN[5] | 45.9 | 61.2 | 68.9 | 50.4 | 59.7 | 61.0 | 45.8 | 43.4 | 70.3 | 63.9 | 52.4 | 76.8 | 58.3 |
DANN[9] | 45.6 | 59.3 | 70.1 | 47.0 | 58.5 | 60.9 | 46.1 | 43.7 | 68.5 | 63.2 | 51.8 | 76.8 | 57.6 |
CDAN+E[19] | 50.7 | 70.6 | 76.0 | 57.6 | 70.0 | 70.0 | 57.4 | 50.9 | 77.3 | 70.9 | 56.7 | 81.6 | 65.8 |
BSP+DANN[7] | 51.4 | 68.3 | 75.9 | 56.0 | 67.8 | 68.8 | 57.0 | 49.6 | 75.8 | 70.4 | 57.1 | 80.6 | 64.9 |
BSP+CDAN[7] | 52.0 | 68.6 | 76.1 | 58.0 | 70.3 | 70.2 | 58.6 | 50.2 | 77.6 | 72.2 | 59.3 | 81.9 | 66.3 |
HAN[21] | 52.0 | 72.0 | 75.8 | 59.6 | 71.8 | 71.2 | 58.7 | 51.3 | 77.7 | 72.8 | 57.7 | 82.3 | 66.9 |
MRAN[17] | 53.8 | 68.6 | 75.0 | 57.3 | 68.5 | 68.3 | 58.5 | 54.6 | 77.5 | 70.4 | 60.0 | 82.2 | 66.2 |
AADA[6] | 54.0 | 71.3 | 77.5 | 60.8 | 70.8 | 71.2 | 59.1 | 51.8 | 76.9 | 71.0 | 57.4 | 81.8 | 67.0 |
DIAA[25] | 54.0 | 76.2 | 79.1 | 57.0 | 71.5 | 71.4 | 57.3 | 50.7 | 78.7 | 65.0 | 56.7 | 80.9 | 66.5 |
DMP[26] | 52.3 | 73.0 | 77.3 | 64.3 | 72.0 | 71.8 | 63.6 | 52.7 | 78.5 | 72.0 | 57.7 | 81.6 | 68.1 |
Ours | 54.9 | 70.2 | 76.5 | 62.5 | 73.7 | 72.9 | 63.2 | 55.4 | 80.3 | 72.1 | 61.3 | 83.8 | 68.9 |
Table II. Classification accuracy (%) on ImageCLEF-DA (ResNet-50).

Method | I→P | P→I | I→C | C→I | C→P | P→C | AVG |
---|---|---|---|---|---|---|---|
ResNet [24] | 74.8±0.3 | 83.9±0.1 | 91.5±0.3 | 78.0±0.2 | 65.5±0.3 | 91.2±0.3 | 80.7 |
DANN[9] | 75.0±0.6 | 86.0±0.3 | 96.2±0.4 | 87.0±0.5 | 74.3±0.5 | 91.5±0.6 | 85.0 |
MADA[12] | 75.0±0.3 | 87.9±0.2 | 96.0±0.3 | 88.8±0.3 | 75.2±0.2 | 92.2±0.3 | 85.8 |
CDAN+E[19] | 77.7±0.3 | 90.7±0.2 | 97.7±0.3 | 91.3±0.3 | 74.2±0.2 | 94.3±0.3 | 87.7 |
MCD[15] | 77.3 | 89.2 | 92.7 | 88.2 | 71.0 | 92.3 | 85.1 |
JADA[11] | 78.2 | 90.1 | 95.9 | 90.8 | 76.8 | 94.1 | 87.7 |
HAN[21] | 77.9±0.4 | 91.7±0.1 | 97.0±0.2 | 91.9±0.1 | 76.7±0.1 | 95.3±0.3 | 88.4 |
MRAN[17] | 78.8±0.3 | 91.7±0.4 | 95.0±0.5 | 93.5±0.4 | 77.7±0.5 | 93.1±0.3 | 88.3 |
AADA[6] | 79.2 | 92.5 | 96.2 | 91.4 | 76.1 | 94.7 | 88.4 |
DMP[26] | 80.7±0.1 | 92.5±0.1 | 97.2±0.1 | 90.5±0.1 | 77.7±0.2 | 96.2±0.2 | 89.1 |
DIAA[25] | 78.8 | 90.5 | 94.8 | 93.7 | 78.5 | 92.8 | 88.2 |
Ours | 78.5±0.3 | 93.7±0.2 | 96.8±0.1 | 94.0±0.5 | 79.7±0.2 | 95.5±0.3 | 89.7 |
Table III. Classification accuracy (%) on Office-31 (ResNet-50).

Methods | A→W | D→W | W→D | A→D | D→A | W→A | AVG |
---|---|---|---|---|---|---|---|
ResNet [24] | 68.4±0.2 | 96.7±0.1 | 99.3±0.1 | 68.9±0.2 | 62.5±0.3 | 60.7±0.3 | 76.1 |
DAN[4] | 80.5±0.4 | 97.1±0.2 | 99.6±0.1 | 78.6±0.2 | 63.6±0.3 | 62.8±0.2 | 80.4 |
JAN[5] | 85.4±0.3 | 97.4±0.2 | 99.8±0.2 | 84.7±0.3 | 68.6±0.3 | 70.0±0.4 | 84.3 |
DANN[9] | 82.0±0.4 | 96.9±0.2 | 99.1±0.1 | 79.7±0.4 | 68.2±0.4 | 67.4±0.5 | 82.2 |
GTA[20] | 89.5±0.5 | 97.9±0.3 | 99.8±0.4 | 87.7±0.5 | 72.8±0.3 | 71.4±0.4 | 86.5 |
ADDA[27] | 86.2±0.5 | 96.2±0.3 | 98.4±0.3 | 77.8±0.3 | 69.5±0.4 | 68.9±0.5 | 82.9 |
MADA[12] | 90.0±0.1 | 97.4±0.1 | 99.6±0.1 | 87.8±0.2 | 70.3±0.3 | 66.4±0.3 | 85.2 |
MRAN[17] | 91.4±0.1 | 96.9±0.3 | 99.8±0.2 | 86.4±0.6 | 68.3±0.5 | 70.9±0.6 | 85.6 |
DMP[26] | 93.0±0.3 | 99.0±0.1 | 100.0±0.0 | 91.0±0.4 | 71.4±0.2 | 70.2±0.2 | 87.4 |
Ours | 94.3±0.3 | 97.9±0.2 | 100.0±0.0 | 89.8±0.4 | 73.4±0.2 | 74.7±0.1 | 88.4 |
From the results, we can make the following observations.
Our method outperforms all baselines on most domain adaptation tasks across the three datasets, which illustrates the superiority of the proposed method.
Methods considering both transferability and discriminability perform better than those considering only one of them. Compared with DANN, MADA, CDAN, and other adversarial-based methods, BSP, AADA, and our method perform better because they consider both transferability and discriminability and address the over-penalty in adversarial learning. In particular, BSP+DANN and BSP+CDAN perform better than DANN and CDAN.
Among the methods considering discriminability, our method achieves the state of the art, which shows that multi-view learning is effective in enriching the discriminative information contained in the domain-invariant features. Specifically, BSP improves discriminability by enhancing the other eigenvectors, and AADA is the latest method that proposes a new asymmetric adversarial scheme to avoid the loss of discriminability. Different from them, our method uses multiple representations to enrich the discriminative information in the domain-invariant features and also uses a discriminative loss, achieving better results. This indicates that our method can improve transferability and discriminability simultaneously.
Both MRAN and our method learn multiple feature representations: MRAN uses the traditional MMD distance, whereas our method uses the adversarial approach. Compared with DAN and DANN, the multi-representation adaptation methods perform better than the single-representation ones, which illustrates that multi-view representations enrich the information contained in the domain-invariant features. Moreover, compared with MRAN, our method further uses a discriminative loss and achieves better results.
4.3 Analysis
Spectral Analysis. In this section, we further verify that our method resists the over-penalty in adversarial models while maintaining transferability. Previous studies [7] have proposed that the Singular Values (SV) and Corresponding Angles (CA) of the eigenvectors obtained by Singular Value Decomposition (SVD) of the representations can be used to compare discriminability. Inspired by this, we conduct an experiment on the more difficult task D→A. Firstly, we apply SVD to the source and target feature matrices to compute the singular values and eigenvectors.
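Written explicitly, with F_s, F_t ∈ R^{b×d} denoting the source and target batch feature matrices (our notation), the decompositions take the standard form:

\[
F_s = U_s \Sigma_s V_s^{\top}, \qquad F_t = U_t \Sigma_t V_t^{\top}
\]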
where b is the batch size, U_t contains the eigenvectors, Σ_t contains the singular values, and V_t is a unitary matrix.
In Fig. 2(a), we plot the normalized singular values of three models: ResNet, DANN, and ours. The maximum singular value of the DANN feature matrix is significantly larger than the other singular values, which suppresses the information carried by the eigenvectors with smaller singular values. In comparison, our method effectively reduces the gap between the maximum and the other singular values, preserving more discriminability in feature learning.
In Fig. 2(b), we plot the normalized corresponding angles of the singular vectors. The corresponding angle is defined as the angle between the two eigenvectors corresponding to the same singular value index, which reflects the transferability of the features. For DANN, the sharp decay indicates that the eigenvector with the largest singular value dominates the transferability of the feature representation; transferability is thus enhanced at the expense of over-penalizing the other eigenvectors, which embody rich structure crucial for discriminability. In contrast, our method gives the subsequent eigenvectors a more prominent role during transfer.
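As an illustration of how these quantities can be computed, a small PyTorch sketch is given below. The function name, the choice of comparing the right singular vectors, and the normalization are our assumptions; the paper only specifies that singular values and corresponding angles are derived from the SVD of the batch feature matrices.

```python
import torch

def spectral_stats(feat_s, feat_t):
    """Normalized singular values and corresponding angles between
    source/target singular vectors for (b, d) batch feature matrices."""
    _, sv_s, vh_s = torch.linalg.svd(feat_s, full_matrices=False)
    _, sv_t, vh_t = torch.linalg.svd(feat_t, full_matrices=False)
    # Normalized singular values (cf. Fig. 2(a)).
    sv_s = sv_s / sv_s.max()
    sv_t = sv_t / sv_t.max()
    # Corresponding angle: angle between the k-th right singular vectors of the
    # two domains (cf. Fig. 2(b)); abs() removes the sign ambiguity of SVD.
    cos = torch.nn.functional.cosine_similarity(vh_s, vh_t, dim=1).abs()
    angles = torch.rad2deg(torch.acos(cos.clamp(-1.0, 1.0)))
    return sv_s, sv_t, angles
```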
In Fig. 2(c), we plot the A-distance, a measure of domain discrepancy that reflects the transferability of feature representations. It is defined as dA = 2(1 − 2ɛ), where ɛ is the error rate of a domain classifier. The A-distance of the features learned by our method is smaller than that of DANN, which shows that our method not only enhances discriminability but also retains strong transferability.
Ablation Study. In this section, we study the contribution of the different components of our model on the Office-31 dataset, mainly the influence of the discriminative loss and of the number of representations. The results are shown in Table IV. For the first part, "w/o discriminative loss" removes the discriminative loss from our model, and "MMD for discriminability" replaces our discriminative loss with the MMD-based discriminative term of JPDA [18]. "MMD for discriminability" performs even worse than "w/o discriminative loss", while our full model performs best, which shows that the way we compute the discriminative loss is effective. We then discuss the effect of the number of representations: "single representation" removes the multiple representations and performs domain adaptation with a single one. The results show that accuracy improves as the number of representations increases; as in MRAN [17], we consider at most four representations. Whenever a component is removed, accuracy drops, so every component of our framework is necessary.
Table IV. Ablation study on Office-31 (accuracy, %).

Methods | A→W | D→W | W→D | A→D | D→A | W→A | AVG |
---|---|---|---|---|---|---|---|
w/o discriminative loss | 93.6 | 97.7 | 100.0 | 89.0 | 72.4 | 72.5 | 87.5 |
MMD for discriminability | 93.1 | 97.1 | 100.0 | 88.4 | 71.8 | 73.6 | 87.3 |
single representation | 90.1 | 97.7 | 100.0 | 88.2 | 68.5 | 70.1 | 85.8 |
two representations | 93.8 | 96.7 | 100.0 | 89.3 | 68.1 | 71.2 | 86.5 |
three representations | 94.0 | 97.9 | 100.0 | 90.2 | 70.4 | 73.2 | 87.7 |
Ours | 94.3 | 97.9 | 100.0 | 89.8 | 73.4 | 74.7 | 88.4 |
Parameter Sensitivity. μ is a trade-off parameter between the intra-class distance and the inter-class distance. Overall, the performance of our method is not sensitive to μ: the curve in Fig. 5 fluctuates little as μ changes, and the best performance is achieved when μ falls in [0.08, 0.13].
Feature Visualization. To show the effect of adaptation more intuitively, we use t-SNE [28] embeddings of the features learned by DANN [9] and by our method on the task A→W of the Office-31 dataset and the task Ar→Pr of the Office-Home dataset. The visualization is shown in Fig. 3. Compared with traditional adversarial-based methods that only consider transferability, our method better aligns the corresponding classes of the source and target domains: samples of the same source class are distributed more compactly, different classes have clear boundaries, and the aligned target domain can also be well separated. This holds both on Office-31, where the domains differ slightly, and on Office-Home, where they differ greatly.
5. CONCLUSION
In this paper, we propose an unsupervised domain adaptation method that improves discriminability while maintaining transferability by learning multiple different feature representations to enrich the discriminative information contained in the domain-invariant features. We further introduce a discriminative loss that maximizes the inter-class distance and minimizes the intra-class distance, making the classes easier to distinguish.
In the future, we will further explore the relationship between the number of representations and transfer performance, as well as the relationships among the representations, to design a better multi-view model for domain adaptation, and we will also try to achieve fine-grained alignment.
ACKNOWLEDGMENTS
This work is supported in part by the National Natural Science Foundation of China under Grants 62076087 and 61976077, and by the Anhui Provincial Natural Science Foundation under Grant 2208085MF170.
AUTHORS’ CONTRIBUTIONS
Yuhong Zhang ([email protected], ORCID: 0000-0001-7031-0889) was responsible for the conception and design of the study, analysis and/or interpretation of data, and preparation of the original manuscript, as well as supervision and project management. Jianqing Wu ([email protected]) contributed to the model optimization, results analysis, critical revision of the manuscript for important intellectual content, and the response to the reviews. Qi Zhang ([email protected]) contributed to the preparation of the original manuscript and the experimental design. Xuegang Hu ([email protected], ORCID: 0000-0001-5421-6171) was jointly responsible for supervision and project management.