Mitigating Domain Shift via Self-training in Single and Multi-target Unsupervised Domain Adaptation
Abstract
Though deep learning has achieved significant success in many computer vision tasks, the state-of-the-art approaches rely on the availability of a large amount of labeled data for supervision, the collection of which is expensive and time-consuming. Moreover, the performance of these models suffers when there is a mismatch between the training and test data distributions. Motivated by this, we design Unsupervised Domain Adaptation (UDA) algorithms that address such distribution shifts without requiring target label information, allowing models to adapt to new environments. In the first part of this work, we show that a class-aware frequency transformation obtained via self-training helps to reduce the style bias in the source dataset, thereby improving target adaptation performance. Further, we address the more challenging and practical setting of source-free multi-target domain adaptation, where there is only one source but multiple unlabeled target domains, and the labeled source data is assumed unavailable during target adaptation.
We explore the utility of frequency transformation for reducing the style bias between the source and target domains (e.g., between synthetic source images and natural target images). The performance of existing UDA methods degrades when the domain gap between the source and target distributions is large. To bring these domains closer, we propose ‘Class Aware Frequency Transformation’ (CAFT), which uses pseudo-label-based, class-aware low-frequency swapping of the image magnitude spectrum to reduce the domain gap and thereby improve performance. Compared with state-of-the-art generative methods, our proposed approach is computationally efficient and can easily be plugged into an existing UDA algorithm to improve its performance. Additionally, to mitigate pseudo-label noise, we introduce a novel approach based on the absolute difference between the top-2 class prediction probabilities (ADT2P), which separates target pseudo labels into clean and noisy sets. Our UDA strategy benefits substantially from utilizing only these ‘clean’ samples, further improving overall performance.
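To make the two components concrete, the following is a minimal sketch of (i) a low-frequency amplitude swap in the Fourier domain and (ii) the ADT2P clean/noisy split. The region size `beta`, the margin `threshold`, and the class-aware pairing of each source image with a target image sharing the same pseudo label are illustrative assumptions, not the exact settings used in our experiments.

```python
import numpy as np

def low_freq_amplitude_swap(src_img, tgt_img, beta=0.01):
    """Replace the low-frequency amplitude of src_img with that of tgt_img.

    src_img, tgt_img: float arrays of shape (H, W, C) in [0, 1].
    beta: fraction of the spectrum (per side) treated as 'low frequency'
          (an illustrative hyperparameter).
    Returns the source image carrying the target's low-frequency style.
    """
    # FFT per channel; shift so that low frequencies sit at the centre.
    fft_src = np.fft.fftshift(np.fft.fft2(src_img, axes=(0, 1)), axes=(0, 1))
    fft_tgt = np.fft.fftshift(np.fft.fft2(tgt_img, axes=(0, 1)), axes=(0, 1))

    amp_src, pha_src = np.abs(fft_src), np.angle(fft_src)
    amp_tgt = np.abs(fft_tgt)

    # Swap the central (low-frequency) block of the magnitude spectrum.
    h, w = src_img.shape[:2]
    b = int(np.floor(min(h, w) * beta))
    ch, cw = h // 2, w // 2
    amp_src[ch - b:ch + b + 1, cw - b:cw + b + 1] = \
        amp_tgt[ch - b:ch + b + 1, cw - b:cw + b + 1]

    # Recombine the swapped magnitude with the original source phase.
    fft_mix = amp_src * np.exp(1j * pha_src)
    mixed = np.fft.ifft2(np.fft.ifftshift(fft_mix, axes=(0, 1)), axes=(0, 1))
    return np.real(mixed).clip(0.0, 1.0)


def adt2p_split(probs, threshold=0.2):
    """Split samples into clean/noisy sets via the absolute difference
    between the top-2 predicted class probabilities (ADT2P).

    probs: array of shape (N, num_classes) holding softmax outputs.
    Returns a boolean mask: True -> treated as a clean pseudo label.
    """
    top2 = np.sort(probs, axis=1)[:, -2:]        # two largest per row
    margin = np.abs(top2[:, 1] - top2[:, 0])     # top-1 minus top-2
    return margin > threshold
```

In this sketch, class awareness would amount to choosing `tgt_img` from target samples whose pseudo label matches the class of `src_img`; only the retained ‘clean’ samples would then drive self-training.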
We introduce the task of Source-free Single and Multi-target Domain Adaptation and propose a novel framework named Consistency with Nuclear-Norm Maximization and MixUp knowledge distillation (CoNMix) as a solution. The primary motivation of this work is to address Single and Multi-target Domain Adaptation in the source-free paradigm, where access to labeled source data is restricted during target adaptation due to practical privacy-related restrictions on data sharing. Our source-free approach leverages self-training with target pseudo labels to improve target adaptation performance. We enforce consistency between label-preserving augmentations and employ pseudo-label refinement to reduce noisy pseudo labels. Further, to build a single source-free Multi-target Domain Adaptation model from multiple single-target DA models, we use MixUp Knowledge Distillation. We also demonstrate that modern Vision Transformer backbones yield better feature representations, leading to improved domain transferability and class discriminability, which further boosts the performance of source-free Single-target and Multi-target Domain Adaptation.
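As an illustration of the core CoNMix ingredients, the sketch below combines a consistency loss between two label-preserving augmentations, nuclear-norm maximization of the batch prediction matrix, and a MixUp knowledge-distillation step that lets one multi-target student absorb several single-target teachers. The KL-based consistency term, the averaged teacher target, and the Beta(alpha, alpha) mixing coefficient are assumptions for illustration; the actual loss weighting, pseudo-label refinement, and Vision Transformer backbone are omitted.

```python
import torch
import torch.nn.functional as F

def consistency_nuclear_loss(logits_weak, logits_strong):
    """Self-training losses on unlabeled target data:
    (i) consistency between two label-preserving augmentations of the same image,
    (ii) nuclear-norm maximization of the batch prediction matrix."""
    p_weak = F.softmax(logits_weak, dim=1).detach()
    log_p_strong = F.log_softmax(logits_strong, dim=1)
    p_strong = log_p_strong.exp()

    # (i) Consistency: the strongly augmented view should match the weak view.
    cons = F.kl_div(log_p_strong, p_weak, reduction='batchmean')

    # (ii) Nuclear-norm maximization (minimized as its negative) encourages
    #      confident yet class-diverse predictions across the batch.
    nuc = -torch.linalg.matrix_norm(p_strong, ord='nuc') / p_strong.shape[0]
    return cons + nuc


def mixup_kd_step(student, teachers, x_a, x_b, alpha=0.3):
    """One MixUp knowledge-distillation step: the multi-target student mimics
    the averaged soft predictions of per-target teacher models on mixed inputs."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    x_mix = lam * x_a + (1.0 - lam) * x_b

    # Teachers stay frozen; their averaged softmax forms the soft target.
    with torch.no_grad():
        soft = torch.stack(
            [F.softmax(t(x_mix), dim=1) for t in teachers]).mean(dim=0)

    log_p = F.log_softmax(student(x_mix), dim=1)
    return F.kl_div(log_p, soft, reduction='batchmean')
```

In practice, the consistency and nuclear-norm terms would drive single-target adaptation of each teacher, after which the MixUp distillation step consolidates the adapted teachers into one multi-target student.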