Show simple item record

dc.contributor.advisor      Chakraborty, Anirban
dc.contributor.author       Kumar, Vikash
dc.date.accessioned         2022-11-15T07:14:00Z
dc.date.available           2022-11-15T07:14:00Z
dc.date.submitted           2022
dc.identifier.uri           https://etd.iisc.ac.in/handle/2005/5910
dc.description.abstract     Though deep learning has achieved significant successes in many computer vision tasks, state-of-the-art approaches rely on the availability of a large amount of labeled data for supervision, the collection of which is expensive and time-consuming. Moreover, the performance of these models suffers when there is a mismatch between the training and test data distributions. Motivated by this, we design Unsupervised Domain Adaptation (UDA) algorithms that address such distribution shifts, without requiring label information, in order to adapt to the new environment. In the first part of this work, we show that a class-aware frequency transformation obtained via self-training helps to reduce the style bias in the source dataset, thereby improving target adaptation performance. Further, we address the more challenging and practical setting of source-free multi-target domain adaptation, where there is only one source but multiple unlabeled target domains, and the source labels are assumed unavailable during target adaptation. We explore the utility of frequency transformation for reducing the style bias between the source and target domains (e.g., between synthetic source images and natural target images). The performance of existing UDA methods degrades when the domain gap between the source and target distributions is significant. In order to bring these domains closer, we propose ‘Class Aware Frequency Transformation’ (CAFT), which utilizes pseudo-label-based, class-aware low-frequency swapping of the image magnitude spectrum to reduce the domain gap and thereby improve performance. Compared with state-of-the-art generative methods, our proposed approach is computationally efficient and can easily be plugged into an existing UDA algorithm to improve its performance. Additionally, towards mitigating pseudo-label noise, we introduce a novel approach based on the absolute difference between the top-2 class prediction probabilities (ADT2P), which separates target pseudo labels into clean and noisy sets. Our proposed UDA strategy benefits substantially from utilizing only these ‘clean’ samples, resulting in a further improvement in overall performance. We also introduce the novel task of Source-free Single and Multi-target Domain Adaptation and propose a framework named Consistency with Nuclear-Norm Maximization and MixUp knowledge distillation (CoNMix) as a solution to this problem. The primary motivation of this work is to address Single and Multi-target Domain Adaptation in the source-free paradigm, where access to labeled source data is restricted during target adaptation due to practical privacy-related restrictions on data sharing. Our source-free approach leverages self-training with target pseudo labels to improve target adaptation performance. We propose enforcing consistency between label-preserving augmentations and utilize pseudo-label refinement methods to reduce noisy pseudo labels. Further, to build a single source-free Multi-target Domain Adaptation model from multiple single-target DA models, we use the concept of MixUp Knowledge Distillation. We also demonstrate that by utilizing modern Vision Transformers as backbones, we can obtain better feature representations, leading to improved domain transferability and class discriminability. This further helps to boost the performance of source-free Single-target and Multi-target Domain Adaptation.    en_US
dc.language.iso             en_US    en_US
dc.rights                   I grant Indian Institute of Science the right to archive and to make available my thesis or dissertation in whole or in part in all forms of media, now or hereafter known. I retain all proprietary rights, such as patent rights. I also retain the right to use in future works (such as articles or books) all or part of this thesis or dissertation.    en_US
dc.subject                  Deep learning    en_US
dc.subject                  Unsupervised domain adaptation    en_US
dc.subject                  Frequency transformation    en_US
dc.subject.classification   Research Subject Categories::TECHNOLOGY::Information technology::Computer science    en_US
dc.title                    Mitigating Domain Shift via Self-training in Single and Multi-target Unsupervised Domain Adaptation    en_US
dc.type                     Thesis    en_US
dc.degree.name              MTech (Res)    en_US
dc.degree.level             Masters    en_US
dc.degree.grantor           Indian Institute of Science    en_US
dc.degree.discipline        Engineering    en_US
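
For readers who want a concrete picture of the two ideas described in the abstract, a minimal illustrative sketch in Python/NumPy follows; it is not the thesis implementation. The first function performs a low-frequency swap of the image amplitude (magnitude) spectrum between a source image and a reference image, the operation at the core of the class-aware frequency transformation (in CAFT the reference would be drawn from images sharing the source image's pseudo label, a selection step omitted here). The second function illustrates the ADT2P rule, splitting target pseudo labels into clean and noisy sets by the absolute difference between the top-2 class probabilities. The function names and the beta and tau parameters are assumptions made for this sketch, not values from the thesis.

import numpy as np


def low_freq_amplitude_swap(src_img, ref_img, beta=0.05):
    """Replace the low-frequency part of the source amplitude spectrum with the
    reference image's, keep the source phase, and transform back to image space.

    src_img, ref_img: float arrays of shape (H, W, C) with values in [0, 1].
    beta: fraction of the spectrum (per side) treated as low frequency (assumed value).
    """
    src_fft = np.fft.fft2(src_img, axes=(0, 1))
    ref_fft = np.fft.fft2(ref_img, axes=(0, 1))

    src_amp, src_pha = np.abs(src_fft), np.angle(src_fft)
    ref_amp = np.abs(ref_fft)

    # Centre the spectra so the low frequencies sit in the middle of the array.
    src_amp = np.fft.fftshift(src_amp, axes=(0, 1))
    ref_amp = np.fft.fftshift(ref_amp, axes=(0, 1))

    h, w = src_img.shape[:2]
    bh, bw = max(1, int(beta * h)), max(1, int(beta * w))
    ch, cw = h // 2, w // 2

    # Swap only the central (low-frequency) block of the amplitude spectrum.
    src_amp[ch - bh:ch + bh, cw - bw:cw + bw] = ref_amp[ch - bh:ch + bh, cw - bw:cw + bw]

    # Undo the shift, recombine with the original phase, and invert the FFT.
    src_amp = np.fft.ifftshift(src_amp, axes=(0, 1))
    mixed = src_amp * np.exp(1j * src_pha)
    out = np.real(np.fft.ifft2(mixed, axes=(0, 1)))
    return np.clip(out, 0.0, 1.0)


def adt2p_split(probs, tau=0.2):
    """Split pseudo-labelled target samples into clean and noisy index sets using
    the absolute difference between the top-2 predicted class probabilities.

    probs: (N, num_classes) softmax outputs for target samples.
    tau: confidence-gap threshold (assumed value for illustration).
    """
    top2 = np.sort(probs, axis=1)[:, -2:]   # two largest probabilities per sample
    gap = np.abs(top2[:, 1] - top2[:, 0])   # ADT2P score: largest minus second largest
    clean_idx = np.where(gap >= tau)[0]
    noisy_idx = np.where(gap < tau)[0]
    return clean_idx, noisy_idx

In a self-training loop of the kind the abstract describes, the swapped images could serve as style-reduced training inputs, and only the samples indexed by clean_idx would presumably contribute to the pseudo-label loss.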

