dc.description.abstract | Deep neural networks have brought tremendous success to many areas of computer vision, such as image classification, retrieval, segmentation, etc. However, this success is mostly measured under two conditions, namely (1) the underlying distribution of the test data is the same as that of the data used to train the network, and (2) the classes available at test time are the same as those seen during training. These assumptions are restrictive and may not hold in real life. Since new data categories are continuously being discovered, it is important for trained models to generalize to classes that have not been seen during training. Similarly, since the conditions under which data is captured keep changing, it is important for the trained model to generalize across unseen domains that it has not encountered during training. Moreover, in general, information about the class (whether it is seen or unseen) or the domain will not be known a priori.
Recently, researchers have started to address the challenging scenarios that arise when these testing conditions on classes and domains are relaxed. Towards that end, domain generalization (DG) for tasks like image classification, object detection, etc. has gained significant attention. In this work, we focus on the image classification task. In the first work, we address the scenario where the domain of the test data can differ from that of the training data, and in the second work, we address the even more general zero-shot domain generalization (ZSDG) scenario, where both the classes and the domain of the test data can differ from those of the training data. In DG, a deep model is trained to generalize well to an unknown target domain by leveraging data from multiple source domains during training for the task of image classification. In ZSDG, the aim is to train the model using multiple source domains and class attributes such that it generalizes well to novel classes from out-of-distribution target data. | en_US