Advancing Domain Generalization through Cross-Domain Class-Contrastive Learning and Addressing Data Imbalances


Saransh Dave

Abstract

This thesis delves into the critical field of Domain Generalization (DG) in machine learning, where models are trained on multiple source distributions with the objective of generalizing to unseen target distributions. We begin by dissecting various facets of DG, including distribution shifts, shortcut learning, representation learning, and data imbalances. This foundational investigation sets the stage for understanding the challenges associated with DG and the complexities that arise.

A comprehensive literature review highlights existing challenges and contextualizes our contributions to the field, covering invariant-feature learning, parameter-sharing techniques, meta-learning techniques, and data-augmentation approaches.

One of the key contributions of this thesis is an examination of the role low-dimensional representations play in enhancing DG performance. We introduce a method to compute the implicit dimensionality of latent representations and explore its correlation with performance in a domain generalization context. This finding motivated us to investigate the effects of low-dimensional representations further.

Building on these insights, we present Cross-Domain Class-Contrastive Learning (CDCC), a technique that learns sparse representations in the latent space, yielding lower-dimensional representations and improved domain generalization performance. CDCC establishes competitive results on various DG benchmarks, comparing favorably with numerous existing approaches in DomainBed.

Venturing beyond traditional DG, we discuss a series of experiments on domain generalization in long-tailed settings, which are common in real-world applications, along with supplementary experiments that yield intriguing findings. Our analysis reveals that CDCC is more robust under long-tailed distributions and that, in the long-tailed setting, the ordering of performances across test domains is unaffected by the order of training domains. We hope these results inspire researchers to probe these outcomes further and advance the understanding of domain generalization.

In conclusion, this thesis offers a well-rounded exploration of DG, combining a comprehensive literature review, the discovery of the importance of low-dimensional representations in DG, the development of the CDCC method, and a careful analysis of long-tailed settings and other experimental findings.
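The abstract does not specify how the implicit dimensionality of latent representations is computed. The sketch below shows one plausible estimator, assuming a PCA explained-variance criterion: the dimensionality is the number of principal components needed to capture a fixed fraction of the feature variance. The function name, the threshold, and the criterion itself are illustrative assumptions, not the thesis's exact method.

```python
import numpy as np

def implicit_dimensionality(features: np.ndarray, threshold: float = 0.95) -> int:
    """Estimate implicit dimensionality as the number of principal
    components needed to explain `threshold` of the total variance.

    NOTE: this PCA-based estimator is an illustrative assumption; the
    thesis may define implicit dimensionality differently.
    """
    # Center the feature matrix of shape (n_samples, n_dims).
    centered = features - features.mean(axis=0, keepdims=True)
    # Singular values of the centered matrix give per-component variance.
    singular_values = np.linalg.svd(centered, compute_uv=False)
    variance = singular_values ** 2
    ratio = np.cumsum(variance) / variance.sum()
    # Smallest number of components whose cumulative ratio reaches threshold.
    return int(np.searchsorted(ratio, threshold) + 1)
```

Under this reading, a lower returned value for a model's penultimate-layer features would indicate a lower-dimensional representation, which the thesis correlates with DG performance.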
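To make the CDCC idea concrete, here is a minimal PyTorch sketch of a cross-domain class-contrastive objective, under the assumption that positives for each anchor are same-class samples drawn from other domains and all other-class samples act as negatives. This is a sketch of the general idea only; the thesis's exact CDCC loss, temperature, and sampling scheme may differ.

```python
import torch
import torch.nn.functional as F

def cross_domain_class_contrastive_loss(z: torch.Tensor,
                                        labels: torch.Tensor,
                                        domains: torch.Tensor,
                                        temperature: float = 0.1) -> torch.Tensor:
    """Illustrative cross-domain class-contrastive loss.

    z:       (N, d) embeddings; labels: (N,) class ids; domains: (N,) domain ids.
    NOTE: assumed formulation for illustration, not the thesis's exact CDCC.
    """
    z = F.normalize(z, dim=1)                      # unit-norm embeddings
    sim = z @ z.t() / temperature                  # pairwise similarities
    same_class = labels[:, None] == labels[None, :]
    diff_domain = domains[:, None] != domains[None, :]
    pos_mask = same_class & diff_domain            # cross-domain positives
    # Exclude self-similarity from the softmax denominator.
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float('-inf'))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # Average log-probability over each anchor's positives, skipping
    # anchors with none (e.g. a class seen in only one domain).
    masked_log_prob = log_prob.masked_fill(~pos_mask, 0.0)
    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0
    loss = -masked_log_prob[valid].sum(dim=1) / pos_counts[valid]
    return loss.mean()
```

Pulling same-class embeddings together across domains while pushing other classes away is one way such an objective could encourage the sparser, lower-dimensional latent structure the abstract attributes to CDCC.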

Year of completion: October 2023
Advisor: Vineet Gandhi
