COCOA: Context-Conditional Adaptation for Recognizing Unseen Classes in Unseen Domains

Mangla, Puneet and Chandhok, Shivam and Balasubramanian, Vineeth N et al. (2022) COCOA: Context-Conditional Adaptation for Recognizing Unseen Classes in Unseen Domains. In: 22nd IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2022, 4–8 January 2022, Waikoloa.

Proceedings_2022_IEEE_CVF3.pdf - Published Version (restricted to registered users)


Recent progress towards designing models that can generalize to unseen domains (i.e., domain generalization) or unseen classes (i.e., zero-shot learning) has sparked interest in building models that can tackle both domain shift and semantic shift simultaneously (i.e., zero-shot domain generalization). For models to generalize to unseen classes in unseen domains, it is crucial to learn feature representations that preserve class-level (domain-invariant) as well as domain-specific information. Motivated by the success of generative zero-shot approaches, we propose a feature-generative framework equipped with a COntext COnditional Adaptive (COCOA) Batch-Normalization layer that seamlessly integrates class-level semantic and domain-specific information. The generated visual features better capture the underlying data distribution, enabling generalization to unseen classes and domains at test time. We thoroughly evaluate our approach on established large-scale benchmarks - DomainNet and DomainNet-LS (Limited Sources) - as well as a new CUB-Corruptions benchmark, and demonstrate promising performance over baselines and state-of-the-art methods. Detailed ablations and analyses verify that our proposed approach indeed generates higher-quality visual features relevant for zero-shot domain generalization. © 2022 IEEE.
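The core idea of a context-conditional batch-normalization layer, as described in the abstract, can be illustrated with a minimal sketch: features are normalized per batch as usual, but the affine scale (gamma) and shift (beta) are predicted from a fused context vector (class-level semantics concatenated with a domain embedding) rather than being fixed learned parameters. The function and parameter names below are hypothetical illustrations, not the authors' exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def context_conditional_bn(x, context, W_gamma, b_gamma, W_beta, b_beta, eps=1e-5):
    """Batch-normalize visual features x, with the affine scale and shift
    predicted from a context vector (class semantics + domain embedding).

    x:        (batch, features) visual features
    context:  (batch, ctx_dim) fused semantic + domain vector
    W_gamma, b_gamma, W_beta, b_beta: linear maps predicting gamma and beta
    """
    # Standard batch normalization: zero mean, unit variance per feature
    mu = x.mean(axis=0, keepdims=True)
    var = x.var(axis=0, keepdims=True)
    x_hat = (x - mu) / np.sqrt(var + eps)

    # Context-conditional affine parameters, one (gamma, beta) pair per sample
    gamma = context @ W_gamma + b_gamma   # (batch, features)
    beta = context @ W_beta + b_beta      # (batch, features)
    return gamma * x_hat + beta

# Illustrative usage: 8 samples, 4-dim features, 3-dim context
x = rng.normal(size=(8, 4))
ctx = rng.normal(size=(8, 3))
W_g, b_g = rng.normal(size=(3, 4)) * 0.1, np.ones(4)
W_b, b_b = rng.normal(size=(3, 4)) * 0.1, np.zeros(4)
out = context_conditional_bn(x, ctx, W_g, b_g, W_b, b_b)
```

Because gamma and beta depend on the context, the same normalized feature can be modulated differently for each class/domain combination, which is what lets a single generator inject both semantic and domain-specific information into the generated features.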

IITH Creators: Balasubramanian, Vineeth N
Item Type: Conference or Workshop Item (Paper)
Additional Information: In this work, we propose a unified generative framework for the ZSLDG problem setting that uses an elegant approach to encode class-level (domain-invariant) and domain-specific information. Our approach uses context-conditional batch normalization to integrate class-level semantic and domain-specific information into generated visual features, thereby enabling better generalization. We conduct extensive experiments on benchmark ZSLDG datasets and demonstrate the effectiveness of the proposed method. Furthermore, we present extensive analyses to validate our choice of conditional batch normalization for fusing semantic and domain-dependent characteristics. Our future work will include the development of better methods to effectively fuse and regulate the presence of semantic and context information, to further improve generalization performance for unseen classes in unseen domains. Acknowledgement: This work has been partly supported by funding received from DST through the IMPRINT program (IMP/2019/000250).
Uncontrolled Keywords: Few-shot; Semi- and Un-supervised Learning; Deep Learning; Transfer
Subjects: Computer science
Divisions: Department of Computer Science & Engineering
Depositing User: LibTrainee 2021
Date Deposited: 23 Jul 2022 08:46
Last Modified: 23 Jul 2022 08:46
