OpenAlex · Updated hourly · Last updated: 16 May 2026, 08:18

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Learning to Learn Single Domain Generalization

2020 · 447 citations

447 citations · 3 authors · Year: 2020

Abstract

We are concerned with a worst-case scenario in model generalization, in which a model aims to perform well on many unseen domains while only a single domain is available for training. We propose a new method named adversarial domain augmentation to solve this Out-of-Distribution (OOD) generalization problem. The key idea is to leverage adversarial training to create "fictitious" yet "challenging" populations, from which a model can learn to generalize with theoretical guarantees. To facilitate fast and desirable domain augmentation, we cast the model training in a meta-learning scheme and use a Wasserstein Auto-Encoder (WAE) to relax the widely used worst-case constraint. Detailed theoretical analysis is provided to support our formulation, while extensive experiments on multiple benchmark datasets indicate its superior performance in tackling single domain generalization.
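The abstract's core idea, generating "fictitious yet challenging" samples by gradient ascent on the task loss while staying close to the source data, can be illustrated with a toy sketch. This is not the paper's implementation: it uses a fixed logistic model and a plain squared-distance proximity penalty in place of the paper's WAE-based relaxation of the worst-case constraint, and all names (`task_loss`, `adversarial_augment`, `gamma`) are hypothetical.

```python
import numpy as np

def task_loss(w, b, x, y):
    # Logistic loss of a fixed linear classifier on a single example.
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    return -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

def adversarial_augment(w, b, x, y, gamma=1.0, lr=0.1, steps=20):
    """Gradient ascent on: task_loss(x_adv) - gamma * ||x_adv - x||^2,
    producing a harder ("fictitious") sample that remains near x."""
    x0 = x.copy()
    x_adv = x.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(x_adv @ w + b)))
        grad_loss = (p - y) * w          # d loss / d input for logistic loss
        grad_pen = 2.0 * gamma * (x_adv - x0)  # proximity penalty gradient
        x_adv = x_adv + lr * (grad_loss - grad_pen)
    return x_adv

w = np.array([1.0, -2.0]); b = 0.0
x = np.array([0.5, 0.5]); y = 1.0
x_adv = adversarial_augment(w, b, x, y)
# The augmented sample incurs a higher task loss than the original,
# while the penalty keeps it from drifting arbitrarily far from x.
print(task_loss(w, b, x_adv, y) > task_loss(w, b, x, y))
```

In the paper, a learned model replaces the fixed classifier, the augmentation is interleaved with meta-learning updates, and the distance term is relaxed via the WAE; the sketch only shows the worst-case augmentation pattern itself.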

Topics

Domain Adaptation and Few-Shot Learning · Machine Learning in Healthcare · Multimodal Machine Learning Applications