
Detailed record

Semi-Supervised Domain Generalization with Stochastic StyleMatch

Article written by: Zhou, Kaiyang ; Liu, Ziwei ; Loy, Chen Change

Abstract: Ideally, visual learning algorithms should be generalizable, to deal with any unseen domain shift when deployed in a new target environment, and data-efficient, to reduce development costs by using as few labels as possible. To this end, we study semi-supervised domain generalization (SSDG), which aims to learn a domain-generalizable model using multi-source, partially-labeled training data. We design two benchmarks that cover state-of-the-art methods developed in two related fields, i.e., domain generalization (DG) and semi-supervised learning (SSL). We find that the DG methods, which by design are unable to handle unlabeled data, perform poorly with limited labels in SSDG; the SSL methods, especially FixMatch, obtain much better results but still fall far short of the vanilla model trained with full labels. We propose StyleMatch, a simple approach that extends FixMatch with two new ingredients tailored for SSDG: (1) stochastic modeling for reducing overfitting to scarce labels, and (2) multi-view consistency learning for enhancing domain generalization. Despite its concise design, StyleMatch achieves significant improvements in SSDG. We hope our approach and the comprehensive benchmarks can pave the way for future research on generalizable and data-efficient learning systems.
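
The abstract names two concrete ingredients on top of FixMatch: stochastic modeling of the classifier and a consistency loss across multiple augmented views. The sketch below is a minimal, hypothetical illustration of those two ideas, not the authors' implementation: it assumes a stochastic classifier with a diagonal-Gaussian weight distribution and a FixMatch-style pseudo-labeling loss between a weakly augmented view and a second (e.g. style-transferred) view. All class names, tensor shapes, and hyperparameters here are assumptions for illustration only.

```python
# Hypothetical sketch of a FixMatch-style update with a stochastic classifier
# and two-view consistency; NOT the StyleMatch reference code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StochasticClassifier(nn.Module):
    """Classifier whose weights are re-sampled on every forward pass
    (assumption: a diagonal-Gaussian distribution over the weight matrix)."""
    def __init__(self, feat_dim, num_classes):
        super().__init__()
        self.mu = nn.Parameter(torch.randn(num_classes, feat_dim) * 0.01)
        self.log_sigma = nn.Parameter(torch.full((num_classes, feat_dim), -4.0))

    def forward(self, feats):
        if self.training:
            eps = torch.randn_like(self.mu)
            weight = self.mu + eps * self.log_sigma.exp()  # reparameterization trick
        else:
            weight = self.mu  # deterministic mean weights at test time
        return feats @ weight.t()

def fixmatch_style_loss(backbone, head, x_view1, x_view2, threshold=0.95):
    """Predict pseudo-labels on one view, keep only confident ones,
    and enforce them on the other view (FixMatch-style consistency)."""
    with torch.no_grad():
        probs = F.softmax(head(backbone(x_view1)), dim=1)
        conf, pseudo = probs.max(dim=1)
        mask = (conf >= threshold).float()
    logits = head(backbone(x_view2))
    return (F.cross_entropy(logits, pseudo, reduction="none") * mask).mean()

# Toy usage with random tensors standing in for two augmented views of
# the same unlabeled images (e.g. weak augmentation vs. style transfer).
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU())
head = StochasticClassifier(feat_dim=128, num_classes=7)
x_weak = torch.randn(8, 3, 32, 32)
x_style = torch.randn(8, 3, 32, 32)
loss = fixmatch_style_loss(backbone, head, x_weak, x_style)
loss.backward()
```

The stochastic head sketches the "stochastic modeling" ingredient (each forward pass sees a slightly different classifier, which can reduce overfitting to scarce labels), while the cross-view pseudo-labeling loss sketches the multi-view consistency idea; how StyleMatch combines and schedules these in practice is specified in the paper itself.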


Language: English