Best Sources Forward: Domain Generalization through Source-Specific Nets / Mancini, Massimiliano; Rota Bulò, Samuel; Caputo, Barbara; Ricci, Elisa. - (2018), pp. 1353-1357. (Paper presented at the 25th IEEE International Conference on Image Processing, ICIP 2018, held in Athens, Greece) [10.1109/ICIP.2018.8451318].

Best Sources Forward: Domain Generalization through Source-Specific Nets

Massimiliano Mancini; Barbara Caputo
2018

Abstract

A long-standing problem in visual object categorization is the ability of algorithms to generalize across different testing conditions. The problem has been formalized as a covariate shift between the probability distributions generating the training data (source) and the test data (target), and several domain adaptation methods have been proposed to address this issue. While these approaches have considered the single-source, single-target scenario, it is plausible to have multiple sources and to require adaptation to any possible target domain. This last scenario, named Domain Generalization (DG), is the focus of our work. Unlike previous DG methods, which learn domain-invariant representations from source data, we design a deep network with multiple domain-specific classifiers, each associated with a source domain. At test time we estimate the probabilities that a target sample belongs to each source domain and exploit them to optimally fuse the classifiers' predictions. To further improve the generalization ability of our model, we also introduce a domain-agnostic component supporting the final classifier. Experiments on two public benchmarks demonstrate the power of our approach.
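The fusion strategy described in the abstract lends itself to a compact illustration. Below is a minimal sketch, not the authors' released code, assuming a PyTorch setting: the class name `SourceSpecificNet`, the linear heads, and the equal weighting between the fused and domain-agnostic predictions are illustrative choices. Domain-specific classifiers are applied to shared features and combined using the estimated probability that the sample belongs to each source domain.

```python
# Minimal sketch (assumption: PyTorch; not the authors' code) of fusing
# source-specific classifiers with estimated domain-membership probabilities.
import torch
import torch.nn as nn


class SourceSpecificNet(nn.Module):
    def __init__(self, feat_dim: int, num_classes: int, num_sources: int):
        super().__init__()
        # One classification head per source domain.
        self.domain_heads = nn.ModuleList(
            [nn.Linear(feat_dim, num_classes) for _ in range(num_sources)]
        )
        # Predicts how likely a sample is to belong to each source domain.
        self.domain_predictor = nn.Linear(feat_dim, num_sources)
        # Domain-agnostic head supporting the final classifier.
        self.agnostic_head = nn.Linear(feat_dim, num_classes)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # Probability that each sample belongs to each source domain: (B, S).
        domain_probs = torch.softmax(self.domain_predictor(feats), dim=1)
        # Per-domain class scores: (B, S, C).
        per_domain = torch.stack([head(feats) for head in self.domain_heads], dim=1)
        # Fuse domain-specific predictions weighted by the domain probabilities: (B, C).
        fused = torch.einsum("bs,bsc->bc", domain_probs, per_domain)
        # Equal weighting with the domain-agnostic prediction is an assumption.
        return 0.5 * fused + 0.5 * self.agnostic_head(feats)
```

Here `feats` stands for features produced by a shared backbone (omitted), e.g. the module could be instantiated as `SourceSpecificNet(2048, 7, 3)` for 2048-dimensional features, seven classes, and three source domains.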
2018
25th IEEE International Conference on Image Processing, ICIP 2018
Computer architecture; Training; Computational modeling; Visualization; Semantics; Benchmark testing; Machine learning; Domain Generalization; Object Classification; Deep Learning
04 Publication in conference proceedings::04b Conference paper in volume
Files attached to this record
File  Size  Format
Mancini_Postprint_Best-Sources-Forward_2018.pdf

Open access

Note: https://ieeexplore.ieee.org/document/8451318
Type: Post-print (version following peer review and accepted for publication)
License: All rights reserved
Size: 1.15 MB
Format: Adobe PDF
Mancini_Best-Sources-Forward_2018.pdf

Archive administrators only

Type: Publisher's version (published version with the publisher's layout)
License: All rights reserved
Size: 2.16 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1189853
Citations
  • PMC: not available
  • Scopus: 102
  • Web of Science (ISI): 73