
Boosting Domain Adaptation by Discovering Latent Domains / Mancini, Massimiliano; Porzi, Lorenzo; Rota Bulò, Samuel; Caputo, Barbara; Ricci, Elisa. - (2018), pp. 3771-3780. (Paper presented at the 31st Meeting of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2018, held in Salt Lake City, United States).

Boosting Domain Adaptation by Discovering Latent Domains

Massimiliano Mancini; Lorenzo Porzi; Samuel Rota Bulò; Barbara Caputo; Elisa Ricci
2018

Abstract

Current Domain Adaptation (DA) methods based on deep architectures assume that the source samples arise from a single distribution. However, in practice most datasets can be regarded as mixtures of multiple domains. In these cases, exploiting single-source DA methods for learning target classifiers may lead to sub-optimal, if not poor, results. In addition, in many applications it is difficult to manually provide the domain labels for all source data points, i.e., latent domains should be automatically discovered. This paper introduces a novel Convolutional Neural Network (CNN) architecture which (i) automatically discovers latent domains in visual datasets and (ii) exploits this information to learn robust target classifiers. Our approach is based on the introduction of two main components, which can be embedded into any existing CNN architecture: (i) a side branch that automatically computes the assignment of a source sample to a latent domain and (ii) novel layers that exploit domain membership information to appropriately align the distribution of the CNN internal feature representations to a reference distribution. We test our approach on publicly-available datasets, showing that it outperforms state-of-the-art multi-source DA methods by a large margin.
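The alignment layers described in the abstract normalize internal features using statistics of the latent domains each sample softly belongs to. A minimal NumPy sketch of this idea is below; it is an illustrative reconstruction under stated assumptions, not the authors' implementation, and the function name `latent_domain_align` and its signature are hypothetical. Given features `x` and soft domain assignments `w` (e.g., produced by the side branch), each latent domain's weighted mean and variance are computed, and every sample is standardized by a mixture of its domains' statistics:

```python
import numpy as np

def latent_domain_align(x, w, eps=1e-5):
    """Hypothetical sketch of a multi-domain alignment layer.

    x: (n, c) feature matrix for a mini-batch.
    w: (n, k) soft assignments of each sample to k latent domains
       (rows sum to 1, e.g. softmax output of a domain-prediction branch).

    Each domain's mean/variance are estimated with the assignments as
    weights; each sample is normalized by a w-weighted mix of its
    domains' statistics, aligning features to a zero-mean, unit-variance
    reference distribution.
    """
    out = np.zeros_like(x, dtype=float)
    for d in range(w.shape[1]):
        wd = w[:, d:d + 1]                              # (n, 1) soft membership
        mass = wd.sum()
        if mass < eps:                                  # empty latent domain: skip
            continue
        mu = (wd * x).sum(axis=0) / mass                # weighted domain mean
        var = (wd * (x - mu) ** 2).sum(axis=0) / mass   # weighted domain variance
        out += wd * (x - mu) / np.sqrt(var + eps)       # per-domain normalization, mixed by w
    return out
```

With hard (one-hot) assignments this reduces to per-domain batch normalization; with soft assignments, samples near a domain boundary are normalized by a blend of statistics, which is what lets the discovered domains remain a differentiable, learnable quantity.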
2018
31st Meeting of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2018
convolutional neural network; computer vision; domain adaptation; deep learning
04 Conference proceedings publication::04b Conference paper in volume
Files attached to this item
File / Size / Format
Mancini_Postprint_Boosting-Domain_2018.pdf

Open access

Note: https://ieeexplore.ieee.org/document/8578495
Type: Post-print (version after peer review, accepted for publication)
License: All rights reserved
Size: 1.1 MB
Format: Adobe PDF
Mancini_Boosting-Domain_2018.pdf

Archive administrators only

Type: Publisher's version (published with the publisher's layout)
License: All rights reserved
Size: 730.55 kB
Format: Adobe PDF (contact the author)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1189877
Citations
  • Scopus: 112
  • Web of Science (ISI): 81