Hallucinating Agnostic Images to Generalize Across Domains

Carlucci, FM (first author; Investigation);
Russo, P (second author; Conceptualization);
Tommasi, T (penultimate author; Conceptualization);
Caputo, B (last author; Conceptualization)
2019

Abstract

The ability to generalize across visual domains is crucial for the robustness of artificial recognition systems. Although many training sources may be available in real contexts, access to even unlabeled target samples cannot be taken for granted, which makes standard unsupervised domain adaptation methods inapplicable in the wild. In this work we investigate how to exploit multiple sources by hallucinating a deep visual domain composed of images, possibly unrealistic, that maintain categorical knowledge while discarding specific source styles. The produced agnostic images are generated by a deep architecture that applies pixel-level adaptation to the original source data, guided by two adversarial domain-classifier branches at the image and feature levels. Our approach is conceived to learn only from source data, but it seamlessly extends to the use of unlabeled target samples. Remarkable results for both multi-source domain adaptation and domain generalization support the power of hallucinating agnostic images in this framework.
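The sketch below is a minimal, hypothetical illustration of the idea summarized in the abstract, not the authors' released implementation: a pixel-level adapter hallucinates "agnostic" images, two adversarial domain classifiers (one on the adapted pixels, one on the extracted features) push domain style out of the representation, and a category classifier preserves label information. It assumes a PyTorch setup; all module names, layer sizes, loss weights, and the use of gradient reversal as the adversarial mechanism are illustrative assumptions.

```python
# Hypothetical sketch of pixel adaptation with image- and feature-level
# adversarial domain classifiers. Shapes and architectures are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Gradient reversal: identity in the forward pass, negated gradient backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)


class PixelAdapter(nn.Module):
    """Image-to-image network producing 'agnostic' (possibly unrealistic) images."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)


class FeatureExtractor(nn.Module):
    """Small convolutional backbone producing a feature vector per image."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )

    def forward(self, x):
        return self.conv(x).flatten(1)


# Image-level domain classifier: predicts the source domain from the adapted pixels.
image_domain_clf = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 3),  # e.g. 3 source domains
)
# Feature-level domain classifier and category classifier on shared features.
feature_domain_clf = nn.Linear(64, 3)
category_clf = nn.Linear(64, 10)          # e.g. 10 object categories

adapter, extractor = PixelAdapter(), FeatureExtractor()

# One hypothetical training step on a fake multi-source batch.
images = torch.randn(8, 3, 32, 32)
class_labels = torch.randint(0, 10, (8,))
domain_labels = torch.randint(0, 3, (8,))

agnostic = adapter(images)                # hallucinated, domain-agnostic images
features = extractor(agnostic)

cls_loss = F.cross_entropy(category_clf(features), class_labels)
# Adversarial branches: gradient reversal makes adapter/extractor erase domain style
# while the domain classifiers try to recover it.
img_dom_loss = F.cross_entropy(image_domain_clf(grad_reverse(agnostic)), domain_labels)
feat_dom_loss = F.cross_entropy(feature_domain_clf(grad_reverse(features)), domain_labels)

total_loss = cls_loss + img_dom_loss + feat_dom_loss
total_loss.backward()
print(f"cls={cls_loss.item():.3f} img_dom={img_dom_loss.item():.3f} feat_dom={feat_dom_loss.item():.3f}")
```

In this reading, classification keeps categorical knowledge while the two reversed domain losses discard source-specific styles at both the pixel and feature level; in the domain generalization setting only source data enter these losses, and unlabeled target samples can be folded into the adversarial branches when available.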
2019
2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW)
deep learning; domain adaptation; domain generalization
04 Publication in conference proceedings::04b Conference paper in a volume
Hallucinating Agnostic Images to Generalize Across Domains / Carlucci, FM; Russo, P; Tommasi, T; Caputo, B. - (2019), pp. 3227-3234. (Paper presented at the 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), held in Seoul, Korea) [10.1109/ICCVW.2019.00403].
Files attached to this item

Carlucci_Hallucinating-Agnostic_2019.pdf
  Access: open access
  Type: Publisher's version (published version with the publisher's layout)
  License: All rights reserved
  Size: 378.27 kB
  Format: Adobe PDF

Carlucci_preprint_Hallucinating-Agnostic_2019.pdf.pdf
  Access: open access
  Type: Pre-print (manuscript submitted to the publisher, prior to peer review)
  License: Creative Commons
  Size: 1.76 MB
  Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11573/1658847
Citations
  • PubMed Central: not available
  • Scopus: 31
  • Web of Science (ISI): 22