Latent Space Translation via Semantic Alignment

Valentino Maiorca (co-first author); Luca Moschella (co-first author); Antonio Norelli; Marco Fumero; Emanuele Rodola' (last author)
2023

Abstract

While different neural models often exhibit latent spaces that are alike when exposed to semantically related data, this intrinsic similarity is not always immediately discernible. Towards a better understanding of this phenomenon, our work shows how representations learned from these neural modules can be translated between different pre-trained networks via simpler transformations than previously thought. An advantage of this approach is the ability to estimate these transformations using standard, well-understood algebraic procedures that have closed-form solutions. Our method directly estimates a transformation between two given latent spaces, thereby enabling effective stitching of encoders and decoders without additional training. We extensively validate the adaptability of this translation procedure in different experimental settings: across various trainings, domains, architectures (e.g., ResNet, CNN, ViT), and in multiple downstream tasks (classification, reconstruction). Notably, we show how it is possible to zero-shot stitch text encoders and vision decoders, or vice-versa, yielding surprisingly good classification performance in this multimodal setting.
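
The keywords below mention Procrustes analysis, which matches the "standard, well-understood algebraic procedures that have closed-form solutions" the abstract refers to. As a rough illustration only, here is a minimal NumPy sketch of orthogonal Procrustes alignment between two latent spaces over a shared set of anchor samples; variable names, the centering step, and the equal-dimensionality assumption are ours, not necessarily the paper's exact pipeline:

```python
# Minimal sketch of closed-form latent-space translation via
# orthogonal Procrustes alignment. Hypothetical setup: X and Y hold
# latents of the SAME n anchor inputs, encoded by the source and
# target networks, and both spaces have the same dimensionality d.
import numpy as np

def fit_orthogonal_translation(X, Y):
    """Estimate an orthogonal map R such that (X - mu_x) @ R ~= Y - mu_y.

    Closed-form solution of the orthogonal Procrustes problem:
    with A = X - mu_x, B = Y - mu_y and SVD A.T @ B = U S V^T,
    the minimizer of ||A R - B||_F over orthogonal R is R = U V^T.
    """
    x_mean, y_mean = X.mean(axis=0), Y.mean(axis=0)
    U, _, Vt = np.linalg.svd((X - x_mean).T @ (Y - y_mean))
    R = U @ Vt
    return R, x_mean, y_mean

def translate(z, R, x_mean, y_mean):
    """Map source-space latents z into the target space."""
    return (z - x_mean) @ R + y_mean

# Zero-shot stitching (illustrative pseudocode): encode with one
# pre-trained network, translate, then reuse another network's head
# without any retraining, e.g.
#   R, mu_x, mu_y = fit_orthogonal_translation(X_anchors, Y_anchors)
#   y_pred = target_classifier(translate(source_encoder(x), R, mu_x, mu_y))
```

Because the map is estimated in closed form from a handful of paired anchor encodings, no gradient-based training is needed to stitch the two modules together, which is what makes the stitching "zero-shot".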
2023
Thirty-seventh Conference on Neural Information Processing Systems
latent space translation; relative representation; Procrustes analysis; zero-shot stitching; latent communication; representation learning; manifold alignment; multimodal
04 Conference proceedings publication::04b Conference paper in volume
Latent Space Translation via Semantic Alignment / Maiorca, Valentino; Moschella, Luca; Norelli, Antonio; Fumero, Marco; Locatello, Francesco; Rodola', Emanuele. - (2023). (Paper presented at the Thirty-seventh Conference on Neural Information Processing Systems, held in New Orleans, Louisiana, United States of America.)
Files attached to this item
There are no files associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11573/1698842
Warning: the displayed data have not been validated by the university.
