
Latent communication in artificial neural networks / Moschella, Luca. - (2024 May 28).

Latent communication in artificial neural networks

MOSCHELLA, LUCA
28/05/2024

Abstract

As Neural Networks (NNs) permeate various scientific and industrial domains, understanding the universality and reusability of their representations becomes crucial. At their core, these networks create intermediate neural representations of the input data, referred to as latent spaces, and subsequently leverage them to perform specific downstream tasks. This dissertation focuses on the universality and reusability of neural representations. Do the latent representations crafted by an NN remain exclusive to a particular trained instance, or can they generalize across models, adapting to factors such as randomness during training, model architecture, or even data domain? This adaptive quality introduces the notion of Latent Communication – a phenomenon describing when representations can be unified or reused across neural spaces. A salient observation from our research is the emergence of similarities in latent representations, even when these originate from distinct or seemingly unrelated NNs. By exploiting a partial correspondence between the two data distributions that establishes a semantic link, we found that these representations can either be projected into a universal representation (Moschella*, Maiorca*, et al., 2023), coined Relative Representation, or be directly translated from one space to another (Maiorca* et al., 2023). Intriguingly, this holds even when the transformation relating the spaces is unknown (Cannistraci, Moschella, Fumero, et al., 2024) and when the semantic bridge between them is minimal (Cannistraci, Moschella, Maiorca, et al., 2023). Latent Communication thus bridges independently trained NNs, irrespective of their training regimen, architecture, or the data modality they were trained on – as long as the semantic content of the data stays the same (e.g., images and their captions). This holds for generation, classification, and retrieval downstream tasks; in supervised, weakly supervised, and unsupervised settings; and across various data modalities, including images, text, audio, and graphs – showcasing the universality of the Latent Communication phenomenon. From a practical standpoint, our research makes it possible to repurpose and reuse models, circumventing the need for resource-intensive retraining; to transfer knowledge across them; and to evaluate downstream performance directly in the latent space. Indeed, several works have leveraged the insights from our Latent Communication research (Kiefer and Buckley, 2024; Z. Wu, Y. Wu, and Mou, 2024; Jian et al., 2023; Norelli, Fumero, et al., 2023; G. Wang et al., 2023). For example, relative representations have been instrumental in attaining state-of-the-art results in Weakly Supervised Vision-and-Language Pretraining (C. Chen et al., 2023). Reflecting its significance, Moschella*, Maiorca*, et al. (2023) was presented orally at ICLR 2023, and Latent Communication was a central theme of the UniReps: Unifying Representations in Neural Models Workshop at NeurIPS 2023, co-organized by our team.
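
The toy sketch below is a minimal illustration of the two mechanisms the abstract mentions, not code from the thesis: projecting two latent spaces onto cosine similarities with a shared set of anchors (the relative-representation idea) and estimating a direct map between the spaces from a few matched anchor pairs. The function names, data shapes, and the use of orthogonal Procrustes for the translation step are assumptions made for illustration only.

# Minimal, self-contained sketch (NumPy); hypothetical names and toy data.
import numpy as np


def relative_representation(X: np.ndarray, anchors: np.ndarray) -> np.ndarray:
    """Map absolute embeddings X (n, d) to cosine similarities with shared
    anchor embeddings (k, d), giving (n, k) coordinates that are invariant
    to rotations and rescalings of the latent space."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    An = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    return Xn @ An.T


def fit_orthogonal_translation(src_anchors: np.ndarray, tgt_anchors: np.ndarray) -> np.ndarray:
    """Estimate an orthogonal map R from source to target space using a
    partial correspondence (matched anchor pairs), via orthogonal Procrustes."""
    U, _, Vt = np.linalg.svd(src_anchors.T @ tgt_anchors)
    return U @ Vt


# Toy setting: two "independently trained" encoders of the same data are
# simulated as an unknown rotation of a common latent space.
rng = np.random.default_rng(0)
Z = rng.normal(size=(100, 16))                   # shared semantic content
Q, _ = np.linalg.qr(rng.normal(size=(16, 16)))   # unknown orthogonal transformation
X_a, X_b = Z, Z @ Q                              # latent spaces of model A and model B
idx = rng.choice(100, size=20, replace=False)    # parallel anchors: the semantic bridge

# (1) Relative representations: both spaces project onto (nearly) identical coordinates.
rel_a = relative_representation(X_a, X_a[idx])
rel_b = relative_representation(X_b, X_b[idx])
print(np.allclose(rel_a, rel_b, atol=1e-6))      # expected: True

# (2) Direct translation: map model A's space onto model B's from the anchor pairs alone.
R = fit_orthogonal_translation(X_a[idx], X_b[idx])
print(np.allclose(X_a @ R, X_b, atol=1e-6))      # expected: True

In this idealized setup the two spaces differ by an exact orthogonal transformation, so both checks succeed; with real encoders the correspondence is only approximate, which is the setting the dissertation studies.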
Locatello, Francesco
Files attached to this item
File: Tesi_dottorato_Moschella.pdf (open access)
Note: complete thesis
Type: Doctoral thesis
License: Creative Commons
Size: 15.54 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11573/1711827