Latent Graph Inference (LGI) has relaxed the reliance of Graph Neural Networks (GNNs) on a given graph topology by learning it dynamically. However, most LGI methods assume that a (noisy, incomplete, improvable, ...) input graph is available to rewire, and they can only learn regular graph topologies. In the wake of the success of Topological Deep Learning (TDL), we study Latent Topology Inference (LTI) for learning higher-order cell complexes (with sparse and irregular topology) that describe multi-way interactions between data points. To this end, we introduce the Differentiable Cell Complex Module (DCM), a novel learnable function that computes cell probabilities in the complex to improve the downstream task. We show how to integrate the DCM with cell complex message-passing network layers and train it end-to-end, thanks to a two-step inference procedure that avoids an exhaustive search across all possible cells in the input, thus maintaining scalability. We test our model on several homophilic and heterophilic graph datasets, where it outperforms other state-of-the-art techniques, offering significant improvements especially when no input graph is provided.
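The abstract above describes learning cell probabilities end-to-end. A minimal, hypothetical sketch of the underlying idea (differentiable selection via Gumbel perturbation of learnable edge logits, a standard relaxation technique and not the authors' actual implementation; the function name and signature are invented for illustration) could look like:

```python
import numpy as np

def gumbel_topk_edge_probs(scores, k, tau=1.0, rng=None):
    """Illustrative sketch: turn learnable edge logits into a differentiable
    probability distribution, then pick the k most probable edges.

    scores: (E,) array of learnable edge logits (one per candidate edge)
    k: number of edges to keep in the hard selection
    tau: temperature of the softmax relaxation (lower = closer to discrete)
    """
    rng = np.random.default_rng(rng)
    # Gumbel noise makes the argmax of (scores + g) a sample from softmax(scores)
    g = -np.log(-np.log(rng.uniform(1e-9, 1.0, size=scores.shape)))
    perturbed = (scores + g) / tau
    # Soft relaxation: a softmax yields differentiable edge probabilities
    shifted = perturbed - perturbed.max()
    probs = np.exp(shifted)
    probs /= probs.sum()
    # Hard selection of the k most probable edges (in a trainable model this
    # would use a straight-through estimator so gradients flow to `scores`)
    keep = np.argsort(probs)[-k:]
    return probs, keep
```

In a full model, analogous sampling would be applied hierarchically (edges first, then higher-order cells among the selected edges), which is what keeps the search away from enumerating all possible cells.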

From latent graph to latent topology inference: differentiable cell complex module / Battiloro, C.; Spinelli, I.; Telyatnikov, L.; Bronstein, M.; Scardapane, S.; Di Lorenzo, P. - (2024). (Paper presented at the 12th International Conference on Learning Representations, held in Vienna, Austria.)

From latent graph to latent topology inference: differentiable cell complex module

Battiloro C. (co-first author); Spinelli I. (co-first author); Telyatnikov L.; Scardapane S. (penultimate author); Di Lorenzo P. (last author)

2024

2024
12th International Conference on Learning Representations
graph neural networks; topology; cell complexes
04 Conference proceedings publication::04b Conference paper in volume
Files attached to this record
Battiloro_Latent_2024.pdf

Access: archive administrators only (contact the author)
Type: Post-print (version after peer review, accepted for publication)
License: All rights reserved
Size: 2.36 MB
Format: Adobe PDF
Battiloro_preprint_Latent_2023.pdf

Access: open access
Type: Pre-print (manuscript submitted to the publisher, before peer review)
License: All rights reserved
Size: 1.88 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this record: https://hdl.handle.net/11573/1722589
Citations
  • Scopus: 1