A Rewiring Contrastive Patch PerformerMixer Framework for Graph Representation Learning / Sun, Z.; Harit, A.; Cristea, A. I.; Wang, J.; Lio, P. - (2023), pp. 5930-5939. (Paper presented at the 2023 IEEE International Conference on Big Data, BigData 2023, held in Sorrento, Italy) [10.1109/BigData59044.2023.10386951].

A Rewiring Contrastive Patch PerformerMixer Framework for Graph Representation Learning

Lio P.
2023

Abstract

Integrating transformers with graph representation learning has emerged as a research focal point. However, recent studies have shown that positional encoding in Transformers does not capture enough structural information between nodes. Additionally, existing graph neural network (GNN) models suffer from over-squashing, which impedes the retention of information from distant nodes. To address these issues, we transform graphs into regular structures, such as tokens, to enhance positional understanding and leverage the strengths of transformers. Inspired by the vision transformer (ViT) model, we propose partitioning graphs into patches and applying GNN models to obtain fixed-size vectors. Notably, our approach adopts contrastive learning to capture in-depth graph structure and incorporates additional topological information via Ricci curvature, alleviating the over-squashing problem by attenuating the effects of negatively curved edges while preserving the original graph structure. Unlike existing graph rewiring methods that directly modify the graph by adding or removing edges, this approach is potentially better suited to applications such as molecular learning, where structural preservation is important. Our pipeline then introduces the PerformerMixer, a transformer variant with linear complexity that ensures efficient computation. Evaluations on real-world benchmarks such as Peptides-func demonstrate our framework's superior performance, and the framework achieves 3-WL expressiveness.
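To make the pipeline concrete, the sketch below illustrates three of the ingredients the abstract names: curvature-based attenuation of negatively curved edges (soft rewiring that leaves the topology intact), GNN-style encoding of graph patches into fixed-size tokens, and Performer-style attention with linear complexity for mixing the patch tokens. This is a minimal PyTorch sketch under our own assumptions; the names `attenuate_by_curvature`, `PatchEncoder`, and `PerformerAttention`, and all implementation details, are hypothetical and do not come from the authors' code.

```python
import torch
import torch.nn as nn


def attenuate_by_curvature(edge_weight, curvature, beta=1.0):
    # Soft rewiring: gate each edge weight by its (e.g. Ollivier-Ricci)
    # curvature estimate. Strongly negative curvature -- the usual
    # over-squashing bottleneck -- drives the weight toward zero, while
    # no edge is added or removed, so the original topology is preserved.
    return edge_weight * torch.sigmoid(beta * curvature)


class PatchEncoder(nn.Module):
    """Turn each graph patch (a node partition, e.g. from METIS) into one
    fixed-size token: mean-pool node features per patch, then apply an MLP
    standing in for a small GNN."""

    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hid_dim), nn.ReLU(), nn.Linear(hid_dim, hid_dim)
        )

    def forward(self, x, patch_index, num_patches):
        # x: (num_nodes, in_dim); patch_index: (num_nodes,) patch id per node
        pooled = x.new_zeros(num_patches, x.size(1))
        pooled.index_add_(0, patch_index, x)
        counts = torch.bincount(patch_index, minlength=num_patches)
        pooled = pooled / counts.clamp(min=1).unsqueeze(1).float()
        return self.mlp(pooled)  # (num_patches, hid_dim)


class PerformerAttention(nn.Module):
    """Linear-complexity self-attention over patch tokens via positive
    random features (a simplified, fixed-projection FAVOR+ variant)."""

    def __init__(self, dim, n_features=64):
        super().__init__()
        self.to_qkv = nn.Linear(dim, 3 * dim)
        self.register_buffer("omega", torch.randn(n_features, dim))

    def _phi(self, x):
        # Positive feature map approximating the softmax kernel.
        x = x * x.size(-1) ** -0.25  # standard softmax-kernel scaling
        proj = x @ self.omega.t()
        return torch.exp(proj - 0.5 * x.pow(2).sum(-1, keepdim=True)) / (
            self.omega.size(0) ** 0.5
        )

    def forward(self, tokens):  # tokens: (num_patches, dim)
        q, k, v = self.to_qkv(tokens).chunk(3, dim=-1)
        q, k = self._phi(q), self._phi(k)
        kv = k.t() @ v  # (n_features, dim): cost linear in num_patches
        z = (q @ k.sum(dim=0)).unsqueeze(-1)  # attention row normalizer
        return (q @ kv) / (z + 1e-6)


# Toy usage: 10 nodes split into 3 patches, mixed with linear attention.
x = torch.randn(10, 8)
patch_index = torch.tensor([0, 0, 0, 1, 1, 1, 1, 2, 2, 2])
tokens = PatchEncoder(8, 16)(x, patch_index, num_patches=3)
mixed = PerformerAttention(16)(tokens)  # (3, 16) patch embeddings
```

The sigmoid gate is one plausible way to realize the attenuation idea from the abstract: unlike rewiring by edge insertion or deletion, it preserves every original edge, which matters in settings such as molecular learning where the bond structure must be kept intact.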
2023 IEEE International Conference on Big Data, BigData 2023
Contrastive learning; Graph representation learning; Transformer
04 Publication in conference proceedings::04b Conference paper in volume
Files attached to this product

File: Sun_A-Rewiring_2023.pdf
Access: restricted (repository managers only)
Type: Publisher's version (published with the publisher's layout)
License: All rights reserved
Size: 627.07 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1726847
Citations
  • Scopus 0