
Do We Need Anisotropic Graph Neural Networks? / Tailor, S. A.; Opolka, F. L.; Lio, P.; Lane, N. D. - (2022). (Paper presented at the International Conference on Learning Representations, held online).

Do We Need Anisotropic Graph Neural Networks?

Lio P.;
2022

Abstract

Common wisdom in the graph neural network (GNN) community dictates that anisotropic models, in which messages sent between nodes are a function of both the source and target node, are required to achieve state-of-the-art performance. Benchmarks to date have demonstrated that these models perform better than comparable isotropic models, where messages are a function of the source node only. In this work we provide empirical evidence challenging this narrative: we propose an isotropic GNN, which we call Efficient Graph Convolution (EGC), that consistently outperforms comparable anisotropic models, including the popular GAT and PNA architectures, by using spatially-varying adaptive filters. In addition to raising important questions for the GNN community, our work has significant real-world implications for efficiency. EGC achieves higher model accuracy, with lower memory consumption and latency, along with characteristics suited to accelerator implementation, while being a drop-in replacement for existing architectures. As an isotropic model, it requires memory proportional to the number of vertices in the graph, O(V); in contrast, anisotropic models require memory proportional to the number of edges, O(E). We demonstrate that EGC outperforms existing approaches across six large and diverse benchmark datasets, and conclude by discussing the questions that our work raises for the community going forward. Code and pretrained models for our experiments are provided at https://github.com/shyam196/egc.
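The distinction the abstract draws between the two model families can be made concrete with a small sketch. The following PyTorch snippet is purely illustrative and is not the authors' EGC implementation; the function names and the simple attention-style weighting are assumptions chosen only to contrast the two aggregation patterns and the O(V) versus O(E) intermediate memory they require.

```python
import torch

# Toy contrast between isotropic and anisotropic message passing.
# x is a [V, F] node feature matrix; edge_index is a [2, E] tensor
# of (source, target) pairs. This is a sketch, not the EGC layer.

def isotropic_aggregate(x, edge_index):
    src, dst = edge_index
    out = torch.zeros_like(x)
    # Messages depend only on the source node, so node features can be
    # transformed once per node and gathered: O(V) extra memory.
    out.index_add_(0, dst, x[src])
    return out

def anisotropic_aggregate(x, edge_index, att_src, att_dst):
    src, dst = edge_index
    # Per-edge scores depend on both endpoints, so the scores and the
    # weighted messages are materialised per edge: O(E) extra memory.
    scores = (x[src] * att_src).sum(-1) + (x[dst] * att_dst).sum(-1)
    weights = torch.sigmoid(scores).unsqueeze(-1)
    out = torch.zeros_like(x)
    out.index_add_(0, dst, weights * x[src])
    return out

# Example: 4 nodes, 3 edges, 8 features per node.
x = torch.randn(4, 8)
edge_index = torch.tensor([[0, 1, 2], [1, 2, 3]])
att = torch.randn(8), torch.randn(8)
print(isotropic_aggregate(x, edge_index).shape)          # torch.Size([4, 8])
print(anisotropic_aggregate(x, edge_index, *att).shape)  # torch.Size([4, 8])
```

In the isotropic case the transformed node features are simply gathered along edges, whereas the anisotropic case must compute a score and a weighted message for every edge, which is where the O(E) memory and latency cost discussed in the abstract arises.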
2022
International Conference on Learning Representations
Adaptive filtering; Anisotropy; Graph neural networks; Large dataset; Memory architecture; Network architecture
04 Conference proceedings publication::04b Conference paper in volume
Files attached to this item
File: Tailor_Do-we-need_2022.pdf
Access: open access
Note: https://openreview.net/pdf?id=hl9ePdHO4_s
Type: Publisher's version (published with the publisher's layout)
License: Creative Commons
Size: 426.74 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1721185
Citations
  • PMC: n/a
  • Scopus: 10
  • Web of Science (ISI): n/a