
Adaptive propagation graph convolutional network / Spinelli, I; Scardapane, S; Uncini, A. - In: IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS. - ISSN 2162-2388. - (2020), pp. 1-6. [10.1109/TNNLS.2020.3025110]

Adaptive propagation graph convolutional network

Spinelli I; Scardapane S; Uncini A
2020

Abstract

Graph convolutional networks (GCNs) are a family of neural network models that perform inference on graph data by interleaving vertex-wise operations and message-passing exchanges across nodes. Concerning the latter, two key questions arise: 1) how to design a differentiable exchange protocol (e.g., a one-hop Laplacian smoothing in the original GCN) and 2) how to characterize the tradeoff in complexity with respect to the local updates. In this brief, we show that state-of-the-art results can be achieved by adapting the number of communication steps independently at every node. In particular, we endow each node with a halting unit (inspired by Graves’ adaptive computation time [1]) that, after every exchange, decides whether to continue communicating or not. We show that the proposed adaptive propagation GCN (AP-GCN) achieves superior or similar results to the best proposed models so far on a number of benchmarks, while requiring a small overhead in terms of additional parameters. We also investigate a regularization term to enforce an explicit tradeoff between communication and accuracy. The code for the AP-GCN experiments is released as an open-source library.
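The per-node halting mechanism summarized in the abstract can be sketched roughly as follows. This is an illustrative reconstruction of an ACT-style adaptive propagation step, not the authors' released implementation: the function name `adaptive_propagation`, the sigmoid halting unit with parameters `w` and `b`, and the threshold `eps` are all assumptions of this sketch.

```python
import numpy as np

def adaptive_propagation(H, A_hat, w, b, max_steps=10, eps=0.01):
    """ACT-style propagation sketch: every node halts independently.

    H      : (N, F) node features after the vertex-wise update
    A_hat  : (N, N) normalized propagation operator (e.g., Laplacian smoothing)
    w, b   : hypothetical parameters of a per-node sigmoid halting unit
    """
    N = H.shape[0]
    cum = np.zeros(N)              # cumulative halting probability per node
    out = np.zeros_like(H)         # output: weighted mix of intermediate states
    active = np.ones(N, dtype=bool)  # nodes still communicating
    state = H.copy()
    for _ in range(max_steps):
        state = A_hat @ state                    # one message-passing exchange
        p = 1.0 / (1.0 + np.exp(-(state @ w + b)))  # halting probabilities
        # nodes whose cumulative probability would exceed 1 - eps halt now,
        # spending their remaining probability mass on the current state
        halt = active & (cum + p >= 1.0 - eps)
        p = np.where(halt, 1.0 - cum, p)
        out += (p * active)[:, None] * state
        cum += p * active
        active &= ~halt
        if not active.any():
            break
    # nodes that never halted assign their leftover mass to the last state
    out += ((1.0 - cum) * active)[:, None] * state
    return out
```

Per node, the weights placed on the intermediate states sum to one, so the output is a convex combination of the states visited before that node halted; nodes that halt early simply stop influencing (and being influenced by) further exchanges.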
2020
convolutional network; graph data; graph neural network (GNN); node classification
01 Journal publication::01a Journal article
Files attached to this item

File: Spinelli_post-print_Adaptive_2020.pdf (restricted to repository managers only)
Type: Post-print document (version subsequent to peer review and accepted for publication)
License: All rights reserved (Tutti i diritti riservati)
Size: 1.14 MB
Format: Adobe PDF
Access: Contact the author

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11573/1486183
Citazioni
  • PMC: ND
  • Scopus: 37
  • Web of Science: 35