Global Concept-Based Interpretability for Graph Neural Networks via Neuron Analysis / Xuanyuan, H.; Barbiero, P.; Georgiev, D.; Magister, L. C.; Lio, P. - 37 (2023), pp. 10675-10683. (Paper presented at the 37th AAAI Conference on Artificial Intelligence, AAAI 2023, held in Washington, USA) [DOI: 10.1609/aaai.v37i9.26267].

Global Concept-Based Interpretability for Graph Neural Networks via Neuron Analysis

Lio P.
2023

Abstract

Graph neural networks (GNNs) are highly effective on a variety of graph-related tasks; however, they lack interpretability and transparency. Current explainability approaches are typically local and treat GNNs as black boxes: they do not look inside the model, inhibiting human trust in the model and its explanations. Motivated by the ability of neurons to detect high-level semantic concepts in vision models, we perform a novel analysis of the behaviour of individual GNN neurons to answer questions about GNN interpretability. We propose a novel approach for producing global explanations for GNNs using neuron-level concepts, enabling practitioners to have a high-level view of the model. Specifically, (i) to the best of our knowledge, this is the first work to show that GNN neurons act as concept detectors and align strongly with concepts formulated as logical compositions of node degree and neighbourhood properties; (ii) we quantitatively assess the importance of detected concepts, and identify a trade-off between training duration and neuron-level interpretability; (iii) we demonstrate that our global explainability approach has advantages over the current state of the art: we can disentangle the explanation into individual interpretable concepts backed by logical descriptions, which reduces the potential for bias and improves user-friendliness.
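To make the neuron-level concept idea concrete, the following is a minimal sketch of how one might score the alignment between a single neuron's activations and a logical concept composed of node degree and neighbourhood properties. Everything here is an illustrative assumption rather than the paper's actual pipeline: the graph is synthetic, the activations are random stand-ins for a trained GNN's hidden layer, and the top-quantile threshold, the intersection-over-union (IoU) score, and the concept itself (degree >= 3 with a high-degree neighbour) are made up for the example.

import networkx as nx
import numpy as np

# Synthetic graph and random stand-in activations (assumption: in practice
# these would be one hidden neuron's per-node activations from a trained GNN).
rng = np.random.default_rng(0)
G = nx.barabasi_albert_graph(100, 2, seed=0)
activations = rng.normal(size=G.number_of_nodes())

# Binarise the neuron: it "fires" on the top 20% most-activated nodes.
threshold = np.quantile(activations, 0.8)
neuron_mask = activations >= threshold

# A hypothetical logical concept over degree and neighbourhood properties:
# degree >= 3 AND at least one neighbour that is a hub (degree >= 8).
def concept(node):
    return G.degree[node] >= 3 and any(G.degree[v] >= 8 for v in G[node])

concept_mask = np.array([concept(n) for n in G.nodes])

# Alignment as intersection-over-union of the two node sets: a high score
# means the neuron fires on roughly the nodes satisfying the concept.
intersection = np.logical_and(neuron_mask, concept_mask).sum()
union = np.logical_or(neuron_mask, concept_mask).sum()
iou = intersection / union if union else 0.0
print(f"neuron/concept IoU alignment: {iou:.3f}")

In a real analysis one would sweep many such logical formulas over every neuron and keep, per neuron, the best-aligned concept; the IoU choice here follows the network-dissection tradition in vision models and is only a plausible stand-in for the paper's own metric.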
Year: 2023
Conference: 37th AAAI Conference on Artificial Intelligence, AAAI 2023
Keywords: Economic and social effects; Graph neural networks; Semantics
Publication type: 04 Publication in conference proceedings::04b Conference paper in volume
Files attached to this item

File: Xuanyuan_Global-Concept-Based_2023.pdf
Access: restricted to archive managers (contact the author)
Type: Publisher's version (published version with the publisher's layout)
License: All rights reserved
Size: 1.98 MB
Format: Adobe PDF

File: Xuanyuan_preprint_Global-Concept-Based_2023.pdf
Access: open access
Note: DOI: https://doi.org/10.1609/aaai.v37i9.26267
Type: Preprint (manuscript submitted to the publisher, prior to peer review)
License: Creative Commons
Size: 3.04 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1725235
Citations
  • PMC: ND
  • Scopus: 10
  • Web of Science: 7