
Concept Distillation in Graph Neural Networks

Federico Siciliano; Fabrizio Silvestri; Pietro Liò
2023

Abstract

The opaque reasoning of Graph Neural Networks induces a lack of human trust. Existing graph network explainers attempt to address this issue by providing post-hoc explanations; however, they fail to make the model itself more interpretable. To fill this gap, we introduce the Concept Distillation Module, the first differentiable concept-distillation approach for graph networks. The proposed approach is a layer that can be plugged into any graph network to make it explainable by design, by first distilling graph concepts from the latent space and then using these to solve the task. Our results demonstrate that this approach allows graph networks to: (i) attain model accuracy comparable with their equivalent vanilla versions, (ii) distill meaningful concepts achieving 4.8% higher concept completeness and 36.5% lower purity scores on average, (iii) provide high-quality concept-based logic explanations for their predictions, and (iv) support effective interventions at test time: these can increase human trust as well as improve model performance.
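
As an illustration of the plug-in idea described in the abstract, below is a minimal, hypothetical PyTorch sketch, not the paper's implementation: a concept layer soft-assigns each node embedding to learnable concept prototypes, and the task head then predicts from those concept scores alone. The names ConceptLayer and ConceptGNN, the prototype-distance formulation, and the single mean-aggregation encoder are illustrative assumptions.

# Hypothetical sketch, not the paper's code: node embeddings are soft-assigned
# to learnable concept prototypes, and the task is solved from concept scores.
import torch
import torch.nn as nn


class ConceptLayer(nn.Module):
    """Maps node embeddings to a distribution over n_concepts prototypes."""

    def __init__(self, embedding_dim, n_concepts, temperature=1.0):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(n_concepts, embedding_dim))
        self.temperature = temperature

    def forward(self, node_embeddings):
        # Squared distance to each prototype; softmax yields soft concept scores.
        dists = torch.cdist(node_embeddings, self.prototypes) ** 2
        return torch.softmax(-dists / self.temperature, dim=-1)


class ConceptGNN(nn.Module):
    """Toy graph classifier: message passing -> concept layer -> task head."""

    def __init__(self, in_dim, hidden_dim, n_concepts, n_classes):
        super().__init__()
        self.encoder = nn.Linear(in_dim, hidden_dim)
        self.concepts = ConceptLayer(hidden_dim, n_concepts)
        self.head = nn.Linear(n_concepts, n_classes)  # reads only concept scores

    def forward(self, x, adj):
        # One round of mean-style aggregation stands in for any GNN encoder.
        h = torch.relu(self.encoder(adj @ x))
        node_concepts = self.concepts(h)                           # [n_nodes, n_concepts]
        graph_concepts = node_concepts.mean(dim=0, keepdim=True)   # graph-level pooling
        return self.head(graph_concepts), node_concepts


# Usage on a random 5-node graph with 8-dimensional node features.
x = torch.randn(5, 8)
adj = torch.eye(5)  # placeholder adjacency (self-loops only)
logits, node_concepts = ConceptGNN(8, 16, n_concepts=4, n_classes=2)(x, adj)

Under these assumptions, inspecting node_concepts would surface which concept each node activates, and overriding graph_concepts at prediction time would correspond to the test-time interventions mentioned in the abstract.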
2023
1st World Conference on eXplainable Artificial Intelligence, xAI 2023
Explainability; Concepts; Graph Neural Networks
04 Conference proceedings publication::04b Conference paper in volume
Concept Distillation in Graph Neural Networks / Magister, Lucie Charlotte; Barbiero, Pietro; Kazhdan, Dmitry; Siciliano, Federico; Ciravegna, Gabriele; Silvestri, Fabrizio; Jamnik, Mateja; Liò, Pietro. - 1903:(2023), pp. 233-255. (Paper presented at the 1st World Conference on eXplainable Artificial Intelligence, xAI 2023, held in Lisbon, Portugal) [10.1007/978-3-031-44070-0_12].
Files attached to this item
File / Size / Format
Magister_pretprint_Concecpt_2023.pdf.pdf

Open access

Note: https://link.springer.com/chapter/10.1007/978-3-031-44070-0_12
Type: Pre-print (manuscript submitted to the publisher, prior to peer review)
License: All rights reserved
Size: 2.7 MB
Format: Adobe PDF
Magister_Concecpt_2023.pdf

Restricted to repository administrators

Type: Publisher's version (published version with the publisher's layout)
License: All rights reserved
Size: 22.99 MB
Format: Adobe PDF
Contact the author

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1700473
Citations
  • PMC: ND
  • Scopus: 7
  • Web of Science (ISI): 3