
Algorithmic Concept-Based Explainable Reasoning / Georgiev, D.; Barbiero, P.; Kazhdan, D.; Velickovic, P.; Lio, P. - 36:(2022), pp. 6685-6693. (Paper presented at the National Conference of the American Association for Artificial Intelligence, held virtually/online, 22 February 2022 through 1 March 2022).

Algorithmic Concept-Based Explainable Reasoning

Lio P.
2022

Abstract

Recent research on graph neural network (GNN) models has successfully applied GNNs to classical graph algorithms and combinatorial optimisation problems. This has numerous benefits, such as allowing algorithms to be applied when their preconditions are not satisfied, or reusing learned models when sufficient training data is not available or cannot be generated. Unfortunately, a key hindrance of these approaches is their lack of explainability, since GNNs are black-box models that cannot be interpreted directly. In this work, we address this limitation by applying existing work on concept-based explanations to GNN models. We introduce concept-bottleneck GNNs, which rely on a modification to the GNN readout mechanism. Using three case studies we demonstrate that: (i) our proposed model is capable of accurately learning concepts and extracting propositional formulas based on the learned concepts for each target class; (ii) our concept-based GNN models achieve performance comparable to state-of-the-art models; (iii) we can derive global graph concepts without explicitly providing any supervision on graph-level concepts.
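The concept-bottleneck readout the abstract describes can be sketched roughly as follows. This is a minimal NumPy illustration, not the authors' implementation: all names, dimensions, and the mean-pooling choice are hypothetical stand-ins. The idea is that node embeddings are first squashed into a small vector of interpretable concept activations, and the task head sees only those concepts.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical dimensions: nodes, embedding size, concepts, output classes.
n_nodes, d_embed, n_concepts, n_classes = 5, 8, 4, 3

# Node embeddings as produced by some message-passing GNN
# (a random stand-in here, since the GNN itself is not the point).
H = rng.normal(size=(n_nodes, d_embed))

# Concept bottleneck: each node embedding is mapped to concept
# activations in [0, 1] that are meant to be human-interpretable.
W_c = rng.normal(size=(d_embed, n_concepts))
node_concepts = sigmoid(H @ W_c)             # shape (n_nodes, n_concepts)

# Modified readout: aggregate node-level concepts into a single
# graph-level concept vector (mean pooling as one possible choice).
graph_concepts = node_concepts.mean(axis=0)  # shape (n_concepts,)

# The task head reads only the concept vector, so a prediction can be
# traced back to (and explained by) the learned concepts.
W_y = rng.normal(size=(n_concepts, n_classes))
logits = graph_concepts @ W_y
pred = int(np.argmax(logits))
```

Because every prediction is a function of `graph_concepts` alone, one can inspect which concepts fire for each class and, as the paper does, extract propositional formulas over them.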
2022
National Conference of the American Association for Artificial Intelligence
Combinatorial optimization; Neural network models
04 Conference proceedings publication::04b Conference paper in volume
Files attached to this product
File: Georgiev_Algorithmic_2022.pdf (archive administrators only)
Type: Publisher's version (published with the publisher's layout)
License: All rights reserved
Size: 7.24 MB
Format: Adobe PDF
Contact the author

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1727358
Citations
  • PMC: not available
  • Scopus: 10
  • Web of Science (ISI): 6