Existing methods for interpreting predictions from Graph Neural Networks (GNNs) have primarily focused on generating subgraphs that are especially relevant for a particular prediction. However, such methods do not provide a clear opportunity for recourse: given a prediction, we want to understand how the prediction can be changed in order to achieve a more desirable outcome. In this work, we propose a method for generating counterfactual (CF) explanations for GNNs: the minimal perturbation to the input (graph) data such that the prediction changes. Using only edge deletions, we find that our method, CF-GNNExplainer, can generate CF explanations for the majority of instances across three widely used datasets for GNN explanations, while removing less than 3 edges on average, with at least 94% accuracy. This indicates that CF-GNNExplainer primarily removes edges that are crucial for the original predictions, resulting in minimal CF explanations.
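The abstract's core idea — find a minimal set of edge deletions that flips a prediction — can be illustrated with a toy sketch. Note that this is not the paper's method: CF-GNNExplainer learns a differentiable perturbation mask over the adjacency matrix, whereas the `predict` function (a stand-in "classifier" that labels a node by the majority label of its neighbors) and the greedy deletion loop below are purely illustrative assumptions.

```python
def predict(node, edges, labels):
    """Toy stand-in for a GNN: classify a node by the majority label of its neighbors."""
    neigh = [v for u, v in edges if u == node] + [u for u, v in edges if v == node]
    votes = {}
    for n in neigh:
        votes[labels[n]] = votes.get(labels[n], 0) + 1
    return max(votes, key=votes.get) if votes else labels[node]

def counterfactual_by_deletion(node, edges, labels):
    """Greedily delete edges incident to `node` until the prediction flips.

    Returns the list of deleted edges (the CF explanation), or None if
    deletions alone cannot change the prediction.
    """
    original = predict(node, edges, labels)
    current = list(edges)
    deleted = []
    while predict(node, current, labels) == original:
        incident = [e for e in current if node in e]
        if not incident:
            return None  # no counterfactual reachable via edge deletions
        # Naive choice: drop the first incident edge. A real search would
        # pick the edge whose removal changes the prediction the most.
        e = incident[0]
        current.remove(e)
        deleted.append(e)
    return deleted

labels = {0: "A", 1: "B", 2: "B", 3: "A"}
edges = [(0, 1), (0, 2), (0, 3)]
print(predict(0, edges, labels))                    # "B": two B-neighbors vs one A
print(counterfactual_by_deletion(0, edges, labels)) # [(0, 1), (0, 2)]
```

Deleting the two edges to B-labeled neighbors leaves node 0 with only its A-labeled neighbor, flipping the prediction — the deleted edge set is the counterfactual explanation in the sense the abstract describes.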
Lucic, Ana; ter Hoeve, Maartje; Tolomei, Gabriele; de Rijke, Maarten; Silvestri, Fabrizio. CF-GNNExplainer: Counterfactual Explanations for Graph Neural Networks. Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS), PMLR 151 (2022), pp. 4499–4511. Virtual Conference.
CF-GNNExplainer: Counterfactual Explanations for Graph Neural Networks
Gabriele Tolomei: Writing – Original Draft Preparation
Fabrizio Silvestri: Writing – Original Draft Preparation
Year: 2022
| File | Size | Format |
|---|---|---|
| Lucic_CF-GNNExplainer_2022.pdf (open access) | 402.61 kB | Adobe PDF |

Note: https://proceedings.mlr.press/v151/lucic22a/lucic22a.pdf
Type: Publisher's version (published with the publisher's layout)
License: All rights reserved
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.


