Ensemble approaches for Graph Counterfactual Explanations / Prado-Romero, M. A.; Prenkaj, B.; Stilo, G.; Celi, A.; Estevanell-Valladares, E.; Valdes-Perez, D. A. - 3277:(2022), pp. 88-97. (Paper presented at the 3rd Italian Workshop on Explainable Artificial Intelligence, XAI.it 2022, held in Udine, Italy).

Ensemble approaches for Graph Counterfactual Explanations

Prenkaj, B. (second author; Formal Analysis); Stilo, G. (Formal Analysis)
2022

Abstract

In recent years, Graph Neural Networks have achieved outstanding performance in tasks like community detection, molecule classification, and link prediction. However, the black-box nature of these models prevents their application in domains like health and finance, where understanding the model’s decisions is essential. Explainable AI, or Explainable Machine Learning, refers to artificial intelligence whose decisions or predictions can be understood by humans. A special case is counterfactual examples, which suggest the changes to the input that would lead the system to alter its decision. Historically, ensemble learning and explainability have been jointly exploited to explain the decisions of ensemble models. In contrast, in this work we focus on ensemble mechanisms over the explainers themselves to improve the quality of explanations. We thus explore which ensemble mechanisms can be adopted in several explainability scenarios. Furthermore, we introduce and discuss a new explainability problem where a single coherent counterfactual explanation must be provided for a set of input instances and their explanations.
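
As a rough illustration of what an ensemble mechanism over counterfactual explainers could look like (a hypothetical sketch, not the method from the paper): several base explainers each propose a candidate counterfactual for the input graph, and the ensemble keeps the valid candidate, i.e., one that actually flips the black-box prediction, that is closest to the input. The oracle.predict and explain interfaces and the edge-set edit distance below are assumptions made for the sake of the example.

    # Hypothetical sketch of a selection-based ensemble of counterfactual
    # explainers; all interfaces are assumed, not taken from the paper.
    def edit_distance(g1, g2):
        # Toy graph distance: size of the symmetric difference of edge sets.
        return len(set(g1.edges) ^ set(g2.edges))

    def ensemble_counterfactual(instance, explainers, oracle):
        # Class the black-box GNN assigns to the original instance.
        original_class = oracle.predict(instance)
        # Each base explainer proposes one candidate counterfactual.
        candidates = [e.explain(instance) for e in explainers]
        # Keep only valid counterfactuals: those that flip the prediction.
        valid = [c for c in candidates if oracle.predict(c) != original_class]
        if not valid:
            return instance  # no explainer managed to change the decision
        # Among valid candidates, return the one closest to the input.
        return min(valid, key=lambda c: edit_distance(instance, c))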
2022
3rd Italian Workshop on Explainable Artificial Intelligence, XAI.it 2022
Counterfactual Explanations; Ensemble; Explainable AI; Machine Learning
04 Publication in conference proceedings::04b Conference paper in volume
Files attached to this product
There are no files associated with this product.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1723517
Note: the data displayed have not been validated by the university.

Citations
  • PMC: ND
  • Scopus: 1
  • Web of Science: ND