Entropy-Based Logic Explanations of Neural Networks / Barbiero, P.; Ciravegna, G.; Giannini, F.; Lio, P.; Gori, M.; Melacci, S. - 36:(2022), pp. 6046-6054. (Paper presented at the 36th AAAI Conference on Artificial Intelligence, AAAI 2022, held virtually/online) [10.1609/aaai.v36i6.20551].

Entropy-Based Logic Explanations of Neural Networks

Lio P.;
2022

Abstract

Explainable artificial intelligence has rapidly emerged since lawmakers have started requiring interpretable models for safety-critical domains. Concept-based neural networks have arisen as explainable-by-design methods as they leverage human-understandable symbols (i.e., concepts) to predict class memberships. However, most of these approaches focus on identifying the most relevant concepts but do not provide concise, formal explanations of how such concepts are leveraged by the classifier to make predictions. In this paper, we propose a novel end-to-end differentiable approach that enables the extraction of logic explanations from neural networks using the formalism of First-Order Logic. The method relies on an entropy-based criterion that automatically identifies the most relevant concepts. We consider four case studies to demonstrate that: (i) this entropy-based criterion enables the distillation of concise logic explanations in safety-critical domains, from clinical data to computer vision; (ii) the proposed approach outperforms state-of-the-art white-box models in classification accuracy and matches the performance of black-box models.
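The abstract describes an entropy-based criterion that scores concept relevance so the classifier learns to rely on only a few concepts. A minimal sketch of that idea in PyTorch follows; the layer and helper names (EntropyConceptLayer, entropy_loss), the temperature, and the regularization weight are illustrative assumptions, not the authors' actual implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class EntropyConceptLayer(nn.Module):
    """Illustrative entropy-based concept-selection layer (not the paper's exact code).

    A learnable score per concept is turned into a relevance distribution gamma
    via a temperature-scaled softmax; penalizing the entropy of gamma during
    training pushes the model to concentrate on a small subset of concepts,
    from which concise logic explanations can later be distilled.
    """
    def __init__(self, n_concepts: int, n_classes: int, temperature: float = 0.7):
        super().__init__()
        self.scores = nn.Parameter(torch.randn(n_concepts))  # learnable concept relevance
        self.temperature = temperature
        self.classifier = nn.Linear(n_concepts, n_classes)

    def forward(self, c: torch.Tensor) -> torch.Tensor:
        # gamma sums to 1 over concepts; low-relevance concepts are suppressed
        gamma = F.softmax(self.scores / self.temperature, dim=0)
        return self.classifier(c * gamma)

    def entropy_loss(self) -> torch.Tensor:
        # Shannon entropy of the relevance distribution; minimizing it
        # concentrates probability mass on few concepts
        gamma = F.softmax(self.scores / self.temperature, dim=0)
        return -(gamma * torch.log(gamma + 1e-12)).sum()

# Usage sketch: joint objective = task loss + lambda * entropy regularizer.
model = EntropyConceptLayer(n_concepts=10, n_classes=2)
c = torch.rand(32, 10)             # batch of concept activations in [0, 1]
y = torch.randint(0, 2, (32,))     # class labels
loss = F.cross_entropy(model(c), y) + 1e-4 * model.entropy_loss()
loss.backward()

After training, the concepts with the largest relevance scores can be read off and used as the atoms of a First-Order Logic explanation of each class.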
2022
36th AAAI Conference on Artificial Intelligence, AAAI 2022
Artificial intelligence; Computer circuits; Distillation; Formal logic; Safety engineering
04 Publication in conference proceedings::04b Conference paper in volume
Files attached to this item

File: Barbiero_Entropy-Based_2022.pdf
Access: open access
Note: https://ojs.aaai.org/index.php/AAAI/article/download/20551/20310
Type: Publisher's version (published version with the publisher's layout)
License: Creative Commons
Size: 515.72 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11573/1721261
Citations
  • PMC: not available
  • Scopus: 40
  • Web of Science (ISI): 19