
Convergent Approaches to AI Explainability for HEP Muonic Particles Pattern Recognition

Maglianella L.; Nicoletti L.; Giagu S.; Napoli C.; Scardapane S.
2023

Abstract

Neural networks are commonly described as ‘black-box’ models, meaning that the mechanism by which they produce predictions and make decisions is not immediately clear or understandable by humans. Explainable Artificial Intelligence (xAI) aims to overcome this limitation by providing explanations for the outcomes of Machine Learning (ML) algorithms and, consequently, making those outcomes trustworthy for users. However, different xAI methods may provide different explanations, both quantitatively and qualitatively, and this heterogeneity of approaches makes it difficult for a domain expert to select and interpret their results. In this work, we consider this issue in the context of a high-energy physics (HEP) use case concerning muonic motion. In particular, we explored an array of xAI methods based on different approaches and tested their capabilities in our use case. As a result, we obtained an array of potentially easy-to-understand, human-readable explanations of the models’ predictions, and for each of them we describe strengths and drawbacks in this particular scenario, providing an atlas of the convergent application of multiple xAI algorithms in a realistic context.
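
Note (illustrative only, not code from the paper): among the techniques listed in the keywords below are saliency-map methods. As a minimal sketch of the general idea, assuming a hypothetical PyTorch model and input, a vanilla gradient saliency explanation can be computed as follows; gradient magnitudes rank input features by their local influence on the predicted score.

import torch
import torch.nn as nn

# Hypothetical stand-in for a trained classifier; any differentiable
# PyTorch model would work the same way. Shapes are illustrative.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
model.eval()

# Hypothetical input features, tracked so we can read its gradient.
x = torch.randn(1, 16, requires_grad=True)

# Vanilla gradient saliency: backpropagate the predicted-class score
# and use the input-gradient magnitudes as per-feature attributions.
scores = model(x)
scores[0, scores.argmax()].backward()
saliency = x.grad.abs().squeeze()
print(saliency)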
explainable artificial intelligence; high-energy physics; intrinsically interpretable decision trees; saliency maps methods; tracing gradient descent
01 Journal publication::01a Journal article
Convergent Approaches to AI Explainability for HEP Muonic Particles Pattern Recognition / Maglianella, L.; Nicoletti, L.; Giagu, S.; Napoli, C.; Scardapane, S. - In: COMPUTING AND SOFTWARE FOR BIG SCIENCE. - ISSN 2510-2044. - 7:1 (2023), pp. 1-18. [10.1007/s41781-023-00102-z]
Files attached to this record
File: Maglianella_Convergent_2023.pdf
Access: open access
Type: Publisher's version (published with the publisher's layout)
License: Creative Commons
Size: 5.32 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1686452
Citations
  • Scopus: 0