Influence based explainability of brain tumors segmentation in magnetic resonance imaging / Torda, Tommaso; Ciardiello, Andrea; Gargiulo, Simona; Grillo, Greta; Scardapane, Simone; Voena, Cecilia; Giagu, Stefano. - In: PROGRESS IN ARTIFICIAL INTELLIGENCE. - ISSN 2192-6352. - (2025). [10.1007/s13748-025-00367-y]

Influence based explainability of brain tumors segmentation in magnetic resonance imaging

Torda, Tommaso; Ciardiello, Andrea; Gargiulo, Simona; Grillo, Greta; Scardapane, Simone; Voena, Cecilia; Giagu, Stefano
2025

Abstract

In recent years Artificial Intelligence has emerged as a fundamental tool in medical applications. Despite this rapid development, deep neural networks remain black boxes that are difficult to explain, and this represents a major limitation for their use in clinical practice. In this paper we focus on the task of segmenting medical images, where most explainability methods proposed so far provide a visual explanation in terms of an input saliency map. The aim of this work is to extend, implement and test an alternative influence-based explainability algorithm (TracIn), originally proposed for classification tasks, on the challenging clinical problem of multiclass segmentation of brain tumors in multimodal magnetic resonance imaging. We verify the faithfulness of the proposed algorithm by linking the similarities of the network's latent representations to the TracIn output. We further test the capacity of the algorithm to provide local and global explanations, and we suggest that it can be adopted as a tool to select the most relevant features used in the decision process. The method generalizes to all semantic segmentation tasks where classes are mutually exclusive, which is the standard setting for such tasks.
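
As a rough illustration of the idea behind the method (not the authors' exact implementation), TracIn (Pruthi et al., 2020) estimates the influence of a training example z on a test example z' as a sum, over saved training checkpoints, of learning-rate-weighted dot products between the loss gradients of the two examples. Below is a minimal PyTorch-style sketch, under the assumption that checkpoints were saved as state dicts and that the segmentation loss is a per-image scalar (e.g., voxel-wise multiclass cross-entropy); all names are illustrative.

import torch

def loss_gradient(model, loss_fn, x, y):
    # Flattened gradient of the per-example segmentation loss w.r.t. the weights.
    loss = loss_fn(model(x), y)  # assumed scalar, e.g. voxel-wise cross-entropy
    grads = torch.autograd.grad(loss, [p for p in model.parameters() if p.requires_grad])
    return torch.cat([g.reshape(-1) for g in grads])

def tracin_score(model, checkpoint_paths, learning_rates, loss_fn, z_train, z_test):
    # TracInCP: sum_i eta_i * <grad L(w_i, z_train), grad L(w_i, z_test)>,
    # assuming each checkpoint was saved via torch.save(model.state_dict(), path).
    score = 0.0
    for path, eta in zip(checkpoint_paths, learning_rates):
        model.load_state_dict(torch.load(path))
        g_tr = loss_gradient(model, loss_fn, *z_train)
        g_te = loss_gradient(model, loss_fn, *z_test)
        score += eta * torch.dot(g_tr, g_te).item()
    return score

A positive score marks the training image as a proponent of the test prediction, a negative score as an opponent; restricting the loss to a single class channel would yield class-wise influences, though the exact aggregation used in the paper may differ.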
artificial intelligence; explainability; deep learning; healthcare; brain tumors; segmentation
01 Journal publication::01a Journal article
Files attached to this item

File: Torda_Influence_2025.pdf
Access: open access
Type: Post-print (version following peer review and accepted for publication)
License: Creative Commons
Size: 2.68 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11573/1735858