
XAI-Guided Continual Learning: Rationale, Methods, and Future Directions / Proietti, M.; Ragno, A.; Capobianco, R.. - In: WILEY INTERDISCIPLINARY REVIEWS. DATA MINING AND KNOWLEDGE DISCOVERY. - ISSN 1942-4787. - 15:4(2025). [10.1002/widm.70046]

XAI-Guided Continual Learning: Rationale, Methods, and Future Directions

Proietti M. (First); Ragno A. (Second); Capobianco R. (Last)
2025

Abstract

Providing neural networks with the ability to learn new tasks sequentially represents one of the main challenges in artificial intelligence. Unlike humans, neural networks are prone to losing previously acquired knowledge upon learning new information, a phenomenon known as catastrophic forgetting. Continual learning proposes diverse solutions to mitigate this problem, but only a few leverage explainable artificial intelligence. This work justifies using explainability techniques in continual learning, emphasizing the need for greater transparency and trustworthiness in these systems and grounding our approach in empirical findings from neuroscience that highlight parallels between forgetting in biological and artificial neural networks. Finally, we review existing work applying explainability methods to address catastrophic forgetting and propose potential avenues for future research. This article is categorized under: Fundamental Concepts of Data and Knowledge > Explainable AI; Technologies > Artificial Intelligence; Technologies > Cognitive Computing.
continual learning; explainable artificial intelligence; explanation guided learning
01 Journal publication::01a Journal article
Files attached to this record
There are no files associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1752546
Warning: the data displayed here have not been validated by the university.

Citations
  • PubMed Central: N/A
  • Scopus: 0
  • Web of Science: N/A