XAI-Guided Continual Learning: Rationale, Methods, and Future Directions / Proietti, M.; Ragno, A.; Capobianco, R. - In: WILEY INTERDISCIPLINARY REVIEWS. DATA MINING AND KNOWLEDGE DISCOVERY. - ISSN 1942-4787. - 15:4 (2025). [10.1002/widm.70046]
XAI-Guided Continual Learning: Rationale, Methods, and Future Directions
Proietti, M.; Ragno, A.; Capobianco, R.
2025
Abstract
Providing neural networks with the ability to learn new tasks sequentially is one of the main challenges in artificial intelligence. Unlike humans, neural networks are prone to losing previously acquired knowledge when learning new information, a phenomenon known as catastrophic forgetting. Continual learning proposes diverse solutions to mitigate this problem, but only a few leverage explainable artificial intelligence (XAI). This work motivates the use of explainability techniques in continual learning, emphasizing the need for greater transparency and trustworthiness in these systems, and grounds the approach in empirical findings from neuroscience that highlight parallels between forgetting in biological and artificial neural networks. Finally, we review existing work applying explainability methods to address catastrophic forgetting and propose potential avenues for future research. This article is categorized under: Fundamental Concepts of Data and Knowledge > Explainable AI; Technologies > Artificial Intelligence; Technologies > Cognitive Computing.
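
To make the phenomenon concrete, the following minimal Python/PyTorch sketch (illustrative only, not taken from the paper; the synthetic tasks, architecture, and hyperparameters are our own assumptions) trains a small network sequentially on two conflicting synthetic tasks and shows accuracy on the first task collapsing after training on the second:

import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(shift):
    # Two Gaussian blobs per task; `shift` places the blobs so that the
    # two tasks demand conflicting decision boundaries.
    x0 = torch.randn(500, 20) + shift
    x1 = torch.randn(500, 20) - shift
    x = torch.cat([x0, x1])
    y = torch.cat([torch.zeros(500, dtype=torch.long),
                   torch.ones(500, dtype=torch.long)])
    return x, y

def train(model, x, y, epochs=200):
    # Plain full-batch training; deliberately no replay or regularization.
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

def accuracy(model, x, y):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))

xa, ya = make_task(shift=2.0)   # task A
xb, yb = make_task(shift=-2.0)  # task B: class positions flipped vs. task A

train(model, xa, ya)
print(f"task A accuracy after training on A: {accuracy(model, xa, ya):.2f}")

train(model, xb, yb)            # sequential training on the new task
print(f"task A accuracy after training on B: {accuracy(model, xa, ya):.2f}")

With no replay or regularization, the gradient updates for the second task overwrite the weights that encoded the first; this is exactly the failure mode that the continual learning methods surveyed in the article aim to prevent.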


