Explaining the Inexplicable: An Explainable AI Approach to Boost Knowledge Transfer

Conforti, Pietro; Papa, Lorenzo; Amerini, Irene; Russo, Paolo
2026

Abstract

Deep learning has revolutionized performance across diverse applications but is often criticized for its opaque nature and computational complexity, which complicate efforts to understand and trust model predictions. Explainability techniques have emerged to address these issues, enhancing model transparency and user trust. Simultaneously, efficiency techniques such as Knowledge Distillation streamline complex (teacher) models into more efficient (student) ones while preserving their predictive capabilities. Building on these research efforts, this paper presents a general framework that improves well-known distillation techniques through explainability methods, boosting both the interpretability and the accuracy of shallow student architectures. Specifically, by incorporating a novel loss function that aligns the explainability maps of teacher and student models, our method refines the distillation process, leading to more accurate and interpretable predictions. Our approach not only improves the performance of distilled models but also demonstrates the effective integration of explainable insights into distillation frameworks, as validated by multiple experiments on a publicly available benchmark dataset.
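The record does not give the paper's exact formulation of the combined objective. A minimal sketch of such a loss, under stated assumptions (KL-based logit distillation at temperature T, MSE alignment of already-computed explainability maps, and illustrative weights `alpha` and `beta` that are not taken from the paper), might look like:

```python
import numpy as np

def softmax(z, t=1.0):
    # Temperature-scaled, numerically stable softmax over the last axis
    z = z / t
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(logits, labels):
    # Standard supervised loss of the student against ground-truth labels
    p = softmax(logits)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

def kd_loss(student_logits, teacher_logits, t=4.0):
    # Classic Hinton-style distillation: KL divergence between the
    # softened teacher and student distributions, scaled by t^2
    p_t = softmax(teacher_logits, t)
    p_s = softmax(student_logits, t)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    return np.mean(kl) * t * t

def map_alignment_loss(student_map, teacher_map):
    # Assumption: explainability maps (e.g. saliency / CAM-style) of equal
    # spatial size; MSE after min-max normalization of each map batch
    def norm(m):
        m = m - m.min()
        return m / (m.max() + 1e-12)
    return np.mean((norm(student_map) - norm(teacher_map)) ** 2)

def total_loss(student_logits, teacher_logits, labels,
               student_map, teacher_map, alpha=0.5, beta=0.1):
    # Illustrative combined objective: supervised + distillation +
    # explainability-map alignment terms
    return (cross_entropy(student_logits, labels)
            + alpha * kd_loss(student_logits, teacher_logits)
            + beta * map_alignment_loss(student_map, teacher_map))
```

In this sketch the alignment term goes to zero when the student's explainability maps match the teacher's, which is the qualitative behavior the abstract describes; the actual map-extraction method and weighting scheme are those of the paper, not shown here.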
International Conference on Computer Vision Theory and Applications (VISAPP 2026)
Computer Vision, Deep Learning, Explainable AI, Knowledge Distillation, Artificial Intelligence.
04 Conference proceedings publication::04b Conference paper in a volume
Explaining the Inexplicable: An Explainable AI Approach to Boost Knowledge Transfer / Conforti, Pietro; Papa, Lorenzo; Amerini, Irene; Russo, Paolo. - (2026), pp. 660-670. (International Conference on Computer Vision Theory and Applications (VISAPP 2026), Marbella, Spain) [10.5220/0014588800004084].
Files attached to this item
There are no files associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11573/1765786
Warning: the data displayed have not been validated by the university.
