Topology-based explanations for neural networks / Ragno, Alessio. - (2025 Jan 24).

Topology-based explanations for neural networks

RAGNO, ALESSIO
24/01/2025

Abstract

Neural networks have driven significant advances across a wide range of tasks and domains. However, they are often regarded as black boxes because of their complexity, which hinders understanding of the reasoning behind their predictions and poses serious challenges for their application. This thesis addresses these challenges by contributing to the field of eXplainable AI (XAI), with a specific focus on improving the interpretability of deep neural networks. The proposed solution takes a topology-based approach, either by learning interpretable representations with self-explainable neural networks or by analyzing the organization of the models' inner layers. The core contributions are novel techniques that enhance the interpretability of these networks through an analysis of their internal workings. Specifically, the contributions are threefold. First, we introduce neural network architectures that integrate prototypes to improve interpretability in several domains, such as graphs and reinforcement learning. Second, we investigate the use of logic for interpreting neural network behavior. Finally, we study neural network representations in different scientific fields, such as chemistry, biology, and linguistics. In summary, this thesis advances eXplainable AI by proposing topology-based techniques that enhance the interpretability of deep neural networks. By applying these techniques across varied applications, it aims to foster collaboration between AI research and other scientific fields.
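
To make the prototype-based contribution more concrete, the sketch below illustrates, in PyTorch, the general idea behind self-explainable prototype layers: an input embedding is compared against a set of learned prototype vectors, and the class prediction is a weighted sum of those similarities, so each decision can be traced back to the prototypes it was closest to. This is a minimal, generic illustration in the spirit of ProtoPNet-style models; the class names, similarity function, and hyperparameters are assumptions made for exposition and do not reproduce the architectures developed in the thesis.

    # Illustrative sketch only: a generic prototype-based layer for
    # self-explainable classification. Names and hyperparameters are
    # hypothetical, not the thesis's actual models.
    import torch
    import torch.nn as nn

    class PrototypeLayer(nn.Module):
        """Scores an embedding by its similarity to learned prototypes."""

        def __init__(self, embedding_dim: int, num_prototypes: int, num_classes: int):
            super().__init__()
            # Learnable prototype vectors in the same latent space as the encoder output.
            self.prototypes = nn.Parameter(torch.randn(num_prototypes, embedding_dim))
            # Linear readout from prototype similarities to class logits.
            self.classifier = nn.Linear(num_prototypes, num_classes, bias=False)

        def forward(self, z: torch.Tensor):
            # Squared Euclidean distance between each embedding and each prototype.
            dists = torch.cdist(z, self.prototypes).pow(2)        # (batch, num_prototypes)
            # Map distances to similarities so that closer prototypes score higher.
            sims = torch.log((dists + 1.0) / (dists + 1e-4))      # (batch, num_prototypes)
            logits = self.classifier(sims)
            # Returning the similarities keeps the prediction inspectable:
            # each logit is a weighted sum of "how close is the input to prototype k".
            return logits, sims

    # Example usage with a hypothetical 64-dimensional encoder output.
    encoder_output = torch.randn(8, 64)
    layer = PrototypeLayer(embedding_dim=64, num_prototypes=10, num_classes=3)
    logits, similarities = layer(encoder_output)

In such a model, explaining a prediction amounts to inspecting which prototypes received the highest similarity scores and, once trained, visualizing the training examples each of those prototypes lies closest to.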
Files attached to this item

Tesi_dottorato_Ragno.pdf

Access: open access
Note: complete thesis
Type: Doctoral thesis
License: Creative Commons
Size: 18.23 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11573/1733678