
Toward explainable biomedical deep learning / Mastropietro, Andrea. - (2024 Jan 31).

Toward explainable biomedical deep learning

MASTROPIETRO, ANDREA
31/01/2024

Abstract

Deep learning has been extensively applied in bioinformatics and chemoinformatics, yielding compelling results. However, neural networks are still largely regarded as black boxes: the highly nonlinear functions they learn make their internal mechanisms difficult to interpret. In the biomedical field, this lack of interpretability is undesirable, since scientists need to understand why a specific disease occurs or which molecular properties make a compound effective against a particular target protein. The inherent opacity of these models therefore keeps their results from being trusted. To address this issue and make deep learning suitable for bioinformatics and chemoinformatics tasks, there is an urgent need for explainable artificial intelligence (XAI) techniques capable of measuring the importance of input features to a prediction or quantifying the strength of their interactions. Such explanations must be integrated into the biomedical deep learning pipeline, which leverages available data sources to uncover new insights into potentially disease-associated genes, thereby facilitating the repurposing and development of drugs.

With this objective, this thesis focuses on the development of novel explainability techniques for neural networks and demonstrates their effective application in bioinformatics and medicinal chemistry. The devised models find their place in the pipeline, in which each component of the protocol produces effective and explainable results, spanning from the discovery of disease genes to the repurposing and development of drugs. Deep learning, however, operates in synergy with classical machine learning models and network-based algorithms, which remain relevant in this field and therefore also hold a place within this thesis. Moreover, they provide the basis for proper training of deep learning models and pave the way for the development of XAI techniques for neural networks. The proposed work demonstrates how XAI can benefit biomedicine, showing that deep learning is a powerful tool for solving biomedical problems and that its results can be explained. This contributes to delivering results that are not only accurate but also trustworthy, fulfilling the explainability needs of medical doctors, geneticists, and life scientists, and leading toward a fully explainable biomedical deep learning pipeline.
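As an illustration of what the feature-attribution techniques mentioned in the abstract compute, the following is a minimal sketch of input × gradient, a standard XAI baseline, applied to a toy logistic-regression "network". The model, weights, and input values here are hypothetical and chosen only for illustration; they do not reproduce any method or data from the thesis itself.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """Toy one-layer model: f(x) = sigmoid(w . x + b)."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def input_x_gradient(w, b, x):
    """Attribution of each input feature: x_i * df/dx_i.

    For f(x) = sigmoid(w . x + b), the gradient is
    df/dx_i = f * (1 - f) * w_i, so the attribution of
    feature i is x_i * f * (1 - f) * w_i.
    """
    f = predict(w, b, x)
    return [xi * f * (1.0 - f) * wi for wi, xi in zip(w, x)]

# Hypothetical learned weights and a hypothetical input
# (e.g., expression values of three genes).
w = [2.0, -1.0, 0.0]
b = 0.5
x = [1.0, 1.0, 3.0]

scores = input_x_gradient(w, b, x)
# A feature the model ignores (zero weight) gets zero attribution,
# regardless of how large its input value is.
```

Real biomedical models are far deeper and nonlinear, so attributions are computed with automatic differentiation or perturbation-based methods rather than a closed-form gradient, but the quantity being reported per feature is the same kind of importance score.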
Files attached to this product

Tesi_dottorato_Mastropietro.pdf (open access)
Type: Doctoral thesis (Tesi di dottorato)
License: All rights reserved (Tutti i diritti riservati)
Size: 8.98 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1710015