Explainable AI in drug discovery: self-interpretable graph neural network for molecular property prediction using concept whitening

Proietti, Michela; Ragno, Alessio; Rosa, Biagio La; Ragno, Rino; Capobianco, Roberto
2023

Abstract

Molecular property prediction is a fundamental task in drug discovery. Several works use graph neural networks to leverage molecular graph representations. Although these models have been applied successfully in a variety of applications, their decision process is not transparent. In this work, we adapt concept whitening to graph neural networks. This explainability method builds an inherently interpretable model, making it possible to identify the concepts, and consequently the structural parts of the molecules, that are relevant to the output predictions. We test popular models on several benchmark datasets from MoleculeNet. Building on previous work, we identify the most significant molecular properties to use as concepts for classification. We show that adding concept whitening layers improves both classification performance and interpretability. Finally, we provide several structural and conceptual explanations for the predictions.
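The abstract describes inserting concept whitening layers into a graph neural network so that latent axes align with human-interpretable concepts (here, molecular properties). Below is a minimal, hedged sketch of the core idea in plain PyTorch: decorrelate (whiten) the latent activations, then rotate them so that individual axes can be read as concept scores. The class name, hyperparameters, and overall design are illustrative assumptions, not the authors' implementation, and the sketch omits the concept-alignment optimisation that the actual method performs.

```python
# Illustrative sketch only: a simplified concept-whitening-style layer.
import torch
import torch.nn as nn


class ConceptWhitening(nn.Module):
    def __init__(self, dim: int, eps: float = 1e-5, momentum: float = 0.1):
        super().__init__()
        self.eps = eps
        self.momentum = momentum
        # Rotation applied after whitening; its leading axes are intended to
        # align with annotated concepts (e.g. molecular properties).
        self.rotation = nn.Parameter(torch.eye(dim))
        self.register_buffer("running_mean", torch.zeros(dim))
        self.register_buffer("running_whitener", torch.eye(dim))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (batch, dim) graph-level embeddings, e.g. pooled GNN node features.
        if self.training:
            mean = z.mean(dim=0)
            centred = z - mean
            cov = centred.t() @ centred / max(z.size(0) - 1, 1)
            # ZCA whitening matrix cov^{-1/2} via eigendecomposition.
            eye = torch.eye(z.size(1), device=z.device, dtype=z.dtype)
            eigval, eigvec = torch.linalg.eigh(cov + self.eps * eye)
            whitener = eigvec @ torch.diag(eigval.clamp_min(self.eps).rsqrt()) @ eigvec.t()
            with torch.no_grad():
                self.running_mean.lerp_(mean, self.momentum)
                self.running_whitener.lerp_(whitener, self.momentum)
        else:
            mean, whitener = self.running_mean, self.running_whitener
        # Decorrelate the activations, then rotate so individual axes can be
        # read off as concept scores.
        return (z - mean) @ whitener @ self.rotation


# Hypothetical usage: place the layer between a GNN encoder's pooled output
# and the classification head.
embeddings = torch.randn(32, 64)   # stand-in for pooled graph embeddings
cw = ConceptWhitening(dim=64)
whitened = cw(embeddings)          # (32, 64); axes interpretable as concepts
```

In the full method, the rotation matrix is kept orthogonal and periodically updated so that each of its leading columns points toward the activations of samples exhibiting one annotated concept; the sketch leaves it as a free parameter purely for brevity.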
graph neural networks; explainable artificial intelligence; drug discovery
01 Journal publication::01a Journal article
Explainable AI in drug discovery: self-interpretable graph neural network for molecular property prediction using concept whitening / Proietti, Michela; Ragno, Alessio; Rosa, Biagio La; Ragno, Rino; Capobianco, Roberto. - In: MACHINE LEARNING. - ISSN 0885-6125. - 113:4(2023), pp. 2013-2044. [10.1007/s10994-023-06369-y]
Files attached to this record
File: Proietti_Explainable_2024.pdf
Access: open access
Note: DOI 10.1007/s10994-023-06369-y
Type: Publisher's version (published with the publisher's layout)
License: Creative Commons
Size: 2.03 MB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1691190