Explainable AI in drug design: self-interpretable graph neural network for molecular property prediction using concept whitening / Proietti, Michela; Ragno, Alessio; Capobianco, Roberto. - (2022). (Presented at the 3rd Molecules Medicinal Chemistry Symposium: Shaping Medicinal Chemistry for the New Decade, held in Rome, Italy).

Explainable AI in drug design: self-interpretable graph neural network for molecular property prediction using concept whitening

Michela Proietti; Alessio Ragno; Roberto Capobianco
2022

Abstract

Molecular property prediction is a fundamental task in drug discovery. Many works employ graph neural networks, as they can operate directly on molecular graph representations. Although these models have been successfully applied in a variety of settings, their decision process lacks transparency. In this work, we adapt the concept whitening explainability method to graph neural networks. This approach yields an inherently interpretable model by aligning the axes of the latent space with known concepts of interest, thus providing a straightforward way of extracting them. We test the models on the BBBP dataset from MoleculeNet. Building on previous work, we identify the most significant molecular properties to use as concepts for the classification task. We show that adding concept whitening layers improves both classification performance and interpretability. Finally, we show how to obtain both structural and conceptual explanations for the predictions.
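The core idea in the abstract is a whitening transform followed by a rotation so that individual latent axes can be read as concept scores. Since no files are attached to this record, the PyTorch sketch below is only a rough illustration of that computation on a batch of node or molecule embeddings: the class name ConceptWhitening, the simple learnable rotation, and the toy usage are assumptions for illustration, not the authors' implementation (the original concept whitening method additionally keeps the rotation orthogonal and optimizes it against concept datasets).

import torch
import torch.nn as nn

class ConceptWhitening(nn.Module):
    """Illustrative whitening-plus-rotation layer: ZCA-whitens embeddings,
    then rotates them so single axes can be interpreted as concept scores."""
    def __init__(self, dim: int, eps: float = 1e-5):
        super().__init__()
        self.eps = eps
        # Placeholder for the orthogonal rotation that concept whitening
        # learns from concept examples; here just a learnable matrix.
        self.rotation = nn.Parameter(torch.eye(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Center the batch and estimate its covariance.
        xc = x - x.mean(dim=0, keepdim=True)
        cov = xc.T @ xc / max(x.size(0) - 1, 1)
        eye = torch.eye(x.size(1), device=x.device)
        # ZCA whitening: Sigma^{-1/2} via the eigendecomposition of cov.
        eigval, eigvec = torch.linalg.eigh(cov + self.eps * eye)
        whiten = eigvec @ torch.diag(eigval.clamp_min(self.eps).rsqrt()) @ eigvec.T
        # Rotate the decorrelated embeddings toward concept-aligned axes.
        return xc @ whiten @ self.rotation

# Toy usage (hypothetical dimensions): after the layer, the activation
# along axis k can be read as the score for concept k, e.g. a molecular
# property used as a concept for BBBP classification.
h = torch.randn(8, 4)
cw = ConceptWhitening(dim=4)
z = cw(h)
concept_scores = z[:, 0]

In the actual method, the rotation is constrained to stay orthogonal and is fit so that embeddings of molecules exhibiting a given concept activate the corresponding axis; the sketch omits that optimization for brevity.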

Use this identifier to cite or link to this item: https://hdl.handle.net/11573/1680634