
An explainable convolutional neural network for the detection of drug abuse / Tufo, Giulia; Zribi, Meriam; Pagliuca, Paolo; Pitolli, Francesca. - (2024). (Paper presented at the First Workshop on Explainable Artificial Intelligence for the medical domain - EXPLIMED, held in Santiago de Compostela, Spain).

An explainable convolutional neural network for the detection of drug abuse

Giulia Tufo; Meriam Zribi; Paolo Pagliuca; Francesca Pitolli
2024

Abstract

The spread of Artificial Intelligence methods in many contexts is undeniable. Different models have been proposed and applied to real-world problems in sectors such as economics, industry, medicine, healthcare and sports. Nevertheless, the reasons why such techniques work are rarely investigated in depth, raising questions about explainability, transparency and trust. In this work, we introduce a novel Deep Learning approach to the problem of drug abuse detection. Specifically, we design a Convolutional Neural Network model that analyzes lateral-flow tests and discriminates between normal and abnormal assays. Moreover, we provide evidence regarding the attributes that enable our model to address the considered task, aiming to identify which parts of the input exert a significant influence on the network’s output. This understanding is crucial for applying our methodology in real-world scenarios. The results obtained demonstrate the validity of our approach. In particular, the proposed model achieves excellent accuracy in the classification of lateral-flow tests and outperforms two state-of-the-art deep networks. Additionally, we provide supporting data for the model’s explainability, ensuring a precise understanding of the relationship between attributes and output, a key factor in comprehending the internal workings of the neural network.
First Workshop on Explainable Artificial Intelligence for the medical domain - EXPLIMED
Drug abuse detection, Lateral-flow tests, Explainability, Convolutional Neural Networks
04 Conference proceedings publication::04b Conference paper in a volume
Files attached to this record
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this record: https://hdl.handle.net/11573/1724742
Warning! The displayed data have not been validated by the university.

Citations
  • PMC: n/a
  • Scopus: n/a
  • Web of Science: n/a