
A Discussion about Explainable Inference on Sequential Data via Memory-Tracking / La Rosa, B.; Capobianco, R.; Nardi, D. - Vol. 3078 (2022), pp. 33-44. (2021 International Conference of the Italian Association for Artificial Intelligence, AIxIA 2021 DP, Virtual).

A Discussion about Explainable Inference on Sequential Data via Memory-Tracking

La Rosa B.; Capobianco R.; Nardi D.
2022

Abstract

The recent explosion of deep learning techniques has boosted the application of Artificial Intelligence in a variety of domains, thanks to their high performance. However, this performance comes at the cost of interpretability: deep models contain hundreds of nested non-linear operations, making it impossible to track the chain of steps that leads to a given answer. In our recently published paper [1], we propose a method to improve the interpretability of a class of deep models, namely Memory Augmented Neural Networks (MANNs), when dealing with sequential data. By exploiting the capability of MANNs to store and access data in an external memory, tracking this process, and connecting the resulting information to the input sequence, our method extracts the most relevant sub-sequences that explain the answer. We evaluate our approach both on a modified T-maze [2, 3] and on the Story Cloze Test [4], obtaining promising results.
2021 International Conference of the Italian Association for Artificial Intelligence, AIxIA 2021 DP
Deep learning; Explainable artificial intelligence; Sequential data
04 Conference proceedings publication::04b Conference paper in volume
Files attached to this record

File: LaRosa_A-Discussion_2021.pdf
Access: open access
Note: https://ceur-ws.org/Vol-3078/paper-24.pdf
Type: Publisher's version (published with the publisher's layout)
License: Creative Commons
Size: 1.19 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this record: https://hdl.handle.net/11573/1624612
Citations
  • PMC: not available
  • Scopus: 0
  • Web of Science: not available