
Detection accuracy for evaluating compositional explanations of units / Makinwa, S. M.; La Rosa, B.; Capobianco, R. - 13196 LNAI (2022), pp. 550-563. (Paper presented at the 20th International Conference of the Italian Association for Artificial Intelligence, AIxIA 2021, held as a virtual event) [10.1007/978-3-031-08421-8_38].

Detection accuracy for evaluating compositional explanations of units

La Rosa B. (second author; Supervision); Capobianco R. (last author; Supervision)
2022

Abstract

The recent success of deep learning models on complex problems across different domains has increased interest in understanding what these models learn. Several approaches have therefore been employed to explain them, one of which uses human-understandable concepts as explanations. Two methods that follow this approach are Network Dissection and Compositional Explanations. The former explains units using atomic concepts, while the latter makes explanations more expressive by replacing atomic concepts with logical forms. While logical forms are intuitively more informative than atomic concepts, it is not clear how to quantify this improvement: their evaluation is often based on the same metric that is optimized during the search process and relies on hyper-parameters that must be tuned. In this paper, we propose Detection Accuracy as an evaluation metric, which measures how consistently units detect their assigned explanations. We show that this metric (1) evaluates explanations of different lengths effectively, (2) can be used as a stopping criterion for the compositional explanation search, eliminating the explanation-length hyper-parameter, and (3) exposes new specialized units whose length-1 explanations are the perceptual abstractions of their longer explanations.
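The abstract contrasts the score optimized during explanation search (the IoU between a unit's thresholded activation mask and a concept mask, as in Network Dissection) with a measure of how consistently the unit detects its explanation. As a rough sketch only — the paper's exact Detection Accuracy formula is not reproduced here, so `detection_consistency` below is a hypothetical per-image proxy, and the flat 0/1 mask representation is an assumption — the two kinds of score can be illustrated as:

```python
def iou(act_mask, concept_mask):
    """IoU between a unit's binarized activation mask and a concept mask
    (the dataset-wide overlap score that the explanation search optimizes).
    Masks are flat lists of 0/1 pixel indicators of equal length."""
    inter = sum(1 for a, c in zip(act_mask, concept_mask) if a and c)
    union = sum(1 for a, c in zip(act_mask, concept_mask) if a or c)
    return inter / union if union else 0.0


def detection_consistency(act_masks, concept_masks):
    """Hypothetical per-image consistency proxy: the fraction of images on
    which the unit fires exactly when the explanation's concept is present.
    Illustrative only; not the paper's Detection Accuracy definition."""
    hits = sum(int(any(a)) == int(any(c))
               for a, c in zip(act_masks, concept_masks))
    return hits / len(act_masks)
```

The contrast this is meant to show: a unit can reach a reasonable aggregate IoU while still firing on images where the concept is absent, whereas a consistency-style score penalizes every image on which firing and concept presence disagree.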
20th International Conference of the Italian Association for Artificial Intelligence, AIxIA 2021
deep learning; deep neural networks; neuron explanations; explainable deep learning; explainable artificial intelligence
04 Publication in conference proceedings::04b Conference paper in volume


Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1662243

Citations
  • Scopus: 1
  • ISI (Web of Science): 1