Effective Explanations for Entity Resolution Models

Teofili, Tommaso; Firmani, Donatella; Koudas, Nick; Martello, Vincenzo; Merialdo, Paolo; Srivastava, Divesh
2022

Abstract

Entity resolution (ER) aims at matching records that refer to the same real-world entity. Although widely studied for the last 50 years, ER still represents a challenging data management problem, and several recent works have started to investigate the opportunity of applying deep learning (DL) techniques to solve it. In this paper, we study the fundamental problem of explainability of DL solutions for ER. Understanding the matching predictions of an ER solution is indeed crucial to assess the trustworthiness of the DL model and to discover its biases. We treat the DL model as a black-box classifier and, while previous approaches to explaining DL predictions are agnostic to the classification task, we propose CERTA, an approach that is aware of the semantics of the ER problem. Our approach produces both saliency explanations, which associate each attribute with a saliency score, and counterfactual explanations, which provide examples of values that can flip the prediction. CERTA builds on a probabilistic framework that computes explanations by evaluating the outcomes produced on perturbed copies of the input records. We experimentally evaluate CERTA's explanations of state-of-the-art ER solutions based on DL models using publicly available datasets, and demonstrate the effectiveness of CERTA over recently proposed methods for this problem.
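The abstract describes a perturbation-based scheme: explanations are derived from how the black-box matcher behaves on perturbed copies of the input records. Below is a minimal illustrative sketch of that general idea, not CERTA's actual probabilistic framework; the matcher match_proba, the substitute-value pool fillers, and the helper names are hypothetical stand-ins introduced here for illustration.

# Minimal sketch of perturbation-based saliency and counterfactual explanations
# for a black-box ER matcher. All names here are illustrative stand-ins; the
# actual CERTA algorithm is described in the paper.

def match_proba(left: dict, right: dict) -> float:
    """Stand-in black-box matcher: fraction of shared attribute values.
    In practice this would be a trained DL-based ER model."""
    keys = left.keys() & right.keys()
    if not keys:
        return 0.0
    return sum(left[k] == right[k] for k in keys) / len(keys)

def saliency(left: dict, right: dict, fillers: dict) -> dict:
    """Attribute saliency: average change in the matcher's output when an
    attribute value is replaced with substitute values from `fillers`."""
    base = match_proba(left, right)
    scores = {}
    for attr in left:
        deltas = []
        for value in fillers.get(attr, []):
            perturbed = {**left, attr: value}  # perturbed copy of the record
            deltas.append(abs(match_proba(perturbed, right) - base))
        scores[attr] = sum(deltas) / len(deltas) if deltas else 0.0
    return scores

def counterfactual(left: dict, right: dict, fillers: dict, threshold: float = 0.5):
    """Greedy search for a single-attribute substitution that flips the prediction."""
    base_label = match_proba(left, right) >= threshold
    for attr in left:
        for value in fillers.get(attr, []):
            perturbed = {**left, attr: value}
            if (match_proba(perturbed, right) >= threshold) != base_label:
                return attr, value  # example value that flips the prediction
    return None

if __name__ == "__main__":
    l = {"title": "iphone 12", "brand": "apple", "price": "799"}
    r = {"title": "iphone 12", "brand": "apple", "price": "829"}
    fillers = {"title": ["galaxy s21"], "brand": ["samsung"], "price": ["829"]}
    print(saliency(l, r, fillers))       # per-attribute saliency scores
    print(counterfactual(l, r, fillers)) # e.g. ('title', 'galaxy s21')

Under these assumptions, saliency scores reflect how sensitive the matcher is to each attribute, and the counterfactual search returns an example substitution that changes the match/non-match decision, mirroring the two explanation types produced by CERTA.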
Year: 2022
Conference: International Conference on Data Engineering
Keywords: entity resolution; explainable AI; data cleaning
Publication type: 04 Publication in conference proceedings::04b Conference paper in volume
Effective Explanations for Entity Resolution Models / Teofili, Tommaso; Firmani, Donatella; Koudas, Nick; Martello, Vincenzo; Merialdo, Paolo; Srivastava, Divesh. - (2022), pp. 2709-2721. (Paper presented at the International Conference on Data Engineering, held in Kuala Lumpur, Malaysia).
Files attached to this record

Teofili_Effective-explanations_2023.pdf
Access: restricted (archive managers only; contact the author)
Note: article
Type: Publisher's version (published version with the publisher's layout)
License: All rights reserved
Size: 546.82 kB
Format: Adobe PDF

Teofili_Effective-explanations_copertina-indice_quarta-_2023.pdf
Access: restricted (archive managers only; contact the author)
Type: Other attached material
License: All rights reserved
Size: 5.75 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1640606