
How Does Counterfactually Augmented Data Impact Models for Social Computing Constructs?

Samory, Mattia;
2021

Abstract

As NLP models are increasingly deployed in socially situated settings such as online abusive content detection, it is crucial to ensure that these models are robust. One way of improving model robustness is to generate counterfactually augmented data (CAD) for training models that can better learn to distinguish between core features and data artifacts. While models trained on this type of data have shown promising out-of-domain generalizability, it is still unclear what the sources of such improvements are. We investigate the benefits of CAD for social NLP models by focusing on three social computing constructs — sentiment, sexism, and hate speech. Assessing the performance of models trained with and without CAD across different types of datasets, we find that while models trained on CAD show lower in-domain performance, they generalize better out-of-domain. We unpack this apparent discrepancy using machine explanations and find that CAD reduces model reliance on spurious features. Leveraging a novel typology of CAD to analyze their relationship with model performance, we find that CAD which acts on the construct directly or a diverse set of CAD leads to higher performance.
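To make the experimental comparison described above concrete, the following is a minimal sketch of training the same classifier with and without counterfactually augmented examples and scoring it in-domain and out-of-domain. It is not the paper's actual pipeline: the toy sentences, the evaluation sets, and the choice of a TF-IDF plus logistic-regression model are illustrative assumptions only.

```python
# Sketch: compare a classifier trained on original data vs. original + CAD.
# All data below is placeholder; the paper uses real sentiment/sexism/hate
# speech datasets and stronger models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline


def train_and_evaluate(train_texts, train_labels, eval_sets):
    """Train a simple text classifier and report macro-F1 on each eval set."""
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),
        LogisticRegression(max_iter=1000),
    )
    model.fit(train_texts, train_labels)
    return {
        name: f1_score(y, model.predict(X), average="macro")
        for name, (X, y) in eval_sets.items()
    }


# Original examples plus a counterfactual edit: a minimal change to
# construct-relevant words that flips the label.
original = [("Women are terrible drivers", 1), ("Traffic was terrible today", 0)]
counterfactual = [("Women are skilled drivers", 0)]

eval_sets = {
    "in-domain": (["Women can't drive"], [1]),
    "out-of-domain": (["Girls shouldn't be engineers"], [1]),
}

texts, labels = zip(*original)
scores_plain = train_and_evaluate(list(texts), list(labels), eval_sets)

texts_cad, labels_cad = zip(*(original + counterfactual))
scores_cad = train_and_evaluate(list(texts_cad), list(labels_cad), eval_sets)

print("without CAD:", scores_plain)
print("with CAD:   ", scores_cad)
```

The intuition the sketch tries to capture is that the counterfactual pair ("terrible drivers" vs. "skilled drivers") discourages the model from latching onto spurious cues such as group mentions alone, which is the mechanism the paper probes with machine explanations.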
2021
Conference on Empirical Methods in Natural Language Processing
Social computing; Spurious features; counterfactual; nlp
04 Publication in conference proceedings::04b Conference paper in volume
How Does Counterfactually Augmented Data Impact Models for Social Computing Constructs? / Sen, Indira; Samory, Mattia; Flöck, Fabian; Wagner, Claudia; Augenstein, Isabelle. - (2021), pp. 325-344. (Paper presented at the Conference on Empirical Methods in Natural Language Processing, held online and in Punta Cana, Dominican Republic) [10.18653/v1/2021.emnlp-main.28].
Files attached to this item
No files are associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11573/1655746
Warning: the displayed data have not been validated by the university.

Citations
  • PMC: ND
  • Scopus: 14
  • Web of Science (ISI): 0