Handling Disagreement in Hate Speech Modelling

Cinelli, M.
2022

Abstract

Hate speech annotation for training machine learning models is an inherently ambiguous and subjective task. In this paper, we adopt a perspectivist approach to data annotation, model training and evaluation for hate speech classification. We first focus on the annotation process and argue that it drastically influences the final data quality. We then present three large hate speech datasets that incorporate annotator disagreement and use them to train and evaluate machine learning models. As our main contribution, we propose to evaluate machine learning models through the lens of disagreement, applying performance measures suited to assessing both annotators' agreement and model quality. We further argue that annotator agreement poses an intrinsic limit on the performance achievable by models. When comparing models and annotators, we observe that they achieve consistent levels of agreement across datasets. We reflect on our results and offer methodological and ethical considerations that can stimulate the ongoing discussion on hate speech modelling and classification with disagreement.
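To make the evaluation idea concrete, below is a minimal sketch of the "diamond standard" style of comparison described in the abstract: instead of scoring a model against a single gold label, one measures the model's agreement with each annotator and compares it to the annotators' agreement with one another, which bounds the performance worth chasing. The sketch uses Cohen's kappa from scikit-learn as one common chance-corrected agreement measure; the abstract does not specify the paper's exact measure or data, so all labels, names and numbers below are illustrative assumptions, not taken from the paper.

```python
# Sketch: compare model-annotator agreement with annotator-annotator
# agreement, using Cohen's kappa as a chance-corrected measure.
# All data here is synthetic; nothing below comes from the paper.
from itertools import combinations

import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)

# Hypothetical labels: 3 annotators and 1 model, each labelling the same
# 1000 posts as acceptable (0) or hate speech (1), each agreeing with a
# latent label ~85% of the time and guessing otherwise.
truth = rng.integers(0, 2, size=1000)
annotators = {
    f"annotator_{i}": np.where(rng.random(1000) < 0.85, truth,
                               rng.integers(0, 2, size=1000))
    for i in range(3)
}
model_preds = np.where(rng.random(1000) < 0.85, truth,
                       rng.integers(0, 2, size=1000))

# Inter-annotator agreement: mean pairwise kappa over all annotator pairs.
pair_kappas = [cohen_kappa_score(annotators[a], annotators[b])
               for a, b in combinations(annotators, 2)]
inter_annotator = np.mean(pair_kappas)

# Model-annotator agreement: mean kappa between the model and each annotator.
model_kappas = [cohen_kappa_score(model_preds, ann)
                for ann in annotators.values()]
model_agreement = np.mean(model_kappas)

print(f"annotator-annotator kappa: {inter_annotator:.3f}")
print(f"model-annotator kappa:     {model_agreement:.3f}")
# The perspectivist point: the model's agreement with annotators cannot
# meaningfully exceed the annotators' agreement with one another, so the
# latter acts as an intrinsic ceiling on achievable model performance.
```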
19th International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems, IPMU 2022
Keywords: hate speech; annotator agreement; diamond standard evaluation
Publication type: 04 Publication in conference proceedings::04b Conference paper in a volume
Handling Disagreement in Hate Speech Modelling / Kralj Novak, P.; Scantamburlo, T.; Pelicon, A.; Cinelli, M.; Mozetic, I.; Zollo, F. - CCIS, vol. 1602 (2022), pp. 681-695. (Presented at the 19th International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems, IPMU 2022, held in Milan) [10.1007/978-3-031-08974-9_54].
Files attached to this item
There are no files associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11573/1665265

Citations
  • Scopus: 7