
MaTESe: Machine Translation Evaluation as a Sequence Tagging Problem

Stefano Perrella; Lorenzo Proietti; Alessandro Scirè; Niccolò Campolungo; Roberto Navigli
2022

Abstract

Since last year, WMT human evaluation has been performed within the Multidimensional Quality Metrics (MQM) framework, in which human annotators are asked to identify error spans in translations, each annotated with an error category and a severity. In this paper, we describe our submission to the WMT 2022 Metrics Shared Task, where we propose using the same paradigm for automatic evaluation: we present the MaTESe metrics, which reframe machine translation evaluation as a sequence tagging problem. Our submission also includes a reference-free metric, named MaTESe-QE. Despite the paucity of openly available MQM data, our metrics obtain promising results, showing high levels of correlation with human judgements, while also enabling an interpretable evaluation. Moreover, MaTESe-QE can be employed in settings where manually curating reference translations is infeasible.
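
For illustration, the core idea of the abstract — predicting MQM-style error spans with a severity and deriving a quality score from them — can be sketched in a few lines of Python. The snippet below is a hypothetical sketch, not the authors' implementation: the BIO tag names, the example sentence, and the penalty weights (1 for a minor and 5 for a major error, in line with common MQM scoring) are all assumptions.

# A minimal sketch (not the authors' code) of how MQM-style span tags
# produced by a sequence labeller can be turned into a segment-level
# quality score. Tag names and penalty weights are illustrative.

from typing import List

# Assumed penalty weights, following common MQM scoring conventions.
PENALTIES = {"minor": 1.0, "major": 5.0}

def score_from_tags(tags: List[str]) -> float:
    """Convert BIO severity tags over translation tokens into a score.

    Each span beginning ("B-<severity>") counts as one error of that
    severity; the score is the negated sum of penalties, so 0.0 means
    no detected errors and more negative means worse.
    """
    penalty = 0.0
    for tag in tags:
        if tag.startswith("B-"):
            severity = tag[2:]
            penalty += PENALTIES[severity]
    return -penalty

# Example: a candidate translation whose tokens were tagged by a
# (hypothetical) token-classification model.
tokens = ["The", "cat", "sit", "on", "the", "mat", "quickly"]
tags   = ["O",   "O",   "B-major", "O", "O", "O",  "B-minor"]

print(score_from_tags(tags))  # -6.0: one major (5) + one minor (1) error

In a real sequence-tagging metric, the tags would come from a trained token-classification model reading the candidate (and, for a reference-based variant, the reference); the scoring step above is what makes the span predictions interpretable as a single number.
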
Conference on Machine Translation
machine translation; evaluation metrics; quality estimation; sequence tagging
04 Publication in conference proceedings::04b Conference paper in volume
MaTESe: Machine Translation Evaluation as a Sequence Tagging Problem / Perrella, Stefano; Proietti, Lorenzo; Scirè, Alessandro; Campolungo, Niccolò; Navigli, Roberto. - (2022), pp. 569-577. (Paper presented at the Conference on Machine Translation, held in Abu Dhabi, United Arab Emirates).
Files attached to this product

Perrella_MATESE_2022.pdf

Open access

Note: https://aclanthology.org/2022.wmt-1.51
Type: Publisher's version (version published with the publisher's layout)
License: Creative Commons
Size: 725.23 kB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1670755
Citations
  • PMC: not available
  • Scopus: 18
  • Web of Science (ISI): not available