What's the Meaning of Superhuman Performance in Today's NLU? / Tedeschi, Simone; Bos, Johan; Declerck, Thierry; Hajič, Jan; Hershcovich, Daniel; Hovy, Eduard; Koller, Alexander; Krek, Simon; Schockaert, Steven; Sennrich, Rico; Shutova, Ekaterina; Navigli, Roberto. - 1:(2023), pp. 12471-12491. (Paper presented at the Association for Computational Linguistics conference held in Toronto, Canada) [10.18653/v1/2023.acl-long.697].
What's the Meaning of Superhuman Performance in Today's NLU?
Simone Tedeschi; Johan Bos; Thierry Declerck; Jan Hajič; Daniel Hershcovich; Eduard Hovy; Alexander Koller; Simon Krek; Steven Schockaert; Rico Sennrich; Ekaterina Shutova; Roberto Navigli
2023
Abstract
In the last five years, there has been a significant focus in Natural Language Processing (NLP) on developing larger Pretrained Language Models (PLMs) and introducing benchmarks such as SuperGLUE and SQuAD to measure their abilities in language understanding, reasoning, and reading comprehension. These PLMs have achieved impressive results on these benchmarks, even surpassing human performance in some cases. This has led to claims of superhuman capabilities and the provocative idea that certain tasks have been solved. In this position paper, we take a critical look at these claims and ask whether PLMs truly have superhuman abilities and what the current benchmarks are really evaluating. We show that these benchmarks have serious limitations affecting the comparison between humans and PLMs and provide recommendations for fairer and more transparent benchmarks.

| File | Size | Format |
|---|---|---|
| Tedeschi_Whats-the-Meaning_2023.pdf (open access; publisher's version with the publisher's layout; Creative Commons license; https://aclanthology.org/2023.acl-long.697.pdf) | 687.89 kB | Adobe PDF |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.