
Probing for Predicate Argument Structures in Pretrained Language Models

Conia, Simone (first author); Navigli, Roberto (last author)
2022

Abstract

Thanks to the effectiveness and wide availability of modern pretrained language models (PLMs), recently proposed approaches have achieved remarkable results in dependency- and span-based, multilingual and cross-lingual Semantic Role Labeling (SRL). These results have prompted researchers to investigate the inner workings of modern PLMs with the aim of understanding how, where, and to what extent they encode information about SRL. In this paper, we follow this line of research and probe for predicate argument structures in PLMs. Our study shows that PLMs do encode semantic structures directly into the contextualized representation of a predicate, and also provides insights into the correlation between predicate senses and their structures, the degree of transferability between nominal and verbal structures, and how such structures are encoded across languages. Finally, we look at the practical implications of such insights and demonstrate the benefits of embedding predicate argument structure information into an SRL model.
2022
60th Annual Meeting of the Association for Computational Linguistics, ACL 2022
language modeling; semantic role labeling; semantics; natural language processing; deep learning; artificial intelligence
04 Publication in conference proceedings::04b Conference paper in volume
Probing for Predicate Argument Structures in Pretrained Language Models / Conia, Simone; Navigli, Roberto. - (2022), pp. 4622-4632. (Paper presented at the 60th Annual Meeting of the Association for Computational Linguistics, ACL 2022, held in Dublin, Ireland) [10.18653/v1/2022.acl-long.316].
Files attached to this item

File: Conia_Probing_2022.pdf (open access)

Note: Link to the publication: https://aclanthology.org/2022.acl-long.316/
Type: Publisher's version (published with the publisher's layout)
License: Creative Commons
Size: 915.5 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1639902
Citations
  • PMC: ND
  • Scopus: 7
  • Web of Science: 2