
SGL: Speaking the Graph Languages of Semantic Parsing via Multilingual Translation

Procopio, Luigi; Tripodi, Rocco; Navigli, Roberto
2021

Abstract

Graph-based semantic parsing aims to represent textual meaning through directed graphs. As one of the most promising general-purpose meaning representations, these structures and their parsing have gained significant momentum in recent years, with several diverse formalisms being proposed. Yet, owing to this very heterogeneity, most research effort has focused on solutions specific to a given formalism. In this work, instead, we reframe semantic parsing towards multiple formalisms as Multilingual Neural Machine Translation (MNMT), and propose SGL, a many-to-many seq2seq architecture trained with an MNMT objective. Backed by several experiments, we show that this framework is indeed effective once the learning procedure is enhanced with large parallel corpora coming from Machine Translation: we report competitive performance on AMR and UCCA parsing, especially once paired with pre-trained architectures. Furthermore, we find that models trained under this configuration scale remarkably well to tasks such as cross-lingual AMR parsing: SGL outperforms all its competitors by a large margin without even explicitly seeing non-English-to-AMR examples at training time and, once these examples are included as well, sets an unprecedented state of the art in this task. We release our code and our models for research purposes at https://github.com/SapienzaNLP/sgl.
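
The abstract describes the approach only at a high level: text-to-graph parsing and text-to-text translation are cast as directions of one many-to-many translation problem. Purely as an illustrative sketch (not the authors' actual pipeline), the Python snippet below shows one plausible way to format such mixed training data, prepending language/formalism tags so a single seq2seq model can route between natural languages and graph linearizations; all names, tag conventions and helper functions here are assumptions.

from typing import NamedTuple, List

class TranslationPair(NamedTuple):
    src_lang: str   # e.g. "en", "de"
    tgt_lang: str   # e.g. "amr", "ucca", "de"
    src_text: str
    tgt_text: str   # linearized graph or translated sentence

def to_seq2seq_example(pair: TranslationPair) -> dict:
    # Prepend language tags so one model handles both parsing and MT pairs,
    # mirroring how multilingual NMT selects the output language (assumed format).
    return {
        "source": f"<{pair.src_lang}> {pair.src_text}",
        "target": f"<{pair.tgt_lang}> {pair.tgt_text}",
    }

if __name__ == "__main__":
    batch: List[TranslationPair] = [
        # semantic parsing pair: English sentence -> linearized AMR
        TranslationPair("en", "amr", "The boy wants to go.",
                        "(w / want-01 :ARG0 (b / boy) :ARG1 (g / go-02 :ARG0 b))"),
        # machine translation pair mixed into the same training stream
        TranslationPair("en", "de", "The boy wants to go.",
                        "Der Junge will gehen."),
    ]
    for example in map(to_seq2seq_example, batch):
        print(example["source"], "=>", example["target"])

Mixing machine translation pairs with parsing pairs in one training stream, as sketched above, is one way to read the abstract's claim that the learning procedure is "enhanced with large parallel corpora coming from Machine Translation".
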
Year: 2021
Conference: 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Keywords: Natural Language Processing; NLP; sequence-to-sequence; Semantic Parsing
Type: 04 Publication in conference proceedings::04b Conference paper in volume
Citation: SGL: Speaking the Graph Languages of Semantic Parsing via Multilingual Translation / Procopio, Luigi; Tripodi, Rocco; Navigli, Roberto. - (2021), pp. 325-337. (Paper presented at the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, held online) [10.18653/v1/2021.naacl-main.30].
Files attached to this record

File: Procopio_SGL_2021.pdf
Access: open access
Type: Publisher's version (published version with the publisher's layout)
License: Creative Commons
Size: 731.37 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1605363
Citations
  • PMC: N/A
  • Scopus: 20
  • Web of Science (ISI): 9