
KERMIT: Complementing transformer architectures with encoders of explicit syntactic interpretations

Santilli, A.; Onorati, D.
2020

Abstract

Syntactic parsers have dominated natural language understanding for decades. Yet, their syntactic interpretations are losing centrality in downstream tasks due to the success of large-scale textual representation learners. In this paper, we propose KERMIT (Kernel-inspired Encoder with Recursive Mechanism for Interpretable Trees) to embed symbolic syntactic parse trees into artificial neural networks and to visualize how syntax is used in inference. We experimented with KERMIT paired with two state-of-the-art transformer-based universal sentence encoders (BERT and XLNet) and we showed that KERMIT can indeed boost their performance by effectively embedding human-coded universal syntactic representations in neural networks.
2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020
natural language processing; syntax; transformer models
04 Publication in conference proceedings::04b Conference paper in volume
KERMIT: Complementing transformer architectures with encoders of explicit syntactic interpretations / Zanzotto, F. M.; Santilli, A.; Ranaldi, L.; Onorati, D.; Tommasino, P.; Fallucchi, F. - (2020), pp. 256-267. (Paper presented at the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, held in Punta Cana, Dominican Republic) [10.18653/v1/2020.emnlp-main.18].
Files attached to this item

Zanzotto_KERMIT_2020.pdf
Access: open access
Type: Publisher's version (published with the publisher's layout)
License: Creative Commons
Size: 1.34 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1643092
Citations
  • PMC: n/a
  • Scopus: 35
  • Web of Science: 9