
Database reasoning over text

Fabrizio Silvestri; 2021

Abstract

Neural models have shown impressive performance gains in answering queries from natural language text. However, existing works are unable to support database queries, such as "List/Count all female athletes who were born in the 20th century", which require reasoning over sets of relevant facts with operations such as join, filtering, and aggregation. We show that while state-of-the-art transformer models perform very well on small databases, they exhibit limitations in processing noisy data, numerical operations, and queries that aggregate facts. We propose a modular architecture that answers these database-style queries over multiple spans from text and aggregates the results at scale. We evaluate the architecture using WikiNLDB, a novel dataset for exploring such queries. Our architecture scales to databases containing thousands of facts, whereas contemporary models are limited by the number of facts they can encode. In a direct comparison on small databases, our approach increases overall answer accuracy from 85% to 90%. On larger databases, our approach retains its accuracy, whereas transformer baselines cannot encode the full context.
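The abstract describes a modular pipeline: retrieve the facts relevant to a query, run a neural reader over each small support set, and then aggregate the extracted spans symbolically. The toy sketch below illustrates that decomposition only; the function names (retrieve, neural_operator, aggregate) and the rule-based stand-ins are assumptions for illustration, not the authors' implementation. In the paper the operator role is played by a transformer such as T5.

```python
# Minimal sketch of a modular retrieve -> read -> aggregate pipeline,
# in the spirit of the architecture described in the abstract.
# All names here are hypothetical; the rule-based "reader" merely
# makes the sketch runnable end to end.

from typing import Iterable, List, Optional


def retrieve(query: str, facts: List[str], top_k: int = 50) -> List[str]:
    """Stand-in for a learned retriever: rank facts by word overlap with the query."""
    q_terms = set(query.lower().split())
    ranked = sorted(facts, key=lambda f: -len(q_terms & set(f.lower().split())))
    return ranked[:top_k]


def neural_operator(query: str, fact: str) -> Optional[str]:
    """Stand-in for the seq2seq reader (a transformer such as T5 in the paper):
    extract an answer span from one fact, or None if the fact is irrelevant."""
    if "female" in fact and "born in 19" in fact:
        return fact.split(" is")[0]  # the athlete's name
    return None


def aggregate(spans: Iterable[Optional[str]], op: str) -> object:
    """Symbolic aggregation over the extracted spans (COUNT, LIST, ...)."""
    kept = [s for s in spans if s is not None]
    if op == "count":
        return len(kept)
    if op == "list":
        return sorted(set(kept))
    raise ValueError(f"unsupported aggregation: {op}")


if __name__ == "__main__":
    facts = [
        "Mia is a female athlete born in 1985.",
        "Ken is a male athlete born in 1990.",
        "Zoe is a female athlete born in 1972.",
    ]
    query = "Count all female athletes who were born in the 20th century"
    support = retrieve(query, facts)
    spans = (neural_operator(query, f) for f in support)
    print(aggregate(spans, op="count"))  # -> 2
```

Because each fact (or small group of facts) is read independently, the expensive neural step parallelizes across the database, and only the cheap symbolic aggregation needs to see all intermediate results; this is what lets the approach scale to thousands of facts where a single transformer context cannot.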
2021
Association for Computational Linguistics
NeuralDB; T5; Transformer
04 Conference proceedings publication::04b Conference paper in volume
Database reasoning over text / Thorne, James; Yazdani, Majid; Saeidi, Marzieh; Silvestri, Fabrizio; Riedel, Sebastian; Halevy, Alon Y. - (2021), pp. 3091-3104. (Paper presented at the Association for Computational Linguistics conference, held online) [10.18653/v1/2021.acl-long.241].
Files attached to this product

Thorne_Database_2021.pdf

Access: open access
Note: DOI: 10.18653/v1/2021.acl-long.241
Type: Publisher's version (published with the publisher's layout)
License: Creative Commons
Size: 987.78 kB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1573109
Citations
  • PubMed Central: ND
  • Scopus: 19
  • Web of Science: 7