
Building a web-scale dependency-parsed corpus from Common Crawl

Faralli, S. (co-first author); Ponzetto, S. P. (co-first author)

Abstract

We present DepCC, the largest-to-date linguistically analyzed corpus in English, comprising 365 million documents with 252 billion tokens and 7.5 billion named entity occurrences in 14.3 billion sentences from a web-scale crawl of the Common Crawl project. The sentences are processed with a dependency parser and a named entity tagger and contain provenance information, enabling applications ranging from training syntax-based word embeddings to open information extraction and question answering. We built an index of all sentences and their linguistic metadata, enabling quick search across the corpus. We demonstrate the utility of this corpus on the verb similarity task: a distributional model trained on our corpus yields better results than models trained on smaller corpora, such as Wikipedia, and outperforms state-of-the-art verb similarity models on the SimVerb3500 dataset.
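The abstract mentions training syntax-based word embeddings from the dependency-parsed sentences. As a minimal illustration (not the authors' pipeline), the sketch below extracts (word, syntactic context) pairs of the kind such embeddings are trained on, from a small hand-written CoNLL-style parse; the sample sentence and column layout are illustrative assumptions.

```python
# Minimal sketch (assumption, not the authors' code): turn a CoNLL-style
# dependency parse into (word, context) pairs for syntax-based embeddings.
# Columns assumed: index, word, lemma, POS, head index, dependency label.
CONLL = """\
1\tThe\tthe\tDT\t2\tdet
2\tcat\tcat\tNN\t3\tnsubj
3\tsleeps\tsleep\tVBZ\t0\troot
"""

def syntactic_contexts(conll):
    """Yield (word, context) pairs: each dependent paired with its head
    via the dependency label, plus the inverse relation for the head."""
    rows = [line.split("\t") for line in conll.strip().splitlines()]
    tokens = {int(r[0]): r[1] for r in rows}  # index -> word
    pairs = []
    for r in rows:
        idx, word, head, rel = int(r[0]), r[1], int(r[4]), r[5]
        if head == 0:
            continue  # skip the root attachment; it has no head word
        head_word = tokens[head]
        pairs.append((word, f"{head_word}/{rel}"))     # dependent -> head
        pairs.append((head_word, f"{word}/{rel}-1"))   # head -> dependent
    return pairs

print(syntactic_contexts(CONLL))
# e.g. ('cat', 'sleeps/nsubj') and the inverse ('sleeps', 'cat/nsubj-1')
```

Feeding such pairs to a generic embedding trainer gives contexts determined by syntax rather than linear word order, which is what makes a dependency-parsed corpus like DepCC useful for this task.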
2019
11th International Conference on Language Resources and Evaluation, LREC 2018
Common Crawl; Dependency parsing; Distributional semantics; Text corpus; Verb similarity; Web as a corpus
04 Publication in conference proceedings::04b Conference paper in volume
Building a web-scale dependency-parsed corpus from Common Crawl / Panchenko, A.; Ruppert, E.; Faralli, S.; Ponzetto, S. P.; Biemann, C. - (2019), pp. 1816-1823. (Paper presented at the 11th International Conference on Language Resources and Evaluation, LREC 2018, held at the Phoenix Seagaia Conference Center, Miyazaki, Japan.)
Files attached to this product
There are no files associated with this product.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1621525
Warning: the displayed data have not been validated by the university.

Citations
  • Scopus: 10
  • Web of Science: 4