
BOOKCOREF: Coreference Resolution at Book Scale / Martinelli, Giuliano; Bonomo, Tommaso; Huguet Cabot, Pere-Lluís; Navigli, Roberto. - Volume 1: Long Papers (2025), pp. 24526-24544. (Paper presented at the Association for Computational Linguistics conference, held in Vienna, Austria) [10.18653/v1/2025.acl-long.1197].

BOOKCOREF: Coreference Resolution at Book Scale

Giuliano Martinelli (co-first author); Tommaso Bonomo (co-first author); Roberto Navigli (last author)
2025

Abstract

Coreference Resolution systems are typically evaluated on benchmarks containing small- to medium-scale documents. When it comes to evaluating long texts, however, existing benchmarks, such as LitBank, remain limited in length and do not adequately assess system capabilities at the book scale, i.e., when co-referring mentions span hundreds of thousands of tokens. To fill this gap, we first put forward a novel automatic pipeline that produces high-quality Coreference Resolution annotations on full narrative texts. Then, we adopt this pipeline to create the first book-scale coreference benchmark, BOOKCOREF, with an average document length of more than 200,000 tokens. We carry out a series of experiments showing the robustness of our automatic procedure and demonstrating the value of our resource, which enables current long-document coreference systems to gain up to +20 CoNLL-F1 points when evaluated on full books. Moreover, we report on the new challenges introduced by this unprecedented book-scale setting, highlighting that current models fail to deliver the same performance they achieve on smaller documents. We release our data and code to encourage research and development of new book-scale Coreference Resolution systems at https://github.com/sapienzanlp/bookcoref.
2025
Association for Computational Linguistics
coreference resolution, information extraction, document-level extraction, corpus creation, benchmarking, long-document, book-scale
04 Conference proceedings publication::04b Conference paper in a volume
Files attached to this record
There are no files associated with this record.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1744986
Warning: the data displayed here have not been validated by the university.

Citations
  • PMC: not available
  • Scopus: not available
  • Web of Science: not available