Unsupervised Pretraining for Fact Verification by Language Model Distillation / Bazaga, A.; Lio, P.; Micklem, G. - (2024). (Paper presented at the International Conference on Learning Representations, held in Hybrid, Vienna).

Unsupervised Pretraining for Fact Verification by Language Model Distillation

Lio P.;
2024

Abstract

Fact verification aims to verify a claim using evidence from a trustworthy knowledge base. To address this challenge, algorithms must produce features for every claim that are both semantically meaningful and compact enough to find a semantic alignment with the source information. In contrast to previous work, which tackled the alignment problem by learning over annotated corpora of claims and their corresponding labels, we propose SFAVEL (Self-supervised Fact Verification via Language Model Distillation), a novel unsupervised pretraining framework that leverages pre-trained language models to distil self-supervised features into high-quality claim-fact alignments without the need for annotations. This is enabled by a novel contrastive loss function that encourages features to attain high-quality claim and evidence alignments whilst preserving the semantic relationships across the corpora. Notably, we present results that achieve a new state-of-the-art on FB15k-237 (+5.3% Hits@1) and FEVER (+8% accuracy) with linear evaluation.
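The record provides only the abstract, so as a rough illustration of the kind of contrastive claim-evidence alignment objective it describes, a minimal in-batch InfoNCE-style sketch in PyTorch might look as follows. The function name, temperature value, and batching scheme are assumptions for illustration; this is not SFAVEL's actual loss, which additionally preserves semantic relationships across the corpora.

```python
import torch
import torch.nn.functional as F

def claim_fact_contrastive_loss(claim_emb, fact_emb, temperature=0.07):
    """Illustrative InfoNCE-style claim-evidence alignment loss (not SFAVEL's objective).

    claim_emb: (N, d) claim embeddings (e.g. distilled from a language model).
    fact_emb:  (N, d) embeddings of the matching facts, aligned row by row,
               so fact_emb[i] is the positive for claim_emb[i] and every other
               row in the batch acts as an in-batch negative.
    """
    claim_emb = F.normalize(claim_emb, dim=-1)
    fact_emb = F.normalize(fact_emb, dim=-1)

    # Cosine-similarity logits between every claim and every fact in the batch.
    logits = claim_emb @ fact_emb.t() / temperature

    # The i-th claim should score highest against the i-th fact.
    targets = torch.arange(claim_emb.size(0), device=claim_emb.device)
    return F.cross_entropy(logits, targets)
```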
2024
International Conference on Learning Representations
Alignment; Computational linguistics; Knowledge based systems; Semantics
04 Conference proceedings publication::04b Conference paper in a volume
Files attached to this record

File: Bazaga_Unsupervised-Pretraining_2024.pdf
Access: open access
Note: https://openreview.net/forum?id=1mjsP8RYAw
Type: Publisher's version (published version with the publisher's layout)
Licence: Creative Commons
Size: 837.17 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1728976
Citations
  • PMC: ND
  • Scopus: 0
  • Web of Science (ISI): ND