The Distracting Effect: Understanding Irrelevant Passages in RAG / Amiraz, Chen; Cuconasu, Florin; Filice, Simone; Karnin, Zohar. - (2025), pp. 18228-18258. (63rd Annual Meeting of the Association for Computational Linguistics, Vienna, Austria) [10.18653/v1/2025.acl-long.892].

The Distracting Effect: Understanding Irrelevant Passages in RAG

Cuconasu, Florin (second author)
2025

Abstract

A well-known issue with Retrieval Augmented Generation (RAG) is that retrieved passages that are irrelevant to the query sometimes distract the answer-generating LLM, causing it to provide an incorrect response. In this paper, we shed light on this core issue and formulate the distracting effect of a passage w.r.t. a query (and an LLM). We provide a quantifiable measure of the distracting effect of a passage and demonstrate its robustness across LLMs. Our research introduces novel methods for identifying and using hard distracting passages to improve RAG systems. By fine-tuning LLMs with these carefully selected distracting passages, we achieve up to a 7.5% increase in answering accuracy compared to counterparts fine-tuned on conventional RAG datasets. Our contribution is twofold: first, we move beyond the simple binary classification of irrelevant passages as either completely unrelated or distracting, and second, we develop and analyze multiple methods for finding hard distracting passages. To our knowledge, no other research has provided such a comprehensive framework for identifying and utilizing hard distracting passages.
2025
63rd Annual Meeting of the Association for Computational Linguistics
RAG; LLM; Information Retrieval
04 Publication in conference proceedings::04b Conference paper in volume
Files attached to this item:
Amiraz_The-Distracting-Effect_2025.pdf (open access)
Note: DOI: 10.18653/v1/2025.acl-long.892
Type: Publisher's version (published with the publisher's layout)
License: Creative Commons
Size: 1.21 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11573/1744062
Citations
  • PMC: not available
  • Scopus: not available
  • Web of Science: 1