ZEBRA: Zero-Shot Example-Based Retrieval Augmentation for Commonsense Question Answering

Molfese, Francesco Maria (co-first author); Conia, Simone (co-first author); Orlando, Riccardo; Navigli, Roberto (last author)

2024

Abstract

Current Large Language Models (LLMs) have shown strong reasoning capabilities on commonsense question answering benchmarks, but the process underlying their success remains largely opaque. As a consequence, recent approaches have equipped LLMs with mechanisms for knowledge retrieval, reasoning, and introspection, not only to improve their capabilities but also to enhance the interpretability of their outputs. However, these methods require additional training, hand-crafted templates, or human-written explanations. To address these issues, we introduce ZEBRA, a zero-shot question answering framework that combines retrieval, case-based reasoning, and introspection while dispensing with the need for additional training of the LLM. Given an input question, ZEBRA retrieves relevant question-knowledge pairs from a knowledge base and generates new knowledge by reasoning over the relationships in these pairs. This generated knowledge is then used to answer the input question, improving the model's performance and interpretability. We evaluate our approach across 8 well-established commonsense reasoning benchmarks, demonstrating that ZEBRA consistently outperforms strong LLMs and previous knowledge-integration approaches, achieving an average accuracy improvement of up to 4.5 points.
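
To make the retrieve-reason-answer pipeline described in the abstract concrete, the following is a minimal sketch in Python. It is an illustrative reconstruction, not the authors' released code: the word-overlap retriever, the prompt wording, and all function names are assumptions, and `llm` stands for any text-in/text-out model callable (e.g., a wrapper around a chat model API).

```python
# Minimal sketch of the ZEBRA pipeline described in the abstract.
# NOTE: illustrative reconstruction only; retriever, prompts, and
# function names are assumptions, not the authors' implementation.

from dataclasses import dataclass

@dataclass
class Example:
    question: str
    knowledge: str  # explanation paired with this question in the knowledge base

def retrieve_examples(question: str, knowledge_base: list[Example], k: int = 5) -> list[Example]:
    """Rank question-knowledge pairs by similarity to the input question.
    A naive word-overlap score stands in for a dense retriever here."""
    def overlap(ex: Example) -> int:
        return len(set(question.lower().split()) & set(ex.question.lower().split()))
    return sorted(knowledge_base, key=overlap, reverse=True)[:k]

def build_knowledge_prompt(question: str, examples: list[Example]) -> str:
    """Prompt the LLM to generate new knowledge by reasoning over the
    retrieved question-knowledge pairs (the example-based step)."""
    demos = "\n\n".join(
        f"Question: {ex.question}\nKnowledge: {ex.knowledge}" for ex in examples
    )
    return f"{demos}\n\nQuestion: {question}\nKnowledge:"

def answer_with_knowledge(llm, question: str, choices: list[str], knowledge: str) -> str:
    """Answer the multiple-choice question conditioned on the generated knowledge."""
    options = "\n".join(f"- {c}" for c in choices)
    prompt = (
        f"Knowledge: {knowledge}\n\nQuestion: {question}\n"
        f"Choices:\n{options}\nAnswer with one of the choices."
    )
    return llm(prompt)

def zebra(llm, question: str, choices: list[str], knowledge_base: list[Example]) -> str:
    """Full pipeline: retrieve examples, generate knowledge, answer."""
    examples = retrieve_examples(question, knowledge_base)
    knowledge = llm(build_knowledge_prompt(question, examples))
    return answer_with_knowledge(llm, question, choices, knowledge)
```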
Empirical Methods in Natural Language Processing (EMNLP 2024), Miami, United States
Keywords: large language models; commonsense reasoning; question answering; natural language processing
Publication type: 04 Conference proceedings :: 04b Conference paper in volume
ZEBRA: Zero-Shot Example-Based Retrieval Augmentation for Commonsense Question Answering / Molfese, Francesco Maria; Conia, Simone; Orlando, Riccardo; Navigli, Roberto. - (2024), pp. 22429-22444. (Paper presented at the Empirical Methods in Natural Language Processing conference, held in Miami, United States) [10.18653/v1/2024.emnlp-main.1251].
Files attached to this record
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this record: https://hdl.handle.net/11573/1727951