

DanteLLM: Let’s Push Italian LLM Research Forward!

Andrea Bacciu (first author; Conceptualization); Cesare Campagnano; Giovanni Trappolini; Fabrizio Silvestri (last author; Supervision)

Abstract

In recent years, the dominance of Large Language Models (LLMs) in the English language has become evident. However, there remains a pronounced gap in resources and evaluation tools tailored for non-English languages, underscoring a significant disparity in the global AI landscape. This paper seeks to bridge this gap, focusing specifically on the Italian linguistic context. We introduce a novel benchmark and an open LLM leaderboard, designed to evaluate LLMs’ performance in Italian and provide a rigorous framework for comparative analysis. In our assessment of currently available models, we highlight their respective strengths and limitations against this standard. Crucially, we propose “DanteLLM”, a state-of-the-art LLM dedicated to Italian. Our empirical evaluations underscore DanteLLM’s superiority, as it emerges as the most performant model on our benchmark, with improvements of up to 6 points. This research not only marks a significant stride in Italian-centric natural language processing but also offers a blueprint for the development and evaluation of LLMs in other languages, championing a more inclusive AI paradigm. Our code is available at: https://github.com/RSTLess-research/DanteLLM
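For readers who want to experiment, the sketch below shows one plausible way to load an Italian instruction-tuned checkpoint and query it with the Hugging Face Transformers library. The model identifier used here is an assumption (the released checkpoint name may differ; see the GitHub repository linked above), and the prompt is purely illustrative.

```python
# A minimal sketch of querying DanteLLM with Hugging Face Transformers.
# NOTE: the model id below is hypothetical; check the repository at
# https://github.com/RSTLess-research/DanteLLM for the actual release.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RSTLess-research/DanteLLM"  # hypothetical Hub id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Ask a question in Italian ("What is the capital of Italy?").
prompt = "Qual è la capitale d'Italia?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```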
2024
LREC-COLING
large language models, Italian LLM, LLM, LLMs, benchmark, cross-linguality, multilinguality, DanteLLM, hallucinations
04 Publication in conference proceedings::04b Conference paper in volume
DanteLLM: Let’s Push Italian LLM Research Forward! / Bacciu, Andrea; Campagnano, Cesare; Trappolini, Giovanni; Silvestri, Fabrizio. - (2024), pp. 4343-4355. (Paper presented at the LREC-COLING conference held in Turin, Italy).
Files attached to this item
There are no files associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1716988

Warning! The displayed data have not been validated by the university.
