How to connect speech foundation models and large language models? What matters and what does not

Verdini F.; Melucci P.; Scardapane S.
2025

Abstract

The remarkable performance achieved by Large Language Models (LLMs) has driven research efforts to leverage them for a wide range of tasks and input modalities. In speech-to-text (S2T) tasks, the emerging solution consists of projecting the output of the encoder of a Speech Foundation Model (SFM) into the LLM embedding space through an adapter module. However, no work has yet investigated how much the downstream-task performance depends on each component (SFM, adapter, LLM), nor whether the best design of the adapter depends on the chosen SFM and LLM. To fill this gap, we evaluate the combination of 5 adapter modules, 2 LLMs (Mistral and Llama), and 2 SFMs (Whisper and SeamlessM4T) on two widespread S2T tasks, namely Automatic Speech Recognition and Speech Translation. Our results demonstrate that the SFM plays a pivotal role in downstream performance, while the adapter choice has moderate impact and depends on the SFM and LLM.
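To illustrate the architecture described in the abstract, the following is a minimal sketch (not the authors' implementation) of an adapter that maps SFM encoder states into the LLM embedding space so that speech features can be prepended to text embeddings. All class names, dimensions, and the strided-convolution compression step are illustrative assumptions; the paper compares 5 different adapter designs, which this sketch does not reproduce.

```python
# Hypothetical sketch of an SFM-encoder -> adapter -> LLM pipeline.
# Dimensions (sfm_dim, llm_dim) and module names are assumptions, not the paper's.
import torch
import torch.nn as nn

class SpeechToLLMAdapter(nn.Module):
    def __init__(self, sfm_dim: int = 1280, llm_dim: int = 4096, stride: int = 4):
        super().__init__()
        # Optional length compression before projection (one common adapter choice).
        self.compress = nn.Conv1d(sfm_dim, sfm_dim, kernel_size=stride, stride=stride)
        self.proj = nn.Linear(sfm_dim, llm_dim)

    def forward(self, sfm_states: torch.Tensor) -> torch.Tensor:
        # sfm_states: (batch, speech_frames, sfm_dim) from the SFM encoder
        x = self.compress(sfm_states.transpose(1, 2)).transpose(1, 2)
        return self.proj(x)  # (batch, compressed_frames, llm_dim)

# Usage: project dummy encoder states and prepend them to dummy text embeddings.
adapter = SpeechToLLMAdapter()
speech = adapter(torch.randn(1, 100, 1280))   # stand-in for SFM encoder output
text = torch.randn(1, 16, 4096)               # stand-in for LLM token embeddings
llm_input = torch.cat([speech, text], dim=1)  # sequence fed to the LLM
print(llm_input.shape)
```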
26th Interspeech Conference 2025
adapters; automatic speech recognition; foundation models; LLM; speech translation
04 Publication in conference proceedings::04b Conference paper in volume
How to connect speech foundation models and large language models? What matters and what does not / Verdini, F.; Melucci, P.; Perna, S.; Cariaggi, F.; Gaido, M.; Papi, S.; Mazurek, S.; Kasztelnik, M.; Bentivogli, L.; Bratieres, S.; Merialdo, P.; Scardapane, S. - (2025), pp. 1813-1817. (26th Interspeech Conference 2025, Rotterdam Ahoy Convention Centre, Ahoyweg 10, Rotterdam, Netherlands) [10.21437/Interspeech.2025-2245].
Files attached to this record
  • File: verdini_How-to-connect_2025.pdf
  • Access: open access
  • Type: Publisher's version (published version with the publisher's layout)
  • License: Creative Commons
  • Size: 456.44 kB
  • Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1754552
Citations
  • Scopus 0