Attention-likelihood relationship in transformers / Ruscio, Valeria; Maiorca, Valentino; Silvestri, Fabrizio. - (2023). (Paper presented at The Eleventh International Conference on Learning Representations, held in Kigali).

Attention-likelihood relationship in transformers

Valeria Ruscio; Valentino Maiorca; Fabrizio Silvestri
2023

Abstract

We analyze how large language models (LLMs) represent out-of-context words, investigating their reliance on the given context to capture their semantics. Our likelihood-guided text perturbations reveal a correlation between token likelihood and attention values in transformer-based language models. Extensive experiments reveal that unexpected tokens cause the model to attend less to the information coming from themselves to compute their representations, particularly at higher layers. These findings have valuable implications for assessing the robustness of LLMs in real-world scenarios. Fully reproducible codebase at https://github.com/Flegyas/AttentionLikelihood.
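The core measurement the abstract describes, relating a token's likelihood under the model to the attention its own position receives when its representation is computed, can be roughly sketched as below. This is a minimal illustration, not the authors' code from the linked repository: the choice of bert-base-uncased, the use of unmasked token probabilities as a likelihood proxy, and the averaging over attention heads are all assumptions made for the example.

```python
# Hedged sketch: correlate per-token likelihood with per-token self-attention.
# Model choice and aggregation over heads/layers are illustrative assumptions.
import torch
from scipy.stats import spearmanr
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_name = "bert-base-uncased"  # assumed model, not necessarily the one used in the paper
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name, output_attentions=True)
model.eval()

sentence = "The cat sat on the mat."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Token likelihood proxy: probability the LM head assigns to each observed token.
probs = outputs.logits.softmax(dim=-1)[0]                  # (seq_len, vocab)
token_ids = inputs["input_ids"][0]                         # (seq_len,)
likelihoods = probs[torch.arange(len(token_ids)), token_ids]

# Self-attention: how much each token attends to itself, per layer,
# taken as the diagonal of the attention matrix averaged over heads.
self_attention = torch.stack(
    [layer[0].mean(dim=0).diagonal() for layer in outputs.attentions]
)                                                          # (num_layers, seq_len)

# Rank correlation between likelihood and last-layer self-attention.
rho, p_value = spearmanr(likelihoods.numpy(), self_attention[-1].numpy())
print(f"Spearman rho={rho:.3f}, p={p_value:.3f}")
```

In the spirit of the paper's likelihood-guided perturbations, one could repeat this after replacing a word with an unexpected token and compare how the self-attention diagonal changes across layers.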
2023
The Eleventh International Conference on Learning Representations
Attention; Transformers; Large Language Models; Out-Of-Context
04 Publication in conference proceedings::04b Conference paper in volume
Files attached to this item
File: Ruscio_attention_likelihood_2023.pdf
Access: open access
Type: publisher's version (published version with the publisher's layout)
License: Creative Commons
Size: 268.5 kB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1696195