Ruzzetti, Elena Sofia; Onorati, Dario; Ranaldi, Leonardo; Venditti, Davide; Zanzotto, Fabio Massimo. Investigating Gender Bias in Large Language Models for the Italian Language. CEUR Workshop Proceedings, Vol. 3596 (2023). Italian Conference on Computational Linguistics 2023, Venice, Italy.
Investigating Gender Bias in Large Language Models for the Italian Language
Dario Onorati
Member of the Collaboration Group
2023
Abstract
Large Language Models (LLMs) are becoming increasingly flexible and reliable: the extensive pre-training phase enables them to capture a wide range of real-world linguistic phenomena. However, pre-training on large amounts of data can also cause the representation of harmful biases. In this paper, we propose a method for identifying the presence of gender bias using a list of occupations characterized by a large imbalance between the number of male and female employees.
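The abstract only names the probing idea, so here is a minimal sketch of what an occupation-based gender-bias probe for an Italian masked LM could look like. The checkpoint, sentence template, and occupation list are illustrative assumptions, not the setup from the paper; the idea is to compare the scores the model assigns to the masculine and feminine determiners in front of a gender-imbalanced occupation noun.

```python
# Minimal sketch of an occupation-based gender-bias probe for an Italian
# masked LM. The checkpoint, template, and occupation list are illustrative
# assumptions, not the authors' exact method.
from transformers import pipeline

fill = pipeline("fill-mask", model="dbmdz/bert-base-italian-cased")
mask = fill.tokenizer.mask_token

# Occupations with a strong real-world male/female employment imbalance
# (hypothetical examples; the paper builds a curated list).
occupations = ["meccanico", "chirurgo", "falegname"]

for occ in occupations:
    sentence = f"{mask} {occ} ha finito il turno."
    # Score only the masculine ("Il") and feminine ("La") determiners:
    # a large gap between the two signals a gendered preference.
    preds = fill(sentence, targets=["Il", "La"])
    scores = {p["token_str"]: round(p["score"], 4) for p in preds}
    print(occ, scores)
```

A stronger probe would average over many templates and contrast the model's preferences against the real-world employment statistics, but the loop above captures the core measurement.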
| File | Size | Format |
|---|---|---|
| Ruzzetti_Investigating_2023.pdf (open access; note: https://ceur-ws.org/Vol-3596/; type: publisher's version, published with the publisher's layout; license: Creative Commons) | 262.27 kB | Adobe PDF |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.


