Algorithmic bias, fairness, and inclusivity: a multilevel framework for justice-oriented AI / Panarese, Paola; Grasso, Marta Margherita; Solinas, Claudia. - In: AI & SOCIETY. - ISSN 1435-5655. - (2025), pp. 1-23. [10.1007/s00146-025-02451-2]
Algorithmic bias, fairness, and inclusivity: a multilevel framework for justice-oriented AI
Paola Panarese; Marta Margherita Grasso; Claudia Solinas
2025
Abstract
The increasing integration of Artificial Intelligence (AI) into decision-making processes has raised concerns about the reproduction of gender and ethnic biases within algorithmic systems. While a growing body of research has addressed this issue, the field remains fragmented due to the absence of a unified conceptual framework for bias, fairness, and inclusivity. This lack of definitional clarity hinders the development of effective mitigation strategies and exacerbates epistemological and operational inconsistencies. To address this gap, this study conducts a scoping review of the literature on algorithmic bias, adopting a socio-technical perspective to map existing research and identify critical gaps. By systematically analyzing a broad range of scholarly contributions, the review explores the conceptual and methodological approaches that shape current debates on algorithmic fairness. Using the PRISMA methodology, it synthesizes diverse strands of research to assess how bias is theorized, measured, and mitigated across disciplines. The findings reveal a persistent lack of consensus on how algorithmic bias and fairness should be defined, leading to challenges in both theoretical articulation and practical implementation. In response, this study advocates for a multilevel and multidimensional approach to bias mitigation, one that integrates computational, ethical, and social dimensions. This framework aligns with the Sustainable Development Goals (SDG 5—gender equality and SDG 10—reduced inequalities), underscoring the necessity of interdisciplinary strategies that move beyond purely technical interventions. By foregrounding a socio-technical perspective, this review highlights the urgency of designing AI systems that not only mitigate bias but also actively contribute to the construction of more equitable and inclusive technological ecosystems.


