
#DeafSafeAI: An Invitation to Design Justice / Kent, Stephanie Jo; Winston, Betsy; Ponce, Celena; Zuccala, Amir; Glass, Molly; Herold, Brienna. - (2024). (Paper presented at the workshop "Sociotechnical Consequences of AI: An Interdisciplinary Exploration of Ethical, Organizational, Social, and Computational Dimensions," held at the University of North Carolina at Chapel Hill; organized by the International Center for Ethics in the Sciences and Humanities (IZEW) together with the UNC School of Information and Library Science.)

#DeafSafeAI: An Invitation to Design Justice

Amir Zuccala
2024

Abstract

Design Justice is an approach to technology aimed at mitigating harm and generating value for historically marginalized groups (e.g., Costanza-Chock, 2020). While AI offers opportunities for empowerment, developing inclusive and accessible solutions without direct user involvement risks exacerbating conditions of epistemic injustice (Fricker, 2007), which are particularly severe for already marginalized groups (Scully, 2020). We approach the problem pragmatically, interweaving elements of communication, linguistics, commerce, and democracy. Current AI discourse rarely addresses the homogenization of the language(s) used to train models, which renders invisible both immigrants who lack fluency in a given national spoken language and deaf communities who use sign languages and other communication modalities. Emerging deaf leadership in accessibility technology research has recently identified systemic bias in sign language AI research, including the encoding of bias stemming from the positionality of h/Hearing researchers, who often focus on perceived communication barriers rather than centering sign language and d/Deaf ways of knowing and being (Desai et al., 2024). Such implicit bias affects the social, organizational, and ethical dimensions of human communication, from interpersonal interactions to sociotechnical systems. To address the specific matter of automated interpreting, two independent volunteer groups convened in the fall of 2023: the Advisory Group on AI and Sign Language Interpreting and the Interpreting SAFE AI Task Force (Stakeholders Advocating for Fairness and Ethics). These groups have since collaborated to produce reports, webinars, a two-day symposium, and numerous conference presentations throughout the language industry, aiming to establish guidance for a fundamental legal framework in the highly sensitive area of plurilingual human interaction: communication that inherently involves the use of more than one language.
This work requires distinguishing the crucial differences between human interpreting and machine translation, as well as exploring algorithmic, software, and human-computer interface design solutions for mediating the gaps. This presentation introduces #DeafSafeAIxAI, a framework for automated interpreting by artificial intelligence that prioritizes safety, accountability, fairness, and ethics in applications involving deaf individuals and, by extension, users of marginalized or minoritized languages (also called low-resource languages or languages of lesser diffusion). We openly challenge the engineers, researchers, and companies producing AI tools, processes, and relations to redesign technology-mediated communication so that these systems serve the collective public good and contribute to a robust, sustainable economy.


Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1720366
