
Decrypting disruptive technologies. Review and research agenda of explainable AI as a game changer / Dabas, Vidushi; Thomas, Asha; Khatri, Puja; Iandolo, Francesca; Usai, Antonio. - (2023). (Paper presented at the 2023 IEEE International Conference on Technology Management, Operations and Decisions (ICTMOD), held in Rabat, Morocco) [10.1109/ictmod59086.2023.10438156].

Decrypting disruptive technologies. Review and research agenda of explainable AI as a game changer

Iandolo, Francesca;
2023

Abstract

Artificial intelligence (AI) has profoundly disrupted modern life. For computer science researchers, AI experts, and professionals across a variety of fields, AI is now an intriguing subject of study. However, leveraging AI techniques exposes users to potential risks: these techniques ordinarily function as black boxes, leaving users vulnerable to inappropriate modeling methods and biased decision-making. To address this issue, the key objective of Explainable AI (XAI) frameworks is to render a system's decisions comprehensible to users, making them trustworthy and dependable. To present an extensive overview, this systematic review attempts to incorporate the majority of the XAI literature. A framework-based review of 122 articles was performed using the TCM-ADO framework (Theory, Context, Methods; Antecedents, Decisions, Outcomes). The study further seeks to highlight significant gaps in the literature and offer specific recommendations for further research. These results and recommendations can be used by researchers in computer science, the social sciences, and other fields. By displacing existing black-box AI with XAI, the integration of XAI into modern businesses has the potential to bring about social transformation by radically altering people's perceptions of AI.
Conference: 2023 IEEE International Conference on Technology Management, Operations and Decisions (ICTMOD)
explainable AI; black-box AI; XAI; explainability; artificial intelligence; SPAR-4-SLR; TCM-ADO framework
04 Publication in conference proceedings::04b Conference paper in volume
Files attached to this product

File: Iandolo_Decrypting-disruptive-technologies_2023.pdf (archive administrators only)
Type: Editorial version (published version with the publisher's layout)
License: All rights reserved
Size: 832.86 kB
Format: Adobe PDF
Access: Contact the author

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1702890
Citazioni
  • PMC: N/A
  • Scopus: 0
  • Web of Science: N/A