
Architectural components of trustworthy Artificial Intelligence / Siciliano, Federico. - (2024 Jan 31).

Architectural components of trustworthy Artificial Intelligence

SICILIANO, FEDERICO
31/01/2024

Abstract

Trustworthy Artificial Intelligence (AI) is a cornerstone of the digital era: AI systems must be not only powerful but also transparent, resilient, and accountable. This thesis, Architectural Components of Trustworthy Artificial Intelligence, explores the essential elements that underpin the development of inherently trustworthy AI systems, covering the foundations, methodologies, and innovations needed to foster trust in AI. The introduction motivates the research, states its objectives, and outlines the structure of the thesis, setting the stage for a systematic exploration. We begin with the fundamentals of explainability-by-design, introducing innovative concepts, including a novel generalization of artificial neurons, that redefine the foundations of model transparency; we further investigate concept-based explainability, shedding light on how these networks provide insight into the decision-making processes of AI models. Turning to the critical aspect of training trustworthy AI, we develop loss functions tailored to the challenges posed by noisy labels and missing data, particularly in recommender systems, and show how integrating item relevance into the loss function makes models more resilient and dependable under adverse conditions. We then broaden the investigation by introducing Trustworthy Auxiliary Frameworks, which extend beyond model-centric trustworthiness to incorporate elements such as counterfactual personalized recourse, active learning for misinformation detection, and retrieval augmentation.
These auxiliary components address crucial aspects such as data governance, monitoring, and interpretability, strengthening an AI system's trustworthiness throughout its lifecycle. The final part of the thesis summarizes the key findings and contributions to the field of Trustworthy AI, shows how the objectives outlined in the introduction were achieved, and offers insights into future research directions, emphasizing the need for ongoing innovation in this critical domain. In conclusion, this thesis represents a significant step in the ongoing pursuit of Trustworthy AI and stands as a valuable resource for researchers and practitioners striving to create AI systems that inspire trust and confidence. With the principles of trust, accountability, and transparency at its core, this research contributes to the collective effort of ensuring that AI serves humanity with the highest standards of ethics and responsibility.
Files attached to this item

File: Tesi_dottorato_Siciliano.pdf
Note: complete thesis
Type: Doctoral thesis
Access: open access
License: All rights reserved
Size: 6.08 MB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1708856