Human-AI Collaboration via Trust Factors: A Collaborative Game Use Case / Fanti, Andrea; Frattolillo, Francesco; Laudati, Rosapia; Patrizi, Fabio; Iocchi, Luca. - Vol. 408 (2025), pp. 60-73. (4th International Conference on Hybrid Human-Artificial Intelligence, HHAI 2025, Pisa, Italy) [DOI: 10.3233/FAIA250625].

Human-AI Collaboration via Trust Factors: A Collaborative Game Use Case

Fanti, Andrea (first author);
Frattolillo, Francesco (second author);
Patrizi, Fabio (second-to-last author);
Iocchi, Luca (last author)
2025

Abstract

Trust is a well-established and extensively studied concept in human interactions, yet its highly subjective nature, its dependence on a wide range of influencing factors, and the absence of a single, unified definition pose significant challenges when applying it to Human-AI teams. However, being able to assess and recognize when to trust an agent is crucial, especially when deciding whether to delegate tasks or to collaborate on more intricate tasks within a team. Existing work has studied the integration of specific computable factors that are known to influence trust. In this work, we study how these methods transfer to a more concrete Human-AI collaborative puzzle game. In this setting, the AI agent employs a trust-aware Reinforcement Learning algorithm, while the Human agent's behavior is emulated by a Large Language Model. Our study considers the realistic condition of heterogeneous agents, each skilled in different areas, and we demonstrate that trust-awareness, particularly the ability to make justified decisions about task delegation, is essential for the successful operation of the team. This scenario serves as a testbed for evaluating methods of trust integration and their impact on Human-AI collaboration.
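To make the setup described in the abstract concrete, the following is a minimal Python sketch of how a computable trust factor could drive task delegation between heterogeneous agents. It is not the algorithm from the paper: the running success-rate trust estimate, the TrustTracker class, and the delegate rule are hypothetical placeholders assumed here for illustration only.

# Minimal sketch (hypothetical, not the authors' method): trust modeled as a
# per-task-type success rate, with each task delegated to whichever agent is
# currently trusted more on that task type.
from collections import defaultdict
import random

class TrustTracker:
    """Running success-rate estimate of one agent, per task type."""
    def __init__(self, prior=0.5):
        self.successes = defaultdict(float)
        self.attempts = defaultdict(float)
        self.prior = prior

    def update(self, task_type, success):
        self.attempts[task_type] += 1.0
        self.successes[task_type] += float(success)

    def trust(self, task_type):
        if self.attempts[task_type] == 0:
            return self.prior  # fall back to the prior when the agent was never observed on this task
        return self.successes[task_type] / self.attempts[task_type]

def delegate(task_type, ai_trust, human_trust, eps=0.1):
    """Return 'human' to delegate the task or 'ai' to keep it."""
    if random.random() < eps:  # occasional exploration of the other agent
        return random.choice(["ai", "human"])
    return "human" if human_trust.trust(task_type) > ai_trust.trust(task_type) else "ai"

With heterogeneous agents skilled in different areas, a rule of this kind routes each puzzle task to the agent that has proven more reliable on it, which is the kind of justified delegation decision the abstract refers to.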
2025
4th International Conference on Hybrid Human-Artificial Intelligence, HHAI 2025
Human-Robot Interaction; Trust; Reinforcement Learning; Large Language Models; Planning
04 Publication in conference proceedings::04b Conference paper in a volume
Files attached to this item
File: Fanti_Human-AI-Collaboration_2025.pdf
Access: open access
Note: DOI 10.3233/FAIA250625
Type: Publisher's version (published version with the publisher's layout)
License: Creative Commons
Size: 492.31 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1748929
Citations
  • Scopus 0