Do we trust in Artificial Intelligence? Trust and Decision-Making Behavior across multiple topics / De Marco, Christian; Panno, Angelo; Del Gatto, Claudia; Indraccolo, Allegra; Lanni, Ilenia; Brunetti, Riccardo. - (2025). (Paper presented at the 4th International Conference on Human and Artificial Rationalities, held in Paris, France).
Do we trust in Artificial Intelligence? Trust and Decision-Making Behavior across multiple topics
Ilenia Lanni
2025
Abstract
The rapid evolution of artificial intelligence is profoundly influencing many aspects of our daily lives. A recent and significant breakthrough is the emergence of Large Language Models (LLMs): these models can provide human-like responses across a vast range of topics and often serve as a trusted source of information for users. This study investigates the overall level of trust humans place in responses generated by artificial intelligence and explores whether this trust varies with the subject matter. To this end, we presented participants with 36 multiple-choice questions divided into three thematic categories: 12 on entertainment decisions (film, music, and books), 12 on practical decisions (legal, financial, and medical domains), and 12 on moral decisions (choosing between options with both positive and negative ethical consequences). For each question, participants chose between a response attributed to an AI system with expertise in the given topic and a response attributed to a human expert in the same field; which response was labeled as AI-generated was determined at random. The results revealed a significant preference for answers attributed to human experts across most domains. In conclusion, our study found no evidence of excessive trust in AI; rather, it identified significant indications of distrust toward AI systems among participants. Future studies will be essential to further investigate the cognitive mechanisms that influence perceived trust in AI, and in particular how cognitive and emotional factors interact in shaping this trust.


