Liver Cysts and Artificial Intelligence: Is AI Really a Patient-Friendly Support? / Spalice, Enrico; D'Alterio, Chiara; Lanzone, Maria; Iannone, Immacolata; De Padua, Cristina; De Pastena, Matteo; Coppola, Alessandro. - In: SURGERIES. - ISSN 2673-4095. - 6:3(2025), pp. 1-11. [10.3390/surgeries6030073]
Liver Cysts and Artificial Intelligence: Is AI Really a Patient-Friendly Support?
Spalice, Enrico (joint first author); D'Alterio, Chiara (joint first author); Lanzone, Maria; Iannone, Immacolata; De Padua, Cristina; Coppola, Alessandro (last author)
2025
Abstract
Background: With the advancement of AI-powered online tools, patients are increasingly turning to AI for guidance on healthcare-related issues. Methods: Acting as patients, we posed eight direct questions concerning a common clinical condition—liver cysts—to four AI chatbots: ChatGPT, Perplexity, Copilot, and Gemini. The responses were collected and compared both among the chatbots and with the current literature, including the most recent guidelines. Results: Overall, the responses from the four chatbots were generally consistent with the literature, with only a few inaccuracies noted. For questions addressing “grey areas” in clinical research, all chatbots provided generalized answers. ChatGPT, Copilot, and Gemini highlighted the lack of conclusive evidence in the literature, while Perplexity offered speculative correlations not supported by data. Importantly, all chatbots recommended consulting a healthcare professional. While Perplexity, Copilot, and Gemini included references in their responses, not all cited sources were academic or of medium/high evidence quality. An analysis of Flesch Reading Ease Scores and Estimated Reading Grade Levels indicated that ChatGPT and Gemini provided the most readable and comprehensible responses. Conclusions: The integration of chatbots into real-world healthcare scenarios requires thorough testing to prevent potentially serious consequences from misuse. While undeniably innovative, this technology presents significant risks if implemented improperly.

| File | Size | Format |
|---|---|---|
| Spalice_Liver-Cysts_2025.pdf (open access; publisher's version, with the publisher's layout; Creative Commons license) | 200.04 kB | Adobe PDF |
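For context on the readability metrics cited in the abstract: the Flesch Reading Ease Score and the Flesch–Kincaid Grade Level are computed from word, sentence, and syllable counts via standard formulas. The sketch below implements those formulas directly; the `count_syllables` helper is a hypothetical, naive vowel-group heuristic for illustration only, and the paper does not state which tool or syllable counter was actually used.

```python
import re


def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    """Flesch Reading Ease: higher scores mean easier text (90-100 is very easy)."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)


def flesch_kincaid_grade(words: int, sentences: int, syllables: int) -> float:
    """Flesch-Kincaid Grade Level: an approximate US school reading grade."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59


def count_syllables(word: str) -> int:
    # Naive heuristic (assumption, not the paper's method):
    # count runs of consecutive vowels, with a minimum of one syllable.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))


def score_text(text: str) -> tuple[float, float]:
    """Return (reading ease, grade level) for a plain-text response."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    word_list = re.findall(r"[A-Za-z']+", text)
    words = max(1, len(word_list))
    syllables = sum(count_syllables(w) for w in word_list)
    return (
        flesch_reading_ease(words, sentences, syllables),
        flesch_kincaid_grade(words, sentences, syllables),
    )
```

Under these formulas, longer sentences and longer (multi-syllable) words lower the ease score and raise the grade level, which is why shorter, plainer chatbot answers score as more readable.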
Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.