
Assessing Artificial Intelligence–Generated Responses to Urology Patient In-Basket Messages / Scott, Michael; Muncey, Wade; Seranio, Nicolas; Belladelli, Federico; Del Giudice, Francesco; Li, Shufeng; Ha, Albert; Glover, Frank; Zhang, Chiyuan Amy; Eisenberg, Michael L.. - In: UROLOGY PRACTICE. - ISSN 2352-0787. - 11:5(2024), pp. 793-798. [10.1097/upj.0000000000000637]

Assessing Artificial Intelligence–Generated Responses to Urology Patient In-Basket Messages

Del Giudice, Francesco;
2024

Abstract

Introduction: Electronic patient messaging utilization has increased in recent years and has been associated with physician burnout. ChatGPT is a language model that has shown the ability to generate near-human-level text responses. This study evaluated the quality of ChatGPT responses to real-world urology patient messages. Methods: One hundred electronic patient messages were collected from a practicing urologist's inbox and categorized by question content. A response to each message was generated by entering it into ChatGPT. The questions and responses were independently evaluated by 5 urologists and graded on a 5-point Likert scale: questions were graded on difficulty, and responses on accuracy, completeness, harmfulness, helpfulness, and intelligibility. Whether the response could be sent to a patient was also assessed. Results: Overall, 47% of responses were deemed acceptable to send to patients. ChatGPT performed better on easy questions: 56% of responses to easy questions were acceptable to send, compared with 34% of responses to difficult questions (P = .03). Responses to easy questions were also more accurate, complete, helpful, and intelligible than responses to difficult questions. There was no difference in response quality based on question content. Conclusions: ChatGPT generated acceptable responses to nearly 50% of patient messages, with better performance on easy questions than on difficult ones. Using ChatGPT to help respond to patient messages could decrease the time burden on the care team and improve clinician wellness. Performance will likely continue to improve with advances in generative artificial intelligence technology.
2024
artificial intelligence; electronic medical record; quality improvement; urology
01 Journal publication::01a Journal article
Files attached to this item
There are no files associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1733460
Warning: the displayed data have not been validated by the university.

Citations
  • PMC: 6
  • Scopus: 12
  • Web of Science: 12