Background: The widespread diffusion of artificial intelligence (AI) platforms is revolutionizing how health-related information is disseminated, highlighting the need for tools to evaluate the quality of such information. This study aimed to propose and validate the Quality Assessment of Medical Artificial Intelligence (QAMAI), a tool specifically designed to assess the quality of health information provided by AI platforms.
Methods: The QAMAI tool was developed by a panel of experts following guidelines for the development of new questionnaires. A total of 30 ChatGPT-4 responses, addressing patient queries, theoretical questions, and clinical head and neck surgery scenarios, were assessed by 27 reviewers from 25 academic centers worldwide. Construct validity, internal consistency, inter-rater reliability, and test-retest reliability were assessed to validate the tool.
Results: The validation was based on 792 assessments of the 30 ChatGPT-4 responses. Exploratory factor analysis revealed a unidimensional structure of the QAMAI, with a single factor comprising all items that explained 51.1% of the variance, with factor loadings ranging from 0.449 to 0.856. Overall internal consistency was high (Cronbach's alpha = 0.837). The intraclass correlation coefficient was 0.983 (95% CI 0.973-0.991; F(29,542) = 68.3; p < 0.001), indicating excellent inter-rater reliability. Test-retest reliability analysis revealed a strong correlation, with a Pearson's coefficient of 0.876 (95% CI 0.859-0.891; p < 0.001).
Conclusions: The QAMAI tool demonstrated good reliability and validity in assessing the quality of health information provided by AI platforms. Such a tool may become particularly useful for physicians as patients increasingly seek medical information on AI platforms.
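To make the internal-consistency statistic concrete, the sketch below computes Cronbach's alpha from a ratings matrix, the kind of calculation underlying the reported alpha of 0.837. This is an illustrative example only: the ratings matrix and all values here are hypothetical, not the study's data, and the study's actual analysis pipeline is not described in this record.

```python
# Illustrative sketch (hypothetical data, not the QAMAI study's):
# Cronbach's alpha for a matrix of questionnaire ratings.
import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    """ratings: shape (n_respondents, n_items), one column per item."""
    n_items = ratings.shape[1]
    item_vars = ratings.var(axis=0, ddof=1)      # variance of each item
    total_var = ratings.sum(axis=1).var(ddof=1)  # variance of total scores
    return (n_items / (n_items - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert ratings: 4 respondents x 3 items
ratings = np.array([
    [4, 5, 4],
    [3, 4, 3],
    [5, 5, 5],
    [2, 3, 2],
])
print(round(cronbach_alpha(ratings), 3))  # ≈ 0.98 for this toy matrix
```

Values above roughly 0.8, like the 0.837 reported in the abstract, are conventionally read as high internal consistency.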

Validation of the quality analysis of medical artificial intelligence (QAMAI) tool. A new tool to assess the quality of health information provided by AI platforms / Vaira, Luigi Angelo; Lechien, Jerome R.; Abbate, Vincenzo; Allevi, Fabiana; Audino, Giovanni; Beltramini, Giada Anna; Bergonzani, Michela; Boscolo-Rizzo, Paolo; Califano, Gianluigi; Cammaroto, Giovanni; Chiesa-Estomba, Carlos M.; Committeri, Umberto; Crimi, Salvatore; Curran, Nicholas R.; di Bello, Francesco; di Stadio, Arianna; Frosolini, Andrea; Gabriele, Guido; Gengler, Isabelle M.; Lonardi, Fabio; Maglitto, Fabio; Mayo-Yáñez, Miguel; Petrocelli, Marzia; Pucci, Resi; Saibene, Alberto Maria; Saponaro, Gianmarco; Tel, Alessandro; Trabalzini, Franco; Trecca, Eleonora M. C.; Vellone, Valentino; Salzano, Giovanni; De Riu, Giacomo. - In: EUROPEAN ARCHIVES OF OTO-RHINO-LARYNGOLOGY. - ISSN 0937-4477. - (2024), pp. 1-9. [10.1007/s00405-024-08710-0]

Validation of the quality analysis of medical artificial intelligence (QAMAI) tool. A new tool to assess the quality of health information provided by AI platforms

di Bello, Francesco; di Stadio, Arianna; Gabriele, Guido; Pucci, Resi; Saponaro, Gianmarco; Vellone, Valentino
2024

AI; artificial intelligence; ChatGPT; head and neck surgery; health-related information quality; machine learning; maxillofacial surgery; natural language processing; neural networks; otorhinolaryngology
01 Journal publication::01a Journal article
Files attached to this record:
Vaira_Validation_2024.pdf
  Access: open access
  Note: journal article
  Type: Post-print (version following peer review and accepted for publication)
  License: Creative Commons
  Size: 825.48 kB
  Format: Adobe PDF

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1710003
Citations:
  • PMC: 0
  • Scopus: 1
  • Web of Science: 0