This is an overview page with metadata for this scientific article. The full article is available from the publisher.
A Cross-Sectional Study to Evaluate the Effectiveness of Patient Information Guides Produced by ChatGPT Versus Google Gemini for Three Pediatric Illnesses
Citations: 3
Authors: 4
Year: 2025
Abstract
OBJECTIVES: Educating pediatric patients and their caregivers about their disease is crucial for improving treatment adherence, recognizing complications early, and alleviating anxiety. AI tools such as ChatGPT and Google Gemini offer personalized education, benefiting patients and providers, and are increasingly used in healthcare. This study compares patient education guides created by ChatGPT and Google Gemini for acute otitis media, pneumonia, and pharyngitis.

METHODS: Patient information guides on the three pediatric diseases generated by ChatGPT and Google Gemini were compared on several variables (word count, sentence count, average words per sentence, average syllables per word, grade level, and ease score). Readability was assessed with the Flesch-Kincaid calculator, similarity with QuillBot, and reliability with the modified DISCERN score. Statistical analysis was performed in R v4.3.2.

RESULTS: The two tools' responses were compared statistically. No significant difference was found in word count (ChatGPT: 477.3; Google Gemini: 394.0; p=0.0765) or sentence count (ChatGPT: 35.33; Google Gemini: 46.33; p=0.184). Google Gemini scored higher on reading ease (ChatGPT: 37.79; Google Gemini: 57.10) and lower on grade level (ChatGPT: 11.40; Google Gemini: 7.43), but these differences were not statistically significant (p>0.05), indicating no clear superiority.

CONCLUSIONS FOR PRACTICE: Comparing patient education guides created by the two tools for acute otitis media, pneumonia, and pharyngitis revealed no statistically significant difference that would establish the superiority of one AI tool over the other. Further studies should comprehensively evaluate various AI tools across a broader range of diseases. It is also important to assess whether AI tools can provide real-time, verifiable content based on the latest medical advancements.
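The Flesch metrics cited in the abstract (reading ease and grade level) follow standard published formulas based on word, sentence, and syllable counts. A minimal sketch of those formulas, for orientation (function and variable names are my own, not taken from the study):

```python
def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    """Flesch Reading Ease: higher scores mean easier text
    (roughly 90-100 ~ 5th grade; 0-30 ~ college-graduate level)."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words: int, sentences: int, syllables: int) -> float:
    """Flesch-Kincaid Grade Level: approximate U.S. school grade
    needed to understand the text (lower = more accessible)."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
```

For example, a 100-word text with 5 sentences and 150 syllables yields a reading ease of about 59.6 and a grade level of about 9.9, i.e. suitable for a high-school reader. On these scales, Gemini's reported ease of 57.10 and grade level of 7.43 correspond to plainer language than ChatGPT's 37.79 and 11.40.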
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,644 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,550 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,061 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,850 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations