This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Evaluating AI-Generated Patient Education Guides: A Comparative Study of ChatGPT and Deepseek
Citations: 5
Authors: 2
Year: 2025
Abstract
Introduction
Artificial intelligence (AI) chatbots, including ChatGPT and DeepSeek, are becoming popular tools for generating patient education materials for chronic diseases. AI chatbots are useful as supplements to traditional counseling but lack the empathy and intuition of healthcare professionals, making them most effective when used alongside human therapists. The objective of the study is to compare ChatGPT-4o and DeepSeek V3-generated patient education guides for epilepsy, heart failure, chronic obstructive pulmonary disease (COPD), and chronic kidney disease (CKD).

Methodology
In this cross-sectional study, standardized prompts for each disease were entered into ChatGPT and DeepSeek. The resultant texts were evaluated for readability, originality, quality, and suitability. Unpaired t-tests were performed to analyze statistical differences between the two tools.

Results
Both AI tools created patient education materials with similar word and sentence counts, readability scores, reliability, and suitability in all areas, except for the similarity percentage, which was significantly higher in ChatGPT outputs (p=0.049). The readability scores indicated that both tools produced content above the recommended level for patient materials. Both tools yielded high similarity indices that exceeded accepted academic thresholds. Reliability scores were moderate, and while understandability was high, actionability scores were suboptimal for both models.

Conclusion
The patient education materials produced by ChatGPT and DeepSeek are similar in nature, but neither satisfies recommended standards for readability, originality, or actionability. Both still require additional fine-tuning and human oversight to enhance accessibility, reliability, and practical utility in clinical settings.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,402 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,270 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,702 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,507 citations