This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Enhancing pediatric asthma management in underdeveloped regions through ChatGPT training for doctors: a randomized controlled trial
2
Citations
11
Authors
2025
Year
Abstract
Background: Childhood asthma represents a significant challenge globally, especially in underdeveloped regions. Recent advancements in Large Language Models (LLMs), such as ChatGPT, offer promising improvements in medical service quality.

Methods: This randomized controlled trial assessed the effectiveness of ChatGPT in enhancing physicians' childhood asthma management skills. A total of 192 doctors from varied healthcare environments in China were divided into a control group, receiving traditional medical literature training, and an intervention group, trained in utilizing ChatGPT. Assessments conducted before training, after training, and at a 2-week follow-up measured the training's impact.

Results: The intervention group showed significant improvement, with test scores increasing by approximately 20 points out of 100 (improving to 72 ± 8 from baseline, vs. the control group's increase to 50 ± 9). Post-training, regular ChatGPT usage in the intervention group jumped from 6.3% to 62%, markedly above the control group's 4.3%. Moreover, physicians in the intervention group reported higher levels of familiarity, effectiveness, satisfaction, and intention to use ChatGPT in the future.

Conclusion: ChatGPT training significantly improves childhood asthma management among physicians in underdeveloped regions. This underscores the utility of LLMs like ChatGPT as effective educational tools in medical training and highlights the need for further research into their integration and impact on patient outcomes.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,644 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,550 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,061 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,850 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations