This is an overview page with metadata for this scientific work. The full article is available from the publisher.
The Potential of ChatGPT as a Source of Information for Kidney Transplant Recipients and Their Caregivers
4
Citations
7
Authors
2025
Year
Abstract
BACKGROUND: Educating adolescents who will undergo kidney transplantation and enhancing their knowledge are among the primary objectives of their care. While specific interventions exist to achieve this, they require extensive resources. The rise of large language models such as ChatGPT-3.5 offers potential assistance in providing information to patients. This study aimed to evaluate the accuracy, relevance, and safety of ChatGPT-3.5's responses to patient-centered questions about pediatric kidney transplantation, and to assess whether ChatGPT-3.5 could serve as a supplementary educational tool for adolescents and their caregivers in a complex medical context.

METHODS: A total of 37 questions about kidney transplantation were presented to ChatGPT-3.5, which was prompted to respond as a health professional would to a layperson. Five pediatric nephrologists independently evaluated the outputs for accuracy, relevance, comprehensiveness, understandability, readability, and safety.

RESULTS: The mean accuracy, relevance, and comprehensiveness scores across all outputs were 4.51, 4.56, and 4.55, respectively. Of the 37 outputs, four were rated as completely accurate, and seven as completely relevant and comprehensive. Only one output scored below 4 in accuracy, relevance, and comprehensiveness. Twelve outputs were considered potentially risky, but only three had a risk grade of moderate or higher. Outputs considered risky had below-average accuracy and relevance.

CONCLUSION: Our findings suggest that ChatGPT could be a useful tool for adolescents awaiting kidney transplantation or their caregivers. However, the presence of potentially risky outputs underscores the necessity of human oversight and validation.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,646 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,554 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,071 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,851 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations