OpenAlex · Updated hourly · Last updated: 14.05.2026, 02:12

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Evaluating the ability of AI chatbots to provide informed consent information for common oncological surgeries

2025 · 1 citation · Annals of The Royal College of Surgeons of England · Open Access
Open full text at the publisher

Citations: 1 · Authors: 5 · Year: 2025

Abstract

INTRODUCTION: Informed consent is fundamental to oncological surgery, but communication is often hindered by medical terminology, inconsistent explanations and variation in patient understanding. Large language models may improve accessibility by generating simplified consent information. This study assessed whether four leading artificial intelligence (AI) chatbots, ChatGPT (GPT-4), Gemini (2.5 Flash), DeepSeek (R1) and Grok (3), could generate information understandable to patients and comprehensive enough to support informed consent for six common oncological operations.

METHODS: Standardised patient-style prompts were applied, and chatbot outputs were evaluated for readability using the Flesch Reading Ease Score (FRES), Flesch-Kincaid Grade Level (FKGL) and Gunning Fog Index (GF). Quality and completeness, including coverage of procedure details, risks, benefits, alternatives and consequences of no treatment, were assessed by three consultant surgeons using a modified DISCERN instrument.

RESULTS: Gemini produced the highest quality information (mean DISCERN 72.3 ± 3.0), followed by Grok (63.0 ± 1.8), whereas ChatGPT (48.0 ± 4.7) and DeepSeek (47.1 ± 1.8) performed less well. DeepSeek generated the most readable content (FKGL 9.7; GF 10.8), although no model achieved the recommended sixth-grade level. Common limitations included the lack of systematic referencing (except Gemini), occasional factual inaccuracies, reliance on predominantly US-based resources, and failure to assess patient understanding.

CONCLUSION: Overall, AI chatbots can provide structured, accessible information to support surgical consent, but current limitations restrict their use as standalone tools. Gemini demonstrated the strongest balance of readability and quality, yet all models require refinement to improve reliability, equity, and patient safety. At present, AI should complement, rather than replace, clinician-led consent discussions.
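For context, the three readability metrics named in the abstract are computed from word, sentence and syllable counts using standard published formulas. A minimal sketch in Python (the syllable counter here is a crude vowel-group heuristic, not the validated counting used in dedicated readability tools, so scores will differ slightly from those reported in the paper):

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count runs of vowels; drop a trailing silent "e";
    # every word has at least one syllable.
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def readability(text: str) -> tuple[float, float, float]:
    """Return (FRES, FKGL, Gunning Fog) for a passage of English text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    total_syllables = sum(count_syllables(w) for w in words)
    # Gunning Fog counts "complex" words: three or more syllables.
    complex_words = sum(1 for w in words if count_syllables(w) >= 3)
    w, s = len(words), len(sentences)
    fres = 206.835 - 1.015 * (w / s) - 84.6 * (total_syllables / w)
    fkgl = 0.39 * (w / s) + 11.8 * (total_syllables / w) - 15.59
    gf = 0.4 * ((w / s) + 100 * complex_words / w)
    return fres, fkgl, gf
```

A FKGL or GF score corresponds roughly to the US school grade needed to follow the text, which is why the abstract benchmarks the chatbots against a sixth-grade reading level.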

Topics

Artificial Intelligence in Healthcare and Education · AI in Service Interactions · Digital Mental Health Interventions