OpenAlex · Updated hourly · Last updated: 13.04.2026, 03:29

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

AI Chatbots for Patient Education on Anaesthesia-Free Capsule Endoscopy: A Comparative Readability and Quality Study (Preprint)

2026 · 0 citations · Open Access
Open full text at the publisher

0

Citations

4

Authors

2026

Year

Abstract

<sec> <title>BACKGROUND</title> Fully Automated Magnetically Controlled Capsule Endoscopy (FAMCCE) is a minimally invasive gastrointestinal diagnostic procedure that is typically performed without general anaesthesia. Despite this, misconceptions regarding anaesthesia requirements remain common and may contribute to patient anxiety and reduced acceptance. Artificial intelligence (AI) chatbots are increasingly used as sources of health information, yet their effectiveness in addressing anaesthesia-related misconceptions about FAMCCE has not been well studied. </sec> <sec> <title>OBJECTIVE</title> To evaluate and compare the readability, information quality, and patient-centred suitability of widely accessible AI chatbots in explaining the anaesthesia-free nature of FAMCCE. </sec> <sec> <title>METHODS</title> Five publicly available large language model-based chatbots were assessed using twelve standardised, patient-oriented prompts focusing on anaesthesia requirements, comfort, safety, and procedural expectations. Responses were analysed using established readability metrics (Flesch Reading Ease score, Flesch-Kincaid Grade Level, Coleman-Liau Index), the DISCERN instrument, and a five-item Likert scale evaluating clarity, comprehensiveness, readability, patient-friendliness, and informativeness. Evaluations were performed independently by three clinicians, and results were analysed using descriptive statistics and paired t-tests. </sec> <sec> <title>RESULTS</title> Significant variation was observed across chatbots. ChatGPT 5.2, Google Gemini Fast, and Microsoft Copilot Smart demonstrated comparable overall suitability for patient education, whereas Claude 4.5 produced more linguistically complex responses and Perplexity AI scored lower on readability and subjective quality measures. Higher DISCERN scores were associated with greater informational depth but increased linguistic complexity. 
</sec> <sec> <title>CONCLUSIONS</title> AI chatbots differ substantially in their ability to communicate clear, accessible, and patient-friendly information regarding anaesthesia-free FAMCCE. While several platforms show promise as supplementary educational tools, they should complement rather than replace clinician-led counselling. Further patient-centred and longitudinal research is required. </sec>
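The three readability formulas named in the METHODS section are standard published metrics. As a minimal sketch (the study's own scoring pipeline is not published, and the naive vowel-group syllable counter below is an assumption for illustration), they can be computed from word, sentence, syllable, and letter counts:

```python
import re

def _syllables(word: str) -> int:
    # Crude heuristic: count contiguous vowel groups, minimum 1 per word.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str) -> dict:
    """Compute Flesch Reading Ease, Flesch-Kincaid Grade Level,
    and Coleman-Liau Index using their standard published formulas."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    n_words = max(1, len(words))
    n_syllables = sum(_syllables(w) for w in words)
    n_letters = sum(len(w) for w in words)

    wps = n_words / sentences        # words per sentence
    spw = n_syllables / n_words      # syllables per word
    L = n_letters / n_words * 100    # letters per 100 words
    S = sentences / n_words * 100    # sentences per 100 words

    return {
        "flesch_reading_ease": 206.835 - 1.015 * wps - 84.6 * spw,
        "flesch_kincaid_grade": 0.39 * wps + 11.8 * spw - 15.59,
        "coleman_liau_index": 0.0588 * L - 0.296 * S - 15.8,
    }
```

Higher Flesch Reading Ease scores indicate easier text, while the two grade-level indices approximate the US school grade needed to understand it, which is why the abstract can report that more informative responses tended to carry higher linguistic complexity.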


Topics

Artificial Intelligence in Healthcare and Education · AI in Service Interactions · Electronic Health Records Systems