OpenAlex · Updated hourly · Last updated: 12 May 2026, 08:02

This is an overview page with metadata for this scientific work. The full article is available from the publisher.

Enhancing health literacy for people with cystic fibrosis (CF): Quality and readability of microbiology information generated by artificial intelligence (AI) platforms - A cross sectional infodemiology study

2026 · 0 citations · Open Access
Open full text at the publisher

0

Citations

3

Authors

2026

Year

Abstract

Introduction: Cystic fibrosis (CF) is the most common lethal autosomal recessive genetic condition and predominantly affects European populations. Microbiology and infection play a major role in the morbidity and mortality associated with this condition, largely driven by environmental bacteria including Pseudomonas aeruginosa, Burkholderia cenocepacia, Stenotrophomonas maltophilia and Achromobacter xylosoxidans. The pathophysiology of the disease is complex, which makes explaining it to patients difficult. The emergence of artificial intelligence offers the opportunity for large language models to answer microbiology- and infection-related questions posed by CF patients and to tailor responses to align with reading age. The aim of this study was therefore to explore how AI platforms can be used to safely prepare microbiology-related and other cystic fibrosis (CF)-related healthcare information for people with CF, their parents, family and friends. Methods: ChatGPT 4.0, Google's Gemini, and Google's AI Overview were compared by asking each to respond to microbiology/infection-related (n=25) and other patient-prompted CF questions (n=25). AI-generated responses were assessed for readability (Flesch Reading Ease, FRES; Flesch-Kincaid Grade Level, FKGL), accuracy and completeness. Results: For the AI-generated responses to the microbiology questions (n=25), the mean FRES scores were 34.30±14.53 (Gemini), 25.72±19.31 (AI Overview), and 19.46±9.06 (ChatGPT 4.0), and the mean FKGL scores were 11.54±2.51, 12.74±3.12, and 15.10±1.66, respectively. The FRES and FKGL scores for the general CF questions (n=25) showed that Gemini produced the most readable answers, followed by Google's AI Overview and ChatGPT 4.0: the mean FRES scores were 48.47±11.06, 46.65±15.88, and 34.20±9.26, and the mean FKGL scores were 9.52±1.85, 9.66±2.53, and 12.98±1.60, respectively.
Statistical analysis showed that these differences were statistically significant in both question sets, except between AI Overview and Gemini. ChatGPT was the most accurate and complete, followed by Gemini and AI Overview. Discussion: ChatGPT 4.0 has shown that it can produce information that is highly accurate and complete. Its readability, however, did not meet the target reference standards, and further research is required to ascertain whether readability can be improved by using simplification and detailed sentence commands. AI platforms may become a valuable tool in the generation and dissemination of microbiology-related and other CF-related health information that safely supports health literacy within the lay patient community.
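The FRES and FKGL metrics reported above follow the standard Flesch formulas, which combine mean sentence length with mean syllables per word (higher FRES means easier text; FKGL approximates a US school grade level). A minimal sketch of how such scores can be computed, assuming a naive vowel-group heuristic for syllable counting (published readability tools use more sophisticated syllable dictionaries):

```python
import re

def count_syllables(word: str) -> int:
    """Naive heuristic: count vowel groups, discounting a silent trailing 'e'."""
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def readability(text: str) -> tuple[float, float]:
    """Return (FRES, FKGL) for a passage of English text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / len(sentences)   # mean words per sentence
    spw = syllables / len(words)        # mean syllables per word
    fres = 206.835 - 1.015 * wps - 84.6 * spw
    fkgl = 0.39 * wps + 11.8 * spw - 15.59
    return fres, fkgl
```

For example, a short simple sentence such as "The cat sat on the mat." scores a very high FRES and a negative FKGL, whereas dense clinical prose with long, polysyllabic sentences scores in the ranges reported for the AI platforms above.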

Related works

Authors

Institutions

Topics

Health Literacy and Information Accessibility · Artificial Intelligence in Healthcare and Education · Mobile Health and mHealth Applications