This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Omission and hallucination prevalence of clinical guidelines in diagnostic large language model outputs
Citations: 0 · Authors: 8 · Year: 2026
Abstract
OBJECTIVE: Meaningful assessments of how large language models (LLMs) incorporate clinical guidelines require large-scale testing over many queries. Here, we evaluate the prevalence of clinical guideline omissions and hallucinations in a large sample of diagnostic LLM outputs.
METHODS: We used simulated case vignettes and zero-shot prompting to generate diagnostic outputs and rationales from GPT-4.1 and DeepSeek-V3. English case vignettes were created for hypercholesterolaemia and type 2 diabetes mellitus. Each vignette contained identical medical information, while sociodemographic characteristics varied in terms of sex, ethnicity and location. We calculated the prevalence of existing and hallucinated clinical guidelines in LLM outputs across disease, LLM and sociodemographic characteristics.
RESULTS: We analysed a total of 12 197 LLM outputs; the analysis quantified three hazard areas: omissions (up to 97% for DeepSeek-V3 and 46% for GPT-4.1), hallucinations (up to 9%) and inconsistencies (guideline citation rates ranging from 0% to 78.39% across sociodemographic vignettes). Omission and hallucination rates were generally similar across vignettes with different sex or ethnicity data, yet were particularly sensitive to patient location.
DISCUSSION: This study highlights significant variability in clinical guideline prediction across two diseases, three sociodemographic variables and two LLMs, even when the LLMs were instructed with identical prompts, establishing clinical guideline prediction in LLM outputs as a stochastic event.
CONCLUSION: The stochastic nature of LLMs creates a unique challenge for evidence generation and clinical deployment. Being able to measure and capture this stochasticity within high-quality research designs will be a prerequisite to advancing the responsible deployment of LLMs in healthcare.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,652 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,567 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,083 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,856 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations