This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Representation Before Retrieval: Structured Patient Artifacts Reduce Hallucination in Clinical AI Systems
Citations: 0
Authors: 3
Year: 2026
Abstract
Background
Large language models show promise for clinical decision support, yet their propensity for hallucination—generating plausible but unsupported claims—poses substantial patient safety risks. Retrieval-augmented generation (RAG) is widely assumed to mitigate this problem by grounding outputs in retrieved documents, but this assumption remains inadequately tested in clinical contexts where information density, temporal complexity, and safety stakes are uniquely high.

Methods
We developed a system that compiles heterogeneous patient data (electronic health records, wearables, genomics, imaging reports) into structured, machine-readable artifacts with explicit provenance tracking across seven clinical domains. We evaluated four conditions: baseline LLM (C0), RAG over raw clinical text (C1), artifact-augmented single-pass generation (C2), and artifact-augmented multi-step agent workflow with verification (C3). Using 100 synthetic patient vignettes evaluated across 3 random seeds (N = 300 per condition, 1,200 total), we measured unsupported claim rates, factual accuracy, temporal consistency, contraindication detection, and clinical safety metrics using GPT-4o-mini with physician-adjudicated safety review.

Results
RAG substantially increased hallucination: unsupported claim rates rose from 5.0% (95% CI: 3.8–6.4%) at baseline to 43.6% (95% CI: 40.1–47.2%) with retrieval—an 8.7-fold increase (p < 0.001, Cohen's d = 2.31). Structured artifacts reduced unsupported claims to 8.4% (95% CI: 6.7–10.3%) in single-pass generation, a 40% relative reduction versus baseline (p = 0.02, d = 0.48). The agent workflow achieved 21.1% unsupported claims with the lowest contraindication miss rate (0.04) and highest clinician utility scores. Ablation analysis revealed that citation requirements and constraint checking contributed most to safety improvements.
Conclusions
Contrary to prevailing assumptions, RAG increases rather than decreases hallucination in clinical text generation. Structured representation with explicit provenance offers a more effective approach to grounding LLM outputs in verifiable patient data. We propose an information-theoretic framework explaining why representation quality determines the ceiling on factual reliability, while agentic verification affects uncertainty handling and safety constraint enforcement.
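The abstract reports unsupported-claim rates with 95% confidence intervals over N = 300 responses per condition. As a hypothetical sketch of how such an interval estimate could be produced (the paper does not specify its method; the function name and the claim counts below are assumptions for illustration), a Wilson score interval for a binomial proportion looks like this:

```python
# Hypothetical sketch: Wilson 95% CI for an unsupported-claim rate.
# The counts (15 unsupported responses out of 300) are illustrative only,
# not values taken from the paper.
from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion at confidence z."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

lo, hi = wilson_ci(15, 300)  # e.g. 15 unsupported claims in 300 responses
print(f"rate = {15/300:.1%}, 95% CI ({lo:.1%}, {hi:.1%})")
```

The Wilson interval is a common choice over the simpler normal approximation because it stays well-behaved for proportions near 0%, which matters when claim rates are as low as the 5.0% baseline reported here.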
Related Work
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,644 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,550 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,061 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,850 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations