This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Artificial Intelligence-Based Clinical Decision Support Systems in Primary Care: A Systematic Review of Clinical Implementation (Preprint)
Citations: 0
Authors: 8
Year: 2026
Abstract
BACKGROUND
Artificial intelligence-based clinical decision support systems (AI-CDSS) have the potential to help primary care physicians provide higher-quality care amid rising case complexity and resource constraints. However, evidence on their effectiveness and safety in real-world primary care practice is lacking.

OBJECTIVE
To synthesize evidence on the real-world effectiveness and safety of AI-CDSS compared with usual care or non–AI-based systems in primary care settings.

METHODS
We conducted a systematic review following the Cochrane Handbook for Systematic Reviews of Interventions and reported the results according to PRISMA 2020. An information specialist designed the search strategies, and we systematically searched Medline, Embase, CINAHL, Web of Science, and CENTRAL. Reviewers independently performed the selection and extraction processes. Data extraction was informed by the DECIDE-AI and CONSORT-AI reporting guidelines and the APPRAISE-AI tool. We independently assessed risk of bias using RoB-2 or ROBINS-I. Due to heterogeneity, we conducted a narrative synthesis across clinician-, patient-, and system-level outcomes as well as safety information.

RESULTS
We identified 4085 records and selected 14 records on 11 distinct studies. We identified 11 AI-CDSS across four experimental studies (high risk of bias) and seven quasi-experimental studies (serious to critical risk of bias). All studies were conducted in high-income countries. Follow-up ranged from one to 12 months. Machine-learning and deep-learning approaches were the most common (n=8/11). The interventions fulfilled different decision-support functions, most frequently treatment or order facilitation (n=8/11). Clinician-level outcomes were most frequently reported (n=8/11), followed by system-level (n=6/11) and patient-level outcomes (n=5/11). Clinician-level findings were mixed, with several studies reporting reductions in administrative or alert burden. System-level outcomes were heterogeneous and sparsely reported, with isolated studies suggesting improvements in cost-effectiveness, service utilization, patient engagement, program completion, and care delivery efficiency. Patient-level effects were limited or inconsistent, with improvements mainly observed in well-defined conditions with established diagnostic and treatment pathways. Safety evaluation was rare (n=2/11) and limited to technical malfunctions or self-reported side effects.

CONCLUSIONS
AI-CDSS implemented in primary care demonstrate limited and inconsistent effectiveness at the clinician, patient, and system levels. Safety evaluation relied mainly on self-reported side effects and technical malfunctions, without structured monitoring for AI-related risks. These findings highlight a gap between the pace of innovation and the readiness of healthcare systems for reliable clinical use. Closing this gap will require workflow-integrated system design, timely and standardized evaluation, and active safety monitoring to support trustworthy implementation in primary care.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,422 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,300 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,734 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,519 citations