This is an overview page with metadata for this scientific work. The full article is available from the publisher.
AI in Point-of-Care Imaging for Clinical Decision Support: Systematic Review of Diagnostic Accuracy, Task-Shifting, and Explainability
Citations: 0 · Authors: 5 · Year: 2026
Abstract
BACKGROUND: Artificial intelligence (AI) integrated with point-of-care (POC) imaging has emerged as a promising approach to expand diagnostic access in settings with limited specialist availability. However, no systematic review has comprehensively evaluated AI-assisted clinical decision support across multiple POC imaging modalities, assessed explainability implementation, or quantified clinical impact evidence gaps.

OBJECTIVE: To systematically evaluate and synthesize evidence on AI-based clinical decision support systems utilizing point-of-care imaging, with particular attention to task-shifting potential, explainability implementation, and clinical outcome evidence.

METHODS: We searched PubMed, Scopus, IEEE Xplore, and Web of Science (January 2018 to November 2025). We included research studies evaluating AI/machine learning systems applied to POC-capable imaging modalities in POC clinical settings with clinical decision support outputs. Two reviewers independently screened studies, extracted data across 15 domains, and assessed methodological quality using QUADAS-2. We developed frameworks to evaluate explainability implementation and clinical impact evidence. Narrative synthesis was performed due to substantial data heterogeneity.

RESULTS: Of 2,113 records identified, 20 studies met the inclusion criteria, encompassing approximately 78,296 patients across 15 countries. Studies evaluated tuberculosis (n=5), breast cancer (n=3), deep vein thrombosis (n=2), and nine other conditions using ultrasound (35%, 7/20), chest X-ray (25%, 5/20), photography-based and colposcopic imaging (15%, 3/20), fundus photography (10%, 2/20), microscopy (10%, 2/20), and dermoscopy (5%, 1/20). Median sensitivity was 92% (IQR 85.7%-98.0%), and median specificity was 90.6% (IQR 70.0%-95.7%). Task-shifting was demonstrated in 65% (13/20) of studies, with nonspecialists achieving specialist-level performance after a median of 1 hour of training. The explainable AI (XAI) implementation cascade revealed critical gaps: 75% (15/20) of studies did not mention explainability, only 10% (2/20) provided explanations to users, and none evaluated whether clinicians understood the explanations or whether XAI influenced decisions. The clinical impact pyramid showed that 15% (3/20) of studies reported technical accuracy only, 65% (13/20) reported process outcomes, 20% (4/20) documented clinical actions, and none measured patient outcomes. Methodological quality was concerning: 70% (14/20) of studies were at high or very high risk of bias, with verification bias (70%, 14/20) and selection bias (50%, 10/20) the most common. The overall certainty of evidence was very low (Grading of Recommendations, Assessment, Development, and Evaluation [GRADE] ⊕◯◯◯), primarily due to risk of bias, heterogeneity, and imprecision.

CONCLUSIONS: AI-assisted POC imaging demonstrates promising diagnostic accuracy and enables meaningful task-shifting with minimal training requirements. However, critical evidence gaps remain, including absent patient outcome measurement, inadequate explainability evaluation, regulatory misalignment, and a lack of cross-context validation despite claims of global applicability. Addressing these gaps requires implementation research with patient outcome end points, rigorous XAI evaluation, and multi-context validation before widespread adoption. Limitations include restriction to English-language publications, exclusion of grey literature, and heterogeneity precluding meta-analysis.

TRIAL REGISTRATION: This review was not prospectively registered due to time constraints.
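The accuracy summaries in the abstract rest on the standard definitions of sensitivity, specificity, and median with interquartile range (IQR). A minimal Python sketch of those computations follows; the study-level counts and percentages in it are hypothetical illustrations, not data from this review.

```python
import statistics


def sensitivity(tp: int, fn: int) -> float:
    """True-positive rate: TP / (TP + FN)."""
    return tp / (tp + fn)


def specificity(tn: int, fp: int) -> float:
    """True-negative rate: TN / (TN + FP)."""
    return tn / (tn + fp)


def median_iqr(values: list[float]) -> tuple[float, tuple[float, float]]:
    """Median and interquartile range (Q1, Q3) of study-level estimates."""
    q1, _, q3 = statistics.quantiles(values, n=4)
    return statistics.median(values), (q1, q3)


# Hypothetical single study: 92 true positives, 8 false negatives,
# 90 true negatives, 10 false positives.
sens = sensitivity(92, 8)   # 0.92
spec = specificity(90, 10)  # 0.90

# Hypothetical per-study sensitivities (%), summarized as median (IQR)
# in the same style the abstract uses, e.g. "92% (IQR 85.7%-98.0%)".
med, (q1, q3) = median_iqr([85.7, 90.0, 92.0, 95.0, 98.0])
```

Note that the review itself performed a narrative synthesis rather than a pooled meta-analysis, so such medians summarize reported study-level estimates rather than a combined effect.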