This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Deploying medical AI in low-resource settings: a scoping review of challenges and strategies
Citations: 0
Authors: 6
Year: 2026
Abstract
Background: Artificial intelligence (AI) is increasingly used to enhance diagnostic accuracy, clinical decision-making, and health system efficiency. However, its sustainable and equitable deployment in low-resource settings (LRS) remains limited. In many low- and middle-income countries (LMICs), digital health efforts are still held back by weak infrastructure, fragmented health data, limited local skills, and gaps in governance. Bringing together lessons from existing evidence and practical, real-world solutions is essential for supporting digital health approaches that are fair, workable, and sustainable over time.

Methods: Following the PRISMA-ScR framework, a scoping review was conducted of peer-reviewed literature published between January 2015 and January 2026. Searches were performed across PubMed, Scopus, Web of Science, IEEE Xplore, and Google Scholar. Eligible studies examined medical AI deployment, implementation barriers, or enabling strategies within LMIC healthcare settings. Data were extracted and analyzed thematically across four domains: digital infrastructure and connectivity, data quality and local capacity, ethics and governance, and policy and sustainability, guided by a human-centered implementation perspective and JBI methodological guidance.

Results: A total of 44 studies met the inclusion criteria. The analysis showed that making AI work in low-resource settings is less about advanced technology and more about having the right systems in place. Common problems included unreliable electricity and internet access, messy or incomplete data, limited familiarity with AI among healthcare workers, and a lack of clear rules to guide its use. Reported enabling strategies focused on investments in resilient digital infrastructure, adoption of interoperable data standards (e.g., HL7/FHIR), continuous capacity-building programs, fairness and bias auditing mechanisms, and integration of AI governance within national digital health and e-health policies supported by sustainable financing models.

Conclusions: Sustainable and equitable deployment of medical AI in LMICs requires embedding human-centered values (transparency, accountability, privacy, and equity) throughout the AI lifecycle. Aligned with the WHO (2021) and UNESCO (2021) AI ethics frameworks, this review underscores that meaningful innovation in digital health depends on augmenting, rather than replacing, human judgment through context-aware and trustworthy AI systems. However, this scoping review is limited by the inclusion of English-language studies only and by the heterogeneity of the included studies, which precluded quantitative synthesis.
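The interoperable data standards named in the Results (HL7 FHIR) represent clinical data as structured JSON resources, which is what allows records from different systems to be exchanged and pooled. As a minimal illustrative sketch (not taken from the reviewed studies; the patient identifier, LOINC code, and measurement values are hypothetical examples), a FHIR R4 Observation resource can be assembled like this:

```python
import json

def make_observation(patient_id: str, loinc_code: str,
                     value: float, unit: str) -> dict:
    """Build a minimal HL7 FHIR R4 Observation resource.

    Illustrative sketch only: field names follow the FHIR R4
    Observation structure, but the inputs are hypothetical.
    """
    return {
        "resourceType": "Observation",
        "status": "final",
        # LOINC is the standard code system for lab observations in FHIR.
        "code": {
            "coding": [{"system": "http://loinc.org", "code": loinc_code}]
        },
        "subject": {"reference": f"Patient/{patient_id}"},
        # UCUM units make the quantity machine-comparable across systems.
        "valueQuantity": {
            "value": value,
            "unit": unit,
            "system": "http://unitsofmeasure.org",
        },
    }

# Hypothetical example: a hemoglobin measurement (LOINC 718-7).
obs = make_observation("example-123", "718-7", 13.2, "g/dL")
print(json.dumps(obs, indent=2))
```

Because every conforming system emits the same field names and code systems, data pooled this way stays machine-readable even when it originates from heterogeneous facility-level record systems, which is the practical point of the interoperability strategies the review reports.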
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,490 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,376 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,832 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,553 citations