This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Auditing and Monitoring Artificial Intelligence Systems in Healthcare: A Multilayer Framework for Bias Detection, Explainability, and Regulatory Compliance
Citations: 0 · Authors: 6 · Year: 2026
Abstract
Artificial intelligence (AI) is increasingly embedded in clinical decision-making, yet most oversight approaches remain limited to pre-deployment validation or isolated technical evaluation. This gap creates risks related to bias, safety, accountability, and regulatory compliance once systems operate in real clinical environments. This article presents a normative, lifecycle-oriented auditing and monitoring framework for healthcare AI derived from a structured synthesis of literature on trustworthy AI, clinical risk management, and governance practice. The framework integrates four operational layers: (1) bias detection and fairness assessment; (2) explainability and model transparency; (3) performance, safety, and drift monitoring; and (4) regulatory and ethical compliance. Unlike prior models that treat technical validation and governance oversight separately, the proposed approach links continuous monitoring outputs to institutional decision authorities through predefined escalation pathways and role-based responsibilities across developers, clinicians, and governance bodies. The framework is designed for practical use by healthcare institutions, regulators, and AI developers. It provides guidance on monitoring frequency, the prioritization of fairness metrics based on clinical risk, the evaluation of clinically meaningful explanations, and adaptation across regulatory environments. By operationalizing auditing as an ongoing governance process rather than a one-time certification activity, the model supports the accountable and trustworthy deployment of AI systems throughout their real-world lifecycle. This work offers a structured foundation for aligning technical monitoring with clinical governance and regulatory expectations in healthcare AI implementation.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,400 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,261 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,695 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,506 citations