This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Establishing Evidence-Driven AI Risk Governance Systems to Prevent Opaque Decision-Making in Critical Public Services Across Global Jurisdictions
Citations: 0 · Authors: 1 · Year: 2022
Abstract
The rapid deployment of artificial intelligence systems within critical public services has reshaped decision-making across healthcare, social protection, law enforcement, taxation, and infrastructure management worldwide. While AI-driven tools promise efficiency, consistency, and scalability, their opacity has generated substantial governance risks, including algorithmic bias, unexplainable outcomes, accountability gaps, and declining public trust. Regulatory responses remain fragmented across jurisdictions, often reactive and insufficiently aligned with the technical characteristics of complex machine learning models. As a result, there is a growing global demand for evidence-driven AI risk governance systems that can ensure transparency, fairness, and institutional accountability while supporting innovation in public-sector operations. This study proposes a comprehensive governance paradigm that integrates empirical risk assessment, technical auditability, and enforceable policy mechanisms to address opaque AI-enabled decision-making. The framework synthesizes international regulatory principles such as proportionality, human oversight, and procedural fairness with operational governance practices including model documentation, data provenance management, performance benchmarking, and lifecycle monitoring. Central to the approach is continuous evidence generation through algorithmic impact assessments, independent audits, and post-deployment evaluation, enabling adaptive governance rather than static compliance. Focusing specifically on critical public services, the paper demonstrates how evidence-driven governance can be operationalized through standardized risk classification, cross-institutional oversight structures, and clearly defined accountability pathways. Particular emphasis is placed on high-stakes applications that affect fundamental rights, where opaque automation may amplify systemic inequities and policy failures. By aligning technical verification processes with legal and ethical mandates, the proposed framework supports consistent implementation across global, regional, and national contexts. Overall, the study contributes a scalable governance model that mitigates AI risk while reinforcing democratic legitimacy, transparency, and public confidence in automated state decision-making.
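The operational core of the abstract is a standardized risk classification that ties proportionality and human-oversight requirements to concrete obligations. As a minimal sketch only, assuming a simple tier scheme and illustrative system attributes (RiskTier, AISystemProfile, classify_risk, and all thresholds are hypothetical names invented here, not the paper's actual scheme), the following Python shows how such a classification could be encoded as transparent, auditable rules:

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """Hypothetical risk tiers, loosely modeled on proportionality-based schemes."""
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"


@dataclass
class AISystemProfile:
    """Illustrative attributes a governance body might record per deployed system."""
    domain: str                       # e.g. "healthcare", "taxation", "law_enforcement"
    affects_fundamental_rights: bool  # does the decision touch rights or entitlements?
    fully_automated: bool             # is there no meaningful human review of outputs?
    explainable_outputs: bool         # can individual decisions be explained on request?


# Domains the abstract singles out as critical public services.
CRITICAL_DOMAINS = {"healthcare", "social_protection", "law_enforcement",
                    "taxation", "infrastructure"}


def classify_risk(profile: AISystemProfile) -> RiskTier:
    """Map a system profile to a risk tier using simple, auditable rules.

    The rules below are placeholders: a real scheme would be set by the
    oversight body and grounded in algorithmic impact assessments.
    """
    if (profile.affects_fundamental_rights
            and profile.fully_automated
            and not profile.explainable_outputs):
        # Opaque, fully automated decisions over rights: flagged for refusal or redesign.
        return RiskTier.UNACCEPTABLE
    if profile.domain in CRITICAL_DOMAINS and profile.affects_fundamental_rights:
        return RiskTier.HIGH
    if profile.domain in CRITICAL_DOMAINS:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL


if __name__ == "__main__":
    benefits_model = AISystemProfile(
        domain="social_protection",
        affects_fundamental_rights=True,
        fully_automated=False,
        explainable_outputs=True,
    )
    print(classify_risk(benefits_model))  # RiskTier.HIGH: triggers audit and monitoring duties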
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI (2019 · 8,422 citations)
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead (2019 · 8,300 citations)
High-performance medicine: the convergence of human and artificial intelligence (2018 · 7,734 citations)
Proceedings of the 19th International Joint Conference on Artificial Intelligence (2005 · 5,781 citations)
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI) (2018 · 5,519 citations)