OpenAlex · Updated hourly · Last updated: 11.04.2026, 23:03

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Establishing evidence-driven AI risk governance systems to prevent opaque decision-making in Critical Public Services across Global Jurisdictions

2022 · 0 citations · International Journal of Computing and Artificial Intelligence · Open Access
Open full text at the publisher

Citations: 0 · Authors: 1 · Year: 2022

Abstract

The rapid deployment of artificial intelligence systems within critical public services has reshaped decision-making across healthcare, social protection, law enforcement, taxation, and infrastructure management worldwide. While AI-driven tools promise efficiency, consistency, and scalability, their opacity has generated substantial governance risks, including algorithmic bias, unexplainable outcomes, accountability gaps, and declining public trust. Regulatory responses remain fragmented across jurisdictions, often reactive and insufficiently aligned with the technical characteristics of complex machine learning models. As a result, there is a growing global demand for evidence-driven AI risk governance systems that can ensure transparency, fairness, and institutional accountability while supporting innovation in public-sector operations. This study proposes a comprehensive governance paradigm that integrates empirical risk assessment, technical auditability, and enforceable policy mechanisms to address opaque AI-enabled decision-making. From a broad perspective, the framework synthesizes international regulatory principles such as proportionality, human oversight, and procedural fairness with operational governance practices including model documentation, data provenance management, performance benchmarking, and lifecycle monitoring. Central to the approach is continuous evidence generation through algorithmic impact assessments, independent audits, and post-deployment evaluation, enabling adaptive governance rather than static compliance. Focusing specifically on critical public services, the paper demonstrates how evidence-driven governance can be operationalized through standardized risk classification, cross-institutional oversight structures, and clearly defined accountability pathways. Particular emphasis is placed on high-stakes applications that affect fundamental rights, where opaque automation may amplify systemic inequities and policy failures. By aligning technical verification processes with legal and ethical mandates, the proposed framework supports consistent implementation across global, regional, and national contexts. Overall, the study contributes a scalable governance model that mitigates AI risk while reinforcing democratic legitimacy, transparency, and public confidence in automated state decision-making.
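The abstract describes mechanisms such as standardized risk classification, algorithmic impact assessments, and lifecycle documentation only at a conceptual level; the paper itself publishes no code. As a purely illustrative aid, the following Python sketch shows one way such a governance record might be represented. Every class name, field, and classification rule below is a hypothetical assumption, not the framework defined in the paper.

```python
# Illustrative sketch only: all names, fields, and thresholds are hypothetical,
# meant to show how a standardized risk classification and impact-assessment
# record for a public-sector AI system might be structured.
from __future__ import annotations

from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class RiskTier(Enum):
    """Hypothetical risk tiers, loosely mirroring proportionality-based regimes."""
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"          # high-stakes uses affecting fundamental rights
    UNACCEPTABLE = "unacceptable"


@dataclass
class ImpactAssessment:
    """One evidence item from an algorithmic impact assessment or audit."""
    assessed_on: date
    assessor: str                    # e.g. an independent audit body
    affects_fundamental_rights: bool
    findings: list[str] = field(default_factory=list)


@dataclass
class AISystemRecord:
    """Lifecycle documentation for a deployed public-sector AI system."""
    name: str
    public_service: str              # e.g. "social protection", "taxation"
    data_provenance: list[str]       # provenance of training/input data sources
    assessments: list[ImpactAssessment] = field(default_factory=list)

    def classify(self) -> RiskTier:
        # Assumed rule for illustration: any assessment touching fundamental
        # rights pushes the system into the HIGH tier; otherwise it stays
        # LIMITED. Real criteria would come from the governance framework.
        if any(a.affects_fundamental_rights for a in self.assessments):
            return RiskTier.HIGH
        return RiskTier.LIMITED


if __name__ == "__main__":
    record = AISystemRecord(
        name="benefit-eligibility-scoring",
        public_service="social protection",
        data_provenance=["national-benefits-register-2021"],
        assessments=[
            ImpactAssessment(
                assessed_on=date(2022, 6, 1),
                assessor="independent audit body",
                affects_fundamental_rights=True,
                findings=["disparate error rates across demographic groups"],
            )
        ],
    )
    # A HIGH classification would trigger human oversight and
    # post-deployment monitoring under the kind of framework described.
    print(record.name, "->", record.classify().value)
```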

Topics

Artificial Intelligence in Healthcare and Education · Ethics and Social Impacts of AI · Adversarial Robustness in Machine Learning