This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Governance of High-Risk AI Systems in Healthcare and Credit Scoring
Citations: 7
Authors: 12
Year: 2025
Abstract
Motivation: Governance of artificial intelligence (AI) systems is an evolving field focused on establishing the frameworks, policies, and practices necessary to ensure the responsible use of AI systems across various sectors (Mäntymäki et al. 2022). The need for AI governance stems from growing awareness of the significant challenges these technologies pose. Research and real-world examples highlight that AI systems can lead to unintended consequences, including biases, discrimination, inaccuracies, and opaque decision-making processes (Schneider et al. 2023). This raises questions of algorithmic accountability, as it is necessary to determine who holds responsibility for algorithmic actions when systems operate in a harmful or unethical manner (Horneber and Laumer 2023). Algorithmic accountability involves identifying the parties liable for design flaws, operational errors, or misuse of AI systems, while also ensuring that mechanisms are in place to correct biases, address inaccuracies, and enforce transparency in decision-making. Defining these accountabilities is an important task in an AI governance framework, as many parties in the organization will be involved in setting up and running AI systems (e.g., data provision, decisions on data sources, defining policies). Furthermore, these and other examples show that AI systems can have far-reaching consequences for work, labor processes, employees (in terms of, e.g., technology acceptance, productivity, autonomy, identity), and organizations (e.g., structures, culture, leadership) as a whole. They therefore always raise, or should raise, questions about the humane shaping of work and organization in the context of digital governance (e.g.,
Related Works
The global landscape of AI ethics guidelines
2019 · 4,687 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,879 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,498 citations
Fairness through awareness
2012 · 3,299 citations
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,184 citations
Authors
Institutions
- Technische Universität Darmstadt (DE)
- Philipps University of Marburg (DE)
- King's College London (GB)
- University of Stuttgart (DE)
- Stuttgart Technical University of Applied Sciences (DE)
- Hessische Hochschule für Polizei und Verwaltung (DE)
- Queensland University of Technology (AU)
- Victoria University of Wellington (NZ)