This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Artificial Intelligence and the Future of Human Rights: Legal Accountability for Algorithmic Decision-Making in Democratic Societies
Citations: 0 · Authors: 2 · Year: 2026
Abstract
The rapid development of artificial intelligence (AI) has significantly transformed decision-making processes across public and private sectors, raising complex legal and ethical challenges for the protection of human rights in democratic societies. Algorithmic decision-making systems are increasingly used in areas such as employment, financial services, healthcare, law enforcement, and public administration, where automated processes may influence individuals’ rights and opportunities. While AI technologies offer significant benefits in terms of efficiency, data analysis, and institutional decision-making, they also present risks related to algorithmic bias, lack of transparency, privacy violations, and weakened procedural accountability. These concerns have prompted growing legal debates regarding how democratic societies can regulate AI systems in ways that ensure accountability and protect fundamental rights. This study examines the intersection between artificial intelligence and human rights by analyzing the legal accountability of algorithmic decision-making systems within democratic governance frameworks. Using a qualitative doctrinal and comparative legal methodology, the research evaluates the implications of AI technologies for core human rights principles, including equality, non-discrimination, privacy, and due process. The study further explores the legal challenges associated with algorithmic bias, opacity in automated decision-making, and the allocation of liability among governments, technology companies, and developers responsible for AI systems. The findings indicate that traditional legal frameworks are often insufficient to address the complex accountability issues created by AI-driven decision-making. Effective governance of algorithmic systems requires the development of new regulatory approaches that emphasize transparency, explainability, and human oversight. 
The study also highlights the importance of integrating human rights principles into AI governance frameworks and strengthening institutional oversight mechanisms to ensure that algorithmic technologies operate within the rule of law. The research concludes that a rights-based regulatory approach is essential for balancing technological innovation with the protection of fundamental freedoms. By developing clear accountability frameworks, strengthening regulatory institutions, and promoting international cooperation on AI governance, democratic societies can ensure that artificial intelligence technologies support rather than undermine human rights and democratic values in the evolving digital era.
Related Works
The global landscape of AI ethics guidelines
2019 · 4,756 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,890 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,532 citations
Fairness through awareness
2012 · 3,304 citations
AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations
2018 · 3,229 citations