This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Comparing the right to an explanation of judicial AI by function; studies on the EU, Brazil, and China
Citations: 0
Authors: 4
Year: 2026
Abstract
Acknowledgements: The authors would like to thank Gabriel Fonseca and the anonymous peer reviewers for generously reading and commenting on earlier versions of this article. The article also benefitted from valuable feedback provided by colleagues at the Institute for Information Law (IViR) and the RPA Human(e)AI at the University of Amsterdam. The authors also wish to thank the Cultuurfonds, and in particular the Kalshoven/Hopman fund, for providing financial support to enable Ljubiša Metikoš to visit and collaborate with Clara Iglesias Keller on this paper at the Weizenbaum Institute in 2024.

Courts across the world are increasingly adopting Artificial Intelligence (AI) to automate various tasks. But the opacity of judicial AI systems can hinder the ability of litigants to contest vital pieces of evidence and legal observations. One proposed remedy for the inscrutability of judicial AI has been the right to an explanation. This paper provides a comparative analysis of the scope and contents of a right to an explanation of judicial AI in the European Union (EU), Brazil, and China: three jurisdictions with distinct legal traditions and institutional architectures. We argue that such a right needs to take into account that judicial AI can perform widely different functions. We provide a classification of these functions, ranging from ancillary to impactful tasks. We subsequently compare, by function, how judicial AI would need to be explained under due process standards, data protection law, and AI regulation in the EU, Brazil, and China. We find that due process standards provide a broad normative basis for a derived right to an explanation. However, these standards do not sufficiently clarify the scope and content of such a right. Data protection law and AI regulations contain more explicitly formulated rights to an explanation that also apply to certain judicial AI systems. Nevertheless, they often exclude impactful functions of judicial AI from their scope. Within these laws there is also a lack of guidance as to what explainability substantively entails. Ultimately, this patchwork of legal frameworks suggests that the protection of litigant contestation is still incomplete, requiring further legislative and scholarly efforts to substantiate the right to an explanation in the administration of justice.
Related works
The global landscape of AI ethics guidelines
2019 · 4,626 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,876 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,443 citations
Fairness through awareness
2012 · 3,294 citations
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,184 citations