This is an overview page with metadata for this scientific work. The full article is available from the publisher.
The limits of explainability & human oversight in the EU Commission's proposal for the Regulation on AI – a critical approach focusing on medical diagnostic systems
Citations: 16
Authors: 1
Year: 2022
Abstract
The EU Commission's proposal for the Regulation on Artificial Intelligence, whilst providing important specifications on the importance of transparency of high-risk systems, falls short of providing a nuanced picture of how the technical safeguards in Articles 13 and 14 of the proposal should be translated to AI systems operating on the ground. This paper, focusing on medical diagnostic systems, offers a perspective on how transparency safeguards should be applied in practice, considering the role of post hoc explainability and uncertainty estimates in medical imaging. Medical diagnostic systems offer probabilistic judgements on disease classification tasks, which affect the interactive experience between doctor and patient. Accordingly, additional guidance on Articles 13 and 14 of the proposal is needed, considering the role of shared decision-making and patient autonomy in healthcare, to ensure that technical safeguards secure medical diagnostic systems that are safe, reliable, and trustworthy.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,402 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,270 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,702 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,507 citations