This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Coming to Terms with the Black Box Problem: How to Justify AI Systems in Health Care
Citations: 26
Authors: 1
Year: 2021
Abstract
The use of opaque, uninterpretable artificial intelligence systems in health care can be medically beneficial, but it is often viewed as potentially morally problematic on account of this opacity: the systems are black boxes. Alex John London has recently argued that opacity is not generally problematic, given that many standard therapies are explanatorily opaque and that we can rely on statistical validation of the systems in deciding whether to implement them. But is statistical validation sufficient to justify implementation of these AI systems in health care, or is it merely one of the necessary criteria? I argue that accountability, which holds an important role in preserving the patient-physician trust that allows the institution of medicine to function, contributes further to an account of AI system justification. Hence, I endorse the vanishing accountability principle: accountability in medicine, in addition to statistical validation, must be preserved. AI systems that introduce problematic gaps in accountability should not be implemented.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,469 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,358 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,803 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,542 citations