This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Building a Trustworthy Explainable AI in Healthcare
4
Citations
2
Authors
2020
Year
Abstract
The lack of clarity on how the most advanced AI algorithms do what they do raises serious concerns about the accountability, trust, and social acceptability of AI technologies. These concerns become even greater when people's well-being is at stake, as in healthcare. This calls for systems that make decisions transparent, understandable, and explainable for users. This paper briefly discusses trust in AI healthcare systems, proposes a framework relating trust to the characteristics of explanations, and outlines possible future studies toward building trustworthy Explainable AI.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,485 cit.
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,371 cit.
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,827 cit.
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 cit.
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,549 cit.