This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Spectres of medical AI: uncertainty, trust and the posthuman condition
Citations: 0 · Authors: 1 · Year: 2026
Abstract
Artificial intelligence (AI) is transforming clinical practice while simultaneously raising concerns about trust. Drawing on complexity theory, this paper argues that the crisis of trust in medical AI is rooted in multiple forms of uncertainty, including non-causal statistical relations, system-level complexity and the irreducibility of clinical judgement. It introduces a 'U-map' (Uncertainty Map), a conceptual tool that links specific forms of uncertainty to role-appropriate clinical uses such as screening, triage or deliberation aid. Using this map, the paper calibrates model claims against distinct clinical epistemic roles and develops a multidimensional account of trust that spans technological reliability, institutional governance and cultural-emotional orientations. On this basis, the paper sketches a posthuman model of care in which human-machine collaboration and distributed accountability offer a more adequate response to the normative and epistemic challenges posed by medical AI.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,539 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,426 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,921 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,586 citations