This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Clinical artificial intelligence: adoption has outpaced accountability
Citations: 0
Authors: 3
Year: 2026
Abstract
Dear Editor,

Artificial intelligence is rapidly becoming a part of routine healthcare practice. From radiology reports to triage tools and electronic decision-support systems, AI-powered outputs are increasingly influencing how care is delivered. Many view this transition as a significant step toward more accurate and efficient medicine[1]. However, the rapid clinical integration of these technologies has outpaced the development of clear accountability frameworks, leaving important questions about responsibility unresolved. Our work is in line with the TITAN Guidelines on the need for transparency in AI use in healthcare[2].

In daily practice, clinicians are frequently asked to interpret or rely on AI-generated recommendations. However, many of these systems operate as “black boxes,” yielding results without fully explaining how they were produced. When such outputs influence clinical decisions, the physician ultimately remains responsible for the outcome, even when the underlying algorithm cannot be fully interrogated. This has raised concerns that clinicians may become “liability sinks” for artificial intelligence, assuming responsibility for tools they did not design or validate[3]. This creates a fundamental asymmetry in clinical decision-making: algorithmic influence without corresponding algorithmic accountability.

Ethical guidelines and governance frameworks for AI in healthcare have indeed been proposed. International organizations and scholars have outlined principles of fairness, transparency, and oversight. However, translating these high-level principles into consistent real-world implementation remains challenging. Reviews of AI integration in healthcare continue to highlight legal ambiguity, institutional barriers, and uncertainty around liability[4]. In many settings, AI tools are incorporated into clinical workflows before accountability structures are clearly defined, creating a gap between proposed governance and real-world practice.

Moving forward, accountability must be operationalized rather than remaining a theoretical principle. A clear record of when and how AI influences clinical decisions should become standard practice. High-risk applications should require active human oversight rather than passive acceptance of algorithmic outputs. Furthermore, transparent reporting mechanisms for AI-related adverse events would allow healthcare systems to learn from errors and strengthen oversight.

The solution is not to slow innovation but to integrate it more responsibly. Artificial intelligence has the potential to significantly improve patient care. However, for this potential to be realized sustainably, clinicians and patients must be able to trust not only the technology but also the systems that govern its use. Ensuring that accountability evolves alongside innovation is essential to safeguarding both patient safety and clinician trust.
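To make the call for operationalized accountability concrete, the following is a minimal, hypothetical sketch of what a structured record of "when and how AI influences clinical decisions" could look like. It is not drawn from the letter or from any real standard; the type `AIDecisionRecord`, every field name, and the tool name `triage-assist` are illustrative assumptions only.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class ClinicianAction(Enum):
    """How the clinician responded to the AI output."""
    ACCEPTED = "accepted"
    OVERRIDDEN = "overridden"
    MODIFIED = "modified"

@dataclass
class AIDecisionRecord:
    """One auditable entry capturing when and how an AI output
    influenced a clinical decision.

    Hypothetical sketch: field names are illustrative, and a real
    deployment would follow institutional and regulatory requirements.
    """
    case_id: str                       # pseudonymized patient/case reference
    model_name: str                    # which AI system produced the output
    model_version: str                 # exact version, so the output is traceable
    ai_output: str                     # recommendation as shown to the clinician
    clinician_action: ClinicianAction  # accepted, overridden, or modified
    rationale: str                     # justification, especially on override
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Example: a clinician overrides a triage suggestion and documents why,
# leaving an audit trail of active human oversight.
record = AIDecisionRecord(
    case_id="case-0042",
    model_name="triage-assist",  # hypothetical tool name
    model_version="2.3.1",
    ai_output="low acuity",
    clinician_action=ClinicianAction.OVERRIDDEN,
    rationale="Vitals deteriorating; escalated to urgent review.",
)
print(record)
```

Recording the model version alongside the clinician's action and rationale is what would let a transparent adverse-event reporting mechanism, of the kind the letter proposes, reconstruct and learn from errors after the fact.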
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,402 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,270 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,702 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,507 citations