OpenAlex · Updated hourly · Last updated: 08.04.2026, 23:32

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Clinical artificial intelligence: adoption has outpaced accountability

2026 · 0 citations · Annals of Medicine and Surgery · Open Access
Open full text at publisher

0

Citations

3

Authors

2026

Year

Abstract

Dear Editor,

Artificial intelligence is rapidly becoming a part of routine healthcare practice. From radiology reports to triage tools and electronic decision-support systems, AI-powered outputs are increasingly influencing how care is delivered. Many view this transition as a significant step toward more accurate and efficient medicine[1]. However, the rapid clinical integration of these technologies has outpaced the development of clear accountability frameworks, leaving important questions about responsibility unresolved. Our work is in line with the TITAN Guidelines on the need for transparency in AI use in healthcare[2].

In daily practice, clinicians are frequently asked to interpret or rely on AI-generated recommendations. However, many of these systems operate as “black boxes,” yielding results without fully explaining how they were produced. When such outputs influence clinical decisions, the physician ultimately remains responsible for the outcome, even when the underlying algorithm cannot be fully interrogated. This has raised concerns that clinicians may become “liability sinks” for artificial intelligence, assuming responsibility for tools they did not design or validate[3]. This creates a fundamental asymmetry in clinical decision-making: algorithmic influence without corresponding algorithmic accountability.

Ethical guidelines and governance frameworks for AI in healthcare have indeed been proposed. International organizations and scholars have outlined principles of fairness, transparency, and oversight. However, translating these high-level principles into consistent real-world implementation remains challenging. Reviews of AI integration in healthcare continue to highlight legal ambiguity, institutional barriers, and uncertainty around liability[4]. In many settings, AI tools are incorporated into clinical workflows before accountability structures are clearly defined, creating a gap between proposed governance and real-world practice.
Moving forward, accountability must be operationalized rather than remaining a theoretical principle. A clear record of when and how AI influences clinical decisions should become standard practice. High-risk applications should require active human oversight rather than passive acceptance of algorithmic outputs. Furthermore, transparent reporting mechanisms for AI-related adverse events would allow healthcare systems to learn from errors and strengthen oversight. The solution is not to slow down innovation but to integrate it more responsibly. Artificial intelligence has the potential to significantly improve patient care. However, for this potential to be realized sustainably, clinicians and patients must be able to trust not only the technology but also the systems that govern its use. Ensuring that accountability evolves alongside innovation is essential to safeguarding both patient safety and clinician trust.

Related works

Authors

Institutions

Topics

Artificial Intelligence in Healthcare and Education · Genomics and Rare Diseases · Electronic Health Records Systems