This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Beyond the Hype: Understanding the Limits, Errors and Risk Areas of Artificial Intelligence in Gastroenterology
Citations: 0
Authors: 1
Year: 2026
Abstract
Artificial intelligence (AI) has rapidly expanded across gastroenterology, enabling advances in real-time endoscopic detection, radiologic interpretation, digital pathology, multimodal prognostication, and electronic health record (EHR)-based decision support. Despite strong performance in controlled studies and increasing regulatory adoption, the clinical integration of AI remains challenged by limitations that threaten reliability, safety, and equitable deployment. This editorial synthesizes the major sources of vulnerability across current AI applications in gastroenterology, including dataset bias, limited generalizability, annotation variability, underrepresentation of rare lesions, and performance degradation in real-world environments. Endoscopic AI systems, the most mature applications, face persistent false positives, false negatives, alert fatigue, and operator deskilling, while most algorithms lack explainability and fail to incorporate essential clinical context. Predictive models based on EHR or imaging data are hindered by data noise, evolving clinical practices, and susceptibility to model drift. Additional risks arise from automation bias, suboptimal workflow integration, and unresolved ethical, regulatory, and liability considerations. Ensuring safe and meaningful clinical deployment requires continuous post-deployment monitoring, rigorous external validation, improved interpretability, and the development of multimodal systems that integrate imaging with clinical and biological data. Equally critical is clinician education to preserve human oversight and prevent overreliance on algorithmic output. By recognizing and addressing these limitations, the field can move beyond accuracy-focused evaluation toward designing AI systems that are robust, transparent, and capable of improving patient-centered outcomes in diverse real-world settings.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,626 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,532 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,046 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,843 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations