This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Reporting guidelines in medical artificial intelligence: a systematic review and meta-analysis
Citations: 125
Authors: 5
Year: 2024
Abstract
BACKGROUND: The field of Artificial Intelligence (AI) holds transformative potential in medicine. However, the lack of universal reporting guidelines poses challenges in ensuring the validity and reproducibility of published research studies in this field.

METHODS: Based on a systematic review of academic publications and of reporting standards demanded by international consortia, regulatory stakeholders, and leading journals in the fields of medicine and medical informatics, 26 reporting guidelines published between 2009 and 2023 were included in this analysis. Guidelines were stratified by breadth (general or specific to medical fields), underlying consensus quality, and target research phase (preclinical, translational, clinical), and subsequently analyzed regarding the overlap and variation in guideline items.

RESULTS: AI reporting guidelines for medical research vary with respect to the quality of the underlying consensus process, breadth, and target research phase. Some guideline items, such as reporting of study design and model performance, recur across guidelines, whereas other items are specific to particular fields and research stages.

CONCLUSIONS: Our analysis highlights the importance of reporting guidelines in clinical AI research and underscores the need for common standards that address the identified variations and gaps in current guidelines. Overall, this comprehensive overview could help researchers and public stakeholders reinforce quality standards for increased reliability, reproducibility, clinical validity, and public trust in AI research in healthcare. This could facilitate the safe, effective, and ethical translation of AI methods into clinical applications that will ultimately improve patient outcomes.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,663 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,576 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,091 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,859 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Authors
Institutions
- Indiana University Health (US)
- Purdue University West Lafayette (US)
- Fresenius (DE)
- University Hospital Carl Gustav Carus (DE)
- Indiana University Indianapolis (US)
- Indiana University School of Medicine
- Indiana University – Purdue University Indianapolis (US)
- Technische Universität Dresden (DE)
- Universitätsklinikum Aachen (DE)
- Heidelberg University (DE)
- University Hospital Heidelberg (DE)
- National Center for Tumor Diseases (DE)
- RWTH Aachen University (DE)