This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Endorsement of artificial intelligence guidelines across leading endocrinology journals: a cross-sectional analysis
Citations: 0
Authors: 11
Year: 2026
Abstract
Background: With the expanding applications of artificial intelligence (AI) within the endocrinology research pipeline, it is essential that journals uphold explicit AI usage policies that maintain the rigor and integrity of published research. In this review, we aim to evaluate the current AI policies of leading endocrinology journals to assess the current landscape of research and the implications of its progression.

Methods: We conducted a cross-sectional review of the top endocrinology journals using the SCImago Journal Ranking (SJR) database. From November 2024 to July 2025, we reviewed AI usage guidelines from publicly available Instructions for Authors, including authorship, manuscript writing, and content/image generation. We also assessed whether journals endorsed AI-specific reporting guidelines (e.g., CONSORT-AI, SPIRIT-AI). Data were extracted independently and in duplicate using a standardized form. Reproducibility was supported through protocol registration on the Open Science Framework.

Results: Of the top 100 endocrinology journals, 84.0% (84/100) mentioned AI in their Instructions for Authors and 79.0% (79/100) required disclosure of AI use during submission. Although no journals (0/100) permitted AI tools to be credited as authors, 64.0% (64/100) allowed their use in manuscript writing, 22.0% (22/100) for content generation, and 50.0% (50/100) for image generation. Despite these guidelines, only one journal (1.0%; 1/100) required a specific reporting guideline, and very few endorsed AI statements by the ICMJE (9/100), COPE (12/100), or WAME (0/100). No statistically significant correlations were identified between AI usage policies and SJR or impact factor.

Conclusion: Many leading endocrinology journals have addressed AI use; however, their policies remain incomplete. It is critical that publishers and their journals establish explicit guidelines regarding the use of AI tools to promote transparent, reproducible, and reliable research.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,485 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,371 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,827 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,549 citations