OpenAlex · Updated hourly · Last updated: 10 Apr 2026, 22:18

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Compliance of systematic reviews and meta-analyses in ophthalmology with the PRISMA statement: an AI-based assessment and longitudinal comparison with 2017 data

2026 · 0 citations · 4 authors · BMC Medical Research Methodology · Open Access

Abstract

Systematic reviews and meta-analyses are vital in evidence-based medicine, especially in ophthalmology, where the complexity of paired data can lead to reporting challenges. In 2017, we evaluated the adherence of ophthalmology-related systematic reviews and meta-analyses to the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) 2009 statement. This study revisits the issue with a focus on adherence to the updated PRISMA 2020 checklist, compares results with the 2017 study, and explores the potential of AI in evaluating compliance. The aim of this study is to evaluate the reporting quality of systematic reviews and meta-analyses published in major ophthalmology journals between 2020 and 2024, based on the PRISMA 2020 checklist, and to compare human and AI assessments of compliance.

A total of 207 systematic reviews and meta-analyses published in 11 major ophthalmology journals were included in this study. Each article was independently assessed for adherence to the 2020 PRISMA checklist, first by two human reviewers, and subsequently by two distinct AI platforms (ChatGPT-4.0 and Gemini Pro 2.5). Compliance scores were calculated, and inter-observer agreement between human and AI evaluations was determined using Cohen's kappa statistic. The Mann–Whitney U test was employed to compare these findings with those of a 2017 study.

The mean compliance score, as assessed by human reviewers, was 36.28 out of 42 points (86.37%), indicating a substantial improvement in adherence to the PRISMA checklist compared with the level reported in the 2017 study (p < 0.00001). Compliance scores generated by the AI platforms demonstrated a moderate level of agreement with human assessments (Cohen's κ = 0.63 for ChatGPT, 0.54 for Gemini). Strong compliance was observed for background and rationale (items 3 and 4), selection criteria (items 5–10b), and limitations (items 23a–23c). Conversely, lower compliance was noted for risk of bias assessment (item 11), sensitivity analysis (items 13f and 20c), and research registration (items 24a–24c).

This study demonstrates a marked improvement in the reporting quality of systematic reviews and meta-analyses in ophthalmology following adoption of the 2020 PRISMA statement. Nonetheless, persistent deficiencies remain, particularly in the reporting of bias, sensitivity analyses, and research registration. The application of AI models offers promising potential for enhancing the efficiency and effectiveness of reporting quality assessments; however, further refinement is required to ensure consistency and accuracy. Future iterations of the PRISMA guidelines should consider explicitly addressing the role of AI in research evaluation.
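The abstract quantifies human–AI agreement with Cohen's kappa, which adjusts raw percent agreement for the agreement two raters would reach by chance. A minimal sketch of the statistic, using hypothetical binary per-item ratings (the study's actual rating data are not reproduced here):

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same items."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items both raters scored identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical ratings for 8 checklist items: 1 = compliant, 0 = not.
human = [1, 1, 0, 1, 0, 1, 1, 0]
ai    = [1, 0, 0, 1, 0, 1, 1, 1]
print(round(cohen_kappa(human, ai), 3))  # → 0.467
```

By the conventional Landis and Koch benchmarks, values around 0.54–0.63 (as reported for Gemini and ChatGPT) fall in the "moderate" range, matching the abstract's characterization.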


Topics

- Meta-analysis and systematic reviews
- Retinal Diseases and Treatments
- Artificial Intelligence in Healthcare and Education