This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
The 2025 Landscape of Generative AI in Scholarly Writing and Publishing: A Scoping Review of Uses and Ethical Approaches
Citations: 1 · Authors: 2 · Year: 2025
Abstract
Introduction: The rapid advancement of generative artificial intelligence (GenAI) has outpaced earlier reviews of its role in scholarly writing. Scholarship is shifting from problem-framing to explicitly normative work emphasising transparency, accountability, and sustained human oversight, yet the operationalisation of ethical guidance in editorial and authorial practice remains insufficiently systematised.

Purpose: This scoping review maps 2025 evidence on AI applications in academic publishing and identifies emerging normative frameworks that enable workflow efficiencies while preserving human intellectual ownership and accountability.

Method: Using the Arksey and O'Malley framework and PRISMA-ScR reporting, we systematically searched Scopus for English-language articles and reviews published in 2025. Eligibility criteria were defined via the PCC framework. Included publications were charted and analysed thematically to capture use cases, governance responses, and ethical concerns.

Results: The search identified 334 records, of which 56 publications met the inclusion criteria. The corpus shows global authorship and, after manual verification, an approximately balanced mix of reviews and primary studies, revealing substantial document-type misclassification in the database. Discourse clusters around governance (authorship and policy), technological impact (content quality), and risk mitigation (academic integrity). Prominent use cases include support for intellectual tasks (ideation, outlining, and synthesis), language enhancement, and support in peer review and editorial workflows; each catalyses distinct ethical challenges. In response, structured normative frameworks, such as tiered disclosure models and task-based AI taxonomies (e.g., GAIDeT), are emerging to make accountability auditable while preserving human oversight. Across the sample, AI is positioned as an assistive tool subordinate to human responsibility; immediate ethical regulation dominates, whereas educational integration and broader cultural critique remain secondary. We outline a research agenda focused on framework validation, improved detection infrastructures, longitudinal cognitive outcomes, human–AI collaboration design, policy standardisation, and decolonial analyses of algorithmic bias.

Conclusion: The field is moving from problem identification toward solution-oriented governance. Progress now depends on interdisciplinary efforts that translate normative principles into workable publishing procedures, ensuring GenAI strengthens, rather than undermines, academic integrity and equitable knowledge production.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,674 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,583 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,105 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,862 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations