This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Integrating Machine Learning Standards in Disseminating Machine Learning Research
Citations: 0
Authors: 7
Year: 2025
Abstract
The increasing use of AI-based approaches such as machine learning (ML) across diverse scientific fields presents challenges for reproducibly disseminating and assessing research. As ML becomes integral to a growing range of computationally intensive applications (e.g. clinical research), there is a critical need for transparent reporting methods to ensure both the comprehensibility and the reproducibility of the supporting studies. A growing number of standards, checklists and guidelines enable more standardized reporting of ML research, but their proliferation and complexity make them challenging to use, particularly in assessment and peer review, which has to date been an ad hoc process that struggles to shed light on increasingly complicated computational methods that are otherwise unintelligible to other researchers. To take the publication process beyond these black boxes, GigaScience Press has experimented with integrating many of these ML standards into the publication process; the press's broad scope necessitated more generalist and automated approaches. Here, we map the current landscape of artificial intelligence (AI) standards and outline our adoption of the DOME recommendations for machine learning in biology. We developed a publishing workflow that integrates the DOME Data Stewardship Wizard and DOME Registry tools into the peer-review and publication process. From this case study we provide journal authors, reviewers and editors with examples of approaches, workflows and strategies to more logically disseminate and review ML research, demonstrating the need for continued dialogue and collaboration among ML communities to create unified, comprehensive standards that enhance the credibility, sustainability and impact of ML-based scientific research.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,493 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,377 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,835 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,555 citations