This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
The Need for Prospective Integrity Standards for the Use of Generative AI in Research
Citations: 2
Authors: 1
Year: 2025
Abstract
The federal government has a long history of trying to find the right balance in supporting scientific and medical research while protecting the public and other researchers from potential harms. To date, this balance has been generally calibrated differently across contexts - including in clinical care, human subjects research, and research integrity. New challenges continue to face this disparate model of regulation, including novel Generative Artificial Intelligence (GenAI) tools. Because of potential increases in unintentional fabrication, falsification, and plagiarism using GenAI - and challenges establishing both these errors and intentionality in retrospect - this article argues that we should instead move toward a system that sets accepted community standards for the use of GenAI in research as prospective requirements.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,493 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,377 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,835 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,555 citations