This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Concerns of AI use in evidence synthesis based practices: collective views from the community
Citations: 0
Authors: 6
Year: 2026
Abstract
The use of artificial intelligence (AI) in research has become one of the most hotly debated topics. This is particularly true for the field of evidence synthesis, where automation through AI may lead to substantial time and resource savings. Many researchers see the potential benefits of using AI technologies, yet there is hesitation around embedding AI in practice. We explored the concerns of those working in the field of evidence synthesis through a series of online and in-person events. Data collection was conducted across two in-person and two online events: the Evidence Synthesis Hackathon (ESH) 2024, the Community, Opportunities, Research and Experience Information Retrieval (CORE) Forum, a Systematic Review Conversations (SRC) online seminar, and an online Horizon Scanning (HS) Survey. Inductive and deductive coding was used to synthesise the data into broad themes and subthemes, independently for each event. A vote counting and ranking approach was used to triangulate data across events to capture convergent and divergent themes between participant groups. Across the four events we collected a total of 248 data points (from 80 respondents), and responses were broadly similar across cohorts. Through synthesis and triangulation, we identified 10 overarching themes. The most prominent themes were knowledge and skills, and data management. Skills loss, skills gaps and job loss were highlighted within the knowledge and skills theme; bias, confidentiality and reliability were prominent for data management. Lower-ranking concerns included the environment, economics, the AI market and costs. These are valid apprehensions faced by researchers across the field of evidence synthesis and should be considered in the broader discussion of AI. Development of rigorous methodologies and guidance may help to overcome these issues by facilitating responsible and transparent use of AI.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,422 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,300 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,734 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,519 citations