This is an overview page with metadata for this scientific publication. The full article is available from the publisher.
From Evidence to Recommendations With Large Language Models: A Feasibility Study
Citations: 0
Authors: 9
Year: 2025
Abstract
BACKGROUND: Formulating evidence-based recommendations for practice guidelines is a complex process that requires substantial expertise. Artificial intelligence (AI) shows promise in accelerating the guideline development process. This study evaluates the feasibility of leveraging five large language models (LLMs) — ChatGPT-3.5, Claude 3 Sonnet, Bard, ChatGLM-4, and Kimi Chat — to generate recommendations based on structured evidence, assesses their concordance, and explores the potential of AI in this setting. METHODS: General and specific prompts were drafted and validated. We searched PubMed to identify evidence-based guidelines related to health and lifestyle. We randomly selected one recommendation from every included guideline as the sample and extracted the evidence base supporting the selected recommendations. The prompts and evidence were fed into the five LLMs to generate structured recommendations. RESULTS: ChatGPT-3.5 demonstrated the highest proficiency in comprehensively extracting and synthesizing evidence to formulate novel insights. Bard consistently adhered to existing guideline principles, aligning its output with these tenets. Claude generated fewer topical recommendations, focusing instead on evidence analysis and mitigating irrelevant information. ChatGLM-4 exhibited a balanced approach, combining evidence extraction with adherence to guideline principles. Kimi showed potential in generating concise and targeted recommendations. Across the six generated recommendations, average consistency ranged from 50% to 91.7%. CONCLUSION: The findings of this study suggest that LLMs hold substantial potential for accelerating the formulation of evidence-based recommendations. LLMs can rapidly and comprehensively extract and synthesize relevant information from structured evidence, generating recommendations that align with the available evidence.
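The abstract reports average consistency between 50% and 91.7% across the six generated recommendations but does not state the scoring formula here. A minimal sketch, assuming consistency is the share of binary concordance judgments (rater marks an LLM-generated recommendation as concordant with the source guideline or not); the function name and the example data are illustrative assumptions, not taken from the paper:

```python
def consistency_pct(judgments):
    """Percentage of binary concordance judgments that are True (concordant).

    `judgments` is a list of booleans, one per rater (or per model),
    for a single generated recommendation. Assumed scoring scheme.
    """
    if not judgments:
        raise ValueError("no judgments provided")
    return 100.0 * sum(judgments) / len(judgments)


# Hypothetical example: five of six raters judge the recommendation concordant.
print(round(consistency_pct([True, True, True, False, True, True]), 1))  # → 83.3
```

Averaging such per-recommendation percentages over the six sampled recommendations would yield figures in the 50%–91.7% range the abstract reports, under this assumed scheme.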
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,646 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,554 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,071 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,851 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations