This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Exploring variation in research priorities generated by AI tools
0
Citations
3
Authors
2026
Year
Abstract
Background: Artificial intelligence (AI) tools based on large language models (LLMs) are increasingly used by researchers and may play a role in health-related research priority-setting exercises (RPSEs). However, little is known about how these tools may differ in the types of research priorities they generate. Methods: We examined research priorities aimed at improving treatments for four diseases: cancer, COVID-19, HIV, and Alzheimer's disease. We compared the outputs from five AI tools (DeepSeek, ChatGPT, Claude, Perplexity, and Gemini) using SBERT-BioBERT embeddings and cosine similarity scores, and assessed the stability of differences between them by re-running identical prompts and slightly modified versions. Results: We found that the outputs produced by Gemini were highly similar to those produced by the other tools. The two most divergent outputs were those produced by DeepSeek and Perplexity, with the former tending to emphasise technical medical issues and the latter public health concerns. This substantive distinction between DeepSeek and Perplexity remained stable across repeated and tweaked prompts. Conclusions: Our exploratory analysis suggests that Gemini performs well for researchers who prefer to generate health-related research priorities using a single AI model. For those planning to draw on multiple models, Perplexity and DeepSeek offer complementary perspectives.
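The pairwise comparison described in the Methods can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes each tool's list of priorities has already been embedded into a single vector (in the study, via an SBERT model with BioBERT weights), and uses toy 4-dimensional vectors as stand-ins for those embeddings.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two embedding vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy stand-ins for sentence embeddings of each tool's generated
# priorities; real vectors would come from an SBERT-BioBERT encoder
# and have several hundred dimensions.
embeddings = {
    "DeepSeek":   np.array([0.9, 0.1, 0.3, 0.2]),
    "Perplexity": np.array([0.2, 0.8, 0.1, 0.7]),
    "Gemini":     np.array([0.5, 0.5, 0.3, 0.4]),
}

# Score every pair of tools, as in the study's comparison matrix.
tools = list(embeddings)
scores = {
    (t1, t2): cosine_similarity(embeddings[t1], embeddings[t2])
    for i, t1 in enumerate(tools)
    for t2 in tools[i + 1:]
}
```

A low score for a pair such as (DeepSeek, Perplexity) would indicate the kind of divergence in emphasis the abstract reports, while uniformly high scores for Gemini's pairs would reflect its similarity to the other tools.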
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,646 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,554 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,071 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,851 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations