This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
ChatGPT-4 as an Assistant for Evidence-Based Decision-Making Among General Dentists: An Observational Feasibility Study
Citations: 2
Authors: 2
Year: 2025
Abstract
Background: Evidence-based decision-making (EBDM) is essential in contemporary dentistry. However, navigating the extensive and constantly evolving scientific literature can be challenging. Large language models (LLMs), such as ChatGPT-4, have the potential to transform EBDM by analyzing vast datasets and extracting critical information, thereby significantly reducing the time required to find evidence. This observational feasibility study investigates ChatGPT-4's potential in dental EBDM, focusing on its capabilities, strengths, and limitations.

Materials and methods: In this observational feasibility study, two independent examiners conducted interactive sessions with ChatGPT-4. Five clinical scenarios were explored using the Google Chrome web browser, accessing publicly available scientific evidence from Cochrane, ADA, and PubMed. This approach ensured compliance with the Cochrane guidelines for EBDM. Two independent dentists engaged with ChatGPT-4 in simulated real-life clinical scenarios to seek scientific information. The output from ChatGPT-4 for each scenario was assessed against predetermined criteria. Its responses were evaluated for accuracy, relevance, efficiency, actionability, and ethical considerations using the ChatGPT-4 Response Scoring System (CRSS) and the ChatGPT-4 Generative Ability Matrix (C-GAM).

Results: ChatGPT-4 demonstrated consistent performance across all five clinical scenarios, achieving a C-GAM score of 46.4% and a CRSS score of 12 out of 28. It effectively identified relevant sources of evidence and provided concise summaries, potentially saving valuable time and enhancing access to information. No significant differences in scores were found when the responses to all clinical scenarios were analyzed independently by the two researchers. However, a notable limitation was its inability to provide specific web links directing users to relevant scientific articles. Additionally, while ChatGPT-4 offered suggestions for incorporating the latest scientific publications into decision-making, it could not generate direct links to these articles.

Conclusion: Despite its current limitations, ChatGPT-4, as a generative AI, can assist clinicians in making evidence-based decisions and can save time compared to conventional search engines. Ethical considerations must be prioritized in training these models to ensure that clinicians make responsible, evidence-based decisions rather than relying solely on specific evidence statements provided by ChatGPT-4. This model shows potential as an AI tool for EBDM in dentistry. Further development and training could address existing limitations and enhance its effectiveness; however, clinicians must retain ultimate responsibility for informed decisions, which requires expertise and critical evaluation of the evidence presented.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,439 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,315 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,756 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,526 citations