This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Clinical Knowledge and Reasoning Abilities of AI Large Language Models in Anesthesiology: A Comparative Study on the ABA Exam
21 citations · 4 authors · 2023
Abstract
Over the past decade, Artificial Intelligence (AI) has expanded significantly, with increased adoption across various industries, including medicine. Recently, AI large language models such as GPT-3, Bard, and GPT-4 have demonstrated remarkable language capabilities. While previous studies have explored their potential in general medical knowledge tasks, here we assess their clinical knowledge and reasoning abilities in a specialized medical context. We study and compare their performances on both the written and oral portions of the comprehensive and challenging American Board of Anesthesiology (ABA) exam, which evaluates candidates' knowledge and competence in anesthesia practice. In addition, we invited two board examiners to evaluate the AI's answers without disclosing to them the origin of those responses. Our results reveal that only GPT-4 successfully passed the written exam, achieving an accuracy of 78% on the basic section and 80% on the advanced section. In comparison, the less recent or smaller GPT-3 and Bard models scored 58% and 47% on the basic exam, and 50% and 46% on the advanced exam, respectively. Consequently, only GPT-4 was evaluated in the oral exam, with examiners concluding that it had a high likelihood of passing the actual ABA exam. Additionally, we observe that these models exhibit varying degrees of proficiency across distinct topics, which could serve as an indicator of the relative quality of information contained in the corresponding training datasets. This may also act as a predictor for determining which anesthesiology subspecialty is most likely to witness the earliest integration with AI.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,456 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,332 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,779 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,533 citations