This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Integration of artificial intelligence into ENT practice: a comparative study of real time clinical and operative scenarios
Citations: 0
Authors: 4
Year: 2026
Abstract
Background and aims: Artificial intelligence (AI)-based tools, particularly large language models such as Chat Generative Pretrained Transformer (ChatGPT), have demonstrated growing potential in supporting clinical reasoning and medical decision-making. Otorhinolaryngology (ENT) practice requires an integrated approach involving diagnostic accuracy, procedural expertise, and ethical judgment, often under time-sensitive clinical conditions. This study aimed to evaluate the performance of ChatGPT in comparison with ENT consultants when responding to structured clinical scenarios encompassing diagnostic, management, and procedural decision-making across outpatient, emergency, operative, endoscopy, and ethical domains.

Methods: A comparative analytical study was conducted at a tertiary teaching hospital to assess the performance of ENT consultants and an artificial intelligence model, ChatGPT. Fifty simulated ENT clinical scenarios were developed by senior faculty, representing the outpatient, emergency, intraoperative, endoscopy, and ethics domains. Responses were generated independently by ENT consultants and ChatGPT. All responses were anonymized and evaluated by two independent assessors using a predefined objective scoring rubric assessing accuracy, comprehensiveness, clinical reasoning, and ethical appropriateness. Descriptive statistics, including mean, median, and standard deviation, were calculated. Box-and-whisker plots were used to assess score distribution and variability. Intergroup comparisons were performed using appropriate statistical tests, with statistical significance defined as a p value of less than 0.05.

Results: ENT consultants demonstrated consistently higher mean performance scores than ChatGPT across all five clinical domains. The highest consultant scores were observed in the endoscopy and intraoperative domains. The outpatient and emergency domains also favored consultant performance, reflecting contextual clinical judgment in variable patient presentations. ChatGPT demonstrated moderate performance across all domains, with relatively better scores in ethics scenarios. Greater variability was observed in AI-generated responses, particularly in outpatient and emergency settings. In the endoscopy and intraoperative domains, ChatGPT recorded its lowest scores, suggesting limitations in operative and procedural reasoning. Mean score comparisons showed statistically significant differences between consultants and ChatGPT across all domains except ethics, where the difference was not statistically significant.

Conclusions: AI-based chatbots can provide structured support in ENT diagnostic reasoning and management planning when appropriately prompted. However, human expertise remains essential for clinical judgment, ethical reasoning, and decision-making in dynamic and uncertain clinical environments. AI should be viewed as a complementary tool rather than a substitute for consultant-level decision-making in otorhinolaryngology practice.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,418 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,288 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,726 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,516 citations