OpenAlex · Updated hourly · Last updated: 09.04.2026, 22:28

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Comparison of the Success of ChatGPT 4.0 and Google Gemini in Anatomy Questions Asked in Türkiye National Medical Specialization Exams

2025 · 0 citations · Tıp Eğitimi Dünyası · Open Access
Open full text at the publisher

Citations: 0

Authors: 2

Year: 2025

Abstract

Objective: The scientific validity of utilizing artificial intelligence (AI)-based tools for studying anatomy and preparing for medical specialization exams has increasingly become a subject of academic interest. This study aimed to evaluate the performance of ChatGPT 4.0 and Google Gemini in answering anatomy questions from the Türkiye National Medical Specialization Examination. Materials and Methods: Anatomy-related questions were extracted from exams administered biannually between 2006 and 2021, which were publicly available through the institutional website. Out of 400 questions, 384 were deemed suitable and were simultaneously posed to both AI models. Results: The overall accuracy was 80.7% for ChatGPT 4.0 and 69.3% for Gemini (p < 0.001). ChatGPT 4.0 demonstrated a significantly higher success rate in questions requiring clinical reasoning and inference (91.1%) compared to Gemini (71.4%) (p = 0.007). Conclusion: ChatGPT 4.0 outperformed Gemini in terms of accuracy and reliability, particularly for clinically oriented anatomy questions. While AI models such as ChatGPT show promise in anatomy education and exam preparation, it is advisable to use them in conjunction with validated academic resources.
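The abstract does not state which statistical test produced the reported p-values. As an illustration only (not the authors' method), the overall accuracy comparison can be checked with a standard two-proportion z-test, back-calculating the correct-answer counts from the reported percentages (80.7% and 69.3% of 384 questions):

```python
from math import sqrt, erfc

# Reported in the abstract: 384 questions were posed to both models.
n = 384
chatgpt_correct = round(0.807 * n)  # back-calculated count for ChatGPT 4.0
gemini_correct = round(0.693 * n)   # back-calculated count for Gemini

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test with pooled variance."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided tail of the normal distribution
    return z, p_value

z, p = two_proportion_z(chatgpt_correct, n, gemini_correct, n)
print(f"z = {z:.2f}, p = {p:.5f}")
```

Under these assumptions the test yields p well below 0.001, consistent with the significance level reported in the abstract.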

Topics

Artificial Intelligence in Healthcare and Education · Radiomics and Machine Learning in Medical Imaging · AI in cancer detection