OpenAlex · Updated hourly · Last updated: 12.05.2026, 09:52

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Can artificial intelligence read between the lines: Utilizing ChatGPT to evaluate medical students’ implicit attitudes towards doctor–patient relationship

2025 · 0 citations · Medical Teacher

Citations: 0 · Authors: 8 · Year: 2025

Abstract

Purpose: To explore ChatGPT's utility in evaluating medical students' implicit attitudes toward the doctor-patient relationship (DPR).

Materials and methods: This study analyzed interview transcripts from 10 medical students, categorizing implicit DPR attitudes into Care and Share dimensions, each with 5 levels. We first assessed ChatGPT's ability to identify DPR-related textual content, then compared grading results from experts, ChatGPT, and participants' self-evaluations. Finally, experts rated the acceptability of ChatGPT's performance.

Results: ChatGPT annotated fewer DPR-related segments than human experts. In grading, pre-course scores from experts and ChatGPT were comparable but lower than self-assessments. Post-course, expert scores were lower than ChatGPT's and further below self-assessments. For attitude classification, ChatGPT achieved an accuracy of 0.84-0.85, precision of 0.81-0.85, recall of 0.84-0.85, and F1 score of 0.82-0.84, with an average acceptability score of 3.9/5.

Conclusions: Large language models (LLMs) demonstrated high consistency with human experts in judging implicit attitudes. Future research should optimize LLMs and replicate this framework across diverse contexts with larger samples.
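The accuracy, precision, recall, and F1 figures reported in the results are standard classification-agreement metrics. As an illustrative sketch (not the authors' actual pipeline), here is how such agreement between model labels and expert labels on a 5-level attitude scale might be computed; the label data below is hypothetical:

```python
def classification_metrics(y_true, y_pred, labels):
    """Accuracy plus macro-averaged precision, recall, and F1
    over the given label set (pure-Python, no dependencies)."""
    assert len(y_true) == len(y_pred) and y_true
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    precisions, recalls, f1s = [], [], []
    for c in labels:
        # Per-class confusion counts: treat class c as "positive"
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if (tp + fp) else 0.0
        rec = tp / (tp + fn) if (tp + fn) else 0.0
        f1 = 2 * prec * rec / (prec + rec) if (prec + rec) else 0.0
        precisions.append(prec)
        recalls.append(rec)
        f1s.append(f1)
    n = len(labels)
    return {
        "accuracy": accuracy,
        "precision": sum(precisions) / n,
        "recall": sum(recalls) / n,
        "f1": sum(f1s) / n,
    }

# Hypothetical expert vs. model labels on a 5-level scale (1-5)
expert = [1, 2, 2, 3, 4, 5, 3, 2]
model = [1, 2, 3, 3, 4, 5, 3, 1]
metrics = classification_metrics(expert, model, labels=[1, 2, 3, 4, 5])
```

Macro-averaging (averaging per-class scores) is one common convention; the paper does not state which averaging scheme was used, so treat this as one plausible reading of the reported numbers.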
