This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Abstract 2745: Patients prefer ChatGPT to physician responses in cancer communication.
Citations: 0
Authors: 17
Year: 2026
Abstract
Introduction: Large language models like ChatGPT (GPT) are increasingly used by patients and physicians, yet it is unknown whether either group prefers GPT output to physician-authored content. This study evaluated how patients and physicians perceive and compare ChatGPT-generated versus physician-authored recommendations.

Methods: We surveyed 51 adult female breast cancer patients and 15 physicians at the University of New Mexico Comprehensive Cancer Center to compare their evaluations of GPT versus physician-authored responses to four cancer scenarios related to treatment, family dynamics, and employment. Each scenario included two blinded responses to the same question, one from GPT and one from a physician. Participants rated each response on 7-point Likert scales (1 = strongly agree, 7 = strongly disagree) for helpfulness, empathy, and informativeness, and indicated their preferred response. The primary outcome was the proportion preferring GPT (binomial test vs. 50%); secondary outcomes were within-subject Likert differences (Δ = GPT − physician, with negative Δ favoring GPT since lower scores indicate stronger agreement) analyzed with Wilcoxon signed-rank tests.

Results: Among 66 participants (51 patients; 15 physicians), patients preferred GPT in 71/104 comparisons (68.3%, p = 0.00025), while physicians preferred GPT in 31/52 (59.6%, p = 0.21), with no significant difference between groups (p = 0.82). Preferences varied by scenario: patients showed no significant differences in S1 or S2 (52.6% and 58.6% preferring GPT; both p > 0.45), strongly favored GPT in S3 (85.7%, p = 0.00018), and significantly favored GPT in S4 (71.4%, p = 0.036). Physician preference for GPT was near equipoise for all scenarios (36.4–75.0%, all p ≥ 0.15). Patient ratings favored GPT for helpfulness (Δ = −0.36, p = 0.009), empathy (Δ = −0.46, p = 0.009), and informativeness (Δ = −0.42, p = 0.013). Physician ratings showed no significant differences for helpfulness (Δ = −0.10, p = 0.59) or informativeness (Δ = +0.18, p = 0.57), with a trend toward higher empathy for GPT (Δ = −0.44, p = 0.058).

Conclusion: Patients showed a strong overall preference for GPT-generated recommendations, with scenario-specific differences. Physicians demonstrated a smaller, nonsignificant preference for GPT. While ratings were numerically close across all domains, patients consistently judged GPT responses as more helpful, empathetic, and informative, whereas physicians rated the two sources similarly. These findings suggest that physicians could combine GPT-generated responses preferred by patients with their own recommendations to improve clinical communication.

Citation Format: Aaron Segura, Bernard Tawfik, Ben Liem, Zoneddy Dayao, David Y. Lee, Jacklyn M. Nemunaitis, David Savage, Moises Harari-Turquie, Nicole Hill, Charles Foucar, Amy Tarnower, Dulcinea Quintana, Jude Khatib, Thomas Schroeder, Martha Mapalo, Yolanda Sanchez, Ramesh Gopal. Patients prefer ChatGPT to physician responses in cancer communication [abstract]. In: Proceedings of the American Association for Cancer Research Annual Meeting 2026; Part 1 (Regular Abstracts); 2026 Apr 17-22; San Diego, CA. Philadelphia (PA): AACR; Cancer Res 2026;86(7 Suppl):Abstract nr 2745.
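The primary analysis is a two-sided binomial test of the preference proportion against 50%. As a minimal sketch (assuming the standard exact two-sided binomial test, which the abstract does not specify in detail), the reported patient (71/104) and physician (31/52) results can be checked with only the standard library:

```python
from math import comb

def binom_two_sided(k, n):
    """Exact two-sided binomial test against p = 0.5.

    Because p = 0.5 makes the distribution symmetric, the two-sided
    p-value is twice the upper-tail probability when k > n/2.
    """
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Patients preferred GPT in 71 of 104 paired comparisons (reported p = 0.00025)
p_patients = binom_two_sided(71, 104)

# Physicians preferred GPT in 31 of 52 comparisons (reported p = 0.21)
p_physicians = binom_two_sided(31, 52)

print(f"patients:   p = {p_patients:.5f}")
print(f"physicians: p = {p_physicians:.3f}")
```

The patient p-value falls well below 0.001 while the physician p-value does not approach significance, matching the abstract's contrast between a strong patient preference and physician equipoise.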
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,549 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,443 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,941 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,792 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations