This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Clinician Attitudes Towards LLMs in Healthcare: Preferences for Adoption and Oversight (Preprint)
0
Citations
8
Authors
2025
Year
Abstract
BACKGROUND
Large language models (LLMs) are rapidly entering health care, but limited empirical data exist on clinician perceptions. Understanding clinician attitudes is essential for responsible implementation as LLMs transition from experimental to routine tools.

OBJECTIVE
To characterize clinician perspectives on LLM use in health care, including exposure, knowledge, perceived clinical utility, safety and bias concerns, and oversight preferences.

METHODS
Design, Setting, and Participants: This cross-sectional survey was distributed online through a health care news platform mailing list. A total of 335 health care professionals responded, including attending physicians (68.7%), residents/fellows, nurse practitioners, physician assistants, and researchers. Most were aged 30 to 59 years (72.5%) and practiced in the Northeast (77.9%).
Exposures: Self-reported use or consideration of LLM tools in clinical practice.
Main Outcomes and Measures: Outcomes included LLM usage patterns, knowledge levels, perceived applications, safety and bias concerns, and preferences for regulatory oversight. Analyses included descriptive statistics, Wilcoxon rank sum tests, chi-square tests, and Spearman correlations.

RESULTS
Of 335 participants, 62.7% reported current or contemplated LLM use. Users reported significantly higher self-rated knowledge than nonusers (p<.001). Age was not associated with knowledge (r=–0.072; p=.188). Participants identified literature review (73.4%), decision support (57.0%), and patient communication (54.9%) as the most valuable applications. Concerns included decision errors (75.5%) and algorithmic bias (73.1%); nearly all respondents (96.4%) expressed concern about bias, and those who had observed bias reported higher concern levels (p<.001). Participants favored regulation by professional associations (65.4%) over technology companies (29.0%), with 87.8% supporting professional guidelines. Confidence in existing oversight was low, with 66.6% reporting none.

CONCLUSIONS
Clinicians show early adoption of LLMs for lower-risk tasks while expressing concerns about safety, bias, and governance. Respondents preferred professional organizations over industry for oversight. Successful integration of LLMs into health care will require careful planning, human supervision, transparent disclosure, and auditing to maximize benefits and minimize risks.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,626 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,532 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,046 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,843 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations