This is an overview page with metadata for this scientific work. The full article is available from the publisher.
1010: STUDENT CRITIQUE OF AI-GENERATED RESPONSES TO A DRUG INFORMATION INQUIRY
Citations: 0 · Authors: 2 · Year: 2026
Abstract
Introduction: Faculty designed a drug information (DI) assignment to incorporate generative artificial intelligence (AI). This evaluation aimed to assess the students’ critique of an initial AI-generated output to a DI question.

Methods: Third-year pharmacy students (P3s) enrolled in the Critical Care Pharmacy Elective critically analyzed AI-generated outputs to a DI prompt. Students were asked to evaluate the AI output by rating their agreement across 5 areas using a 4-point Likert scale (Strongly Agree to Strongly Disagree). Critiques were collected via a Qualtrics survey, with multiple submissions allowed if more than one AI tool was used. Additionally, students were given space to describe their overall critique. After students completed the critique, they were responsible for revising and improving upon the initial AI-generated content to develop a final DI response. Survey results were summarized using descriptive statistics.

Results: Twenty-nine P3 students were enrolled in the course, submitting 46 AI output critiques. The most commonly used AI tool was Microsoft Copilot (29/46, 62%), followed by OpenEvidence (10/46, 21%). Most students strongly or somewhat agreed that the recommendations were comprehensive (42/46, 91%) and specific (41/46, 89%). Thirty-seven students (37/46, 80%) felt efficacy and safety were adequately addressed, and 74% (34/46) strongly or somewhat agreed that the data were consistent and reliable, yet 15% (7/46) strongly disagreed with this statement. Many strongly agreed that the information was relevant to the DI question (36/46, 78%). Overall, most students were somewhat satisfied with the AI output (27/46, 59%), followed by very satisfied (15/46, 33%), with three students (3/46, 7%) reporting they were somewhat dissatisfied. Students indicating dissatisfaction with the AI response cited inconsistencies and insufficient detail in the output and, therefore, had concerns about utilizing the AI-generated information for patient care decisions.

Conclusions: The student pharmacists found AI-generated responses to DI questions to be generally comprehensive, specific, and relevant, with moderate to high satisfaction levels with the output. Some concerns remain about data consistency and reliability, highlighting the need for the student to intervene to improve the response.