OpenAlex · Updated hourly · Last updated: April 24, 2026, 13:39

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

1010: STUDENT CRITIQUE OF AI-GENERATED RESPONSES TO A DRUG INFORMATION INQUIRY

2026 · 0 citations · Critical Care Medicine
Open full text at the publisher

Citations: 0 · Authors: 2 · Year: 2026

Abstract

Introduction: Faculty designed a drug information (DI) assignment incorporating generative artificial intelligence (AI). This evaluation aimed to assess students' critiques of an initial AI-generated response to a DI question.

Methods: Third-year pharmacy students (P3s) enrolled in the Critical Care Pharmacy Elective critically analyzed AI-generated outputs to a DI prompt. Students rated their agreement with the AI output across 5 areas on a 4-point Likert scale (Strongly Agree to Strongly Disagree). Critiques were collected via a Qualtrics survey, with multiple submissions allowed if more than one AI tool was used. Students were also given space to describe their overall critique. After completing the critique, students were responsible for revising and improving the initial AI-generated content to develop a final DI response. Survey results were summarized using descriptive statistics.

Results: Twenty-nine P3 students were enrolled in the course and submitted 46 AI output critiques. The most commonly used AI tool was Microsoft Copilot (29/46, 62%), followed by OpenEvidence (10/46, 21%). In most critiques, students strongly or somewhat agreed that the recommendations were comprehensive (42/46, 91%) and specific (41/46, 89%). In 37 of 46 critiques (80%), students felt efficacy and safety were adequately addressed; in 74% (34/46), students strongly or somewhat agreed that the data were consistent and reliable, although in 15% (7/46) they strongly disagreed with this statement. In many critiques (36/46, 78%), students strongly agreed that the information was relevant to the DI question. Overall, students reported being somewhat satisfied with the AI output in most critiques (27/46, 59%), very satisfied in 33% (15/46), and somewhat dissatisfied in three (3/46, 7%). Students indicating dissatisfaction cited inconsistencies and insufficient detail in the output and, therefore, had concerns about using the AI-generated information for patient care decisions.

Conclusions: Student pharmacists found AI-generated responses to DI questions to be generally comprehensive, specific, and relevant, with moderate to high satisfaction with the output. Some concerns remain about data consistency and reliability, highlighting the need for students to intervene to improve the response.


Topics

Artificial Intelligence in Healthcare and Education · Simulation-Based Education in Healthcare · Electronic Health Records Systems