This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Readability and Information Quality in Cancer Information From a Free vs Paid Chatbot
25
Citations
9
Authors
2024
Year
Abstract
Importance: The mainstream use of chatbots requires a thorough investigation of their readability and quality of information.

Objective: To identify readability and information quality differences between a free and a paywalled chatbot's cancer-related responses, and to explore whether more precise prompting can mitigate any observed differences.

Design, Setting, and Participants: This cross-sectional study compared the readability and information quality of a chatbot's free vs paywalled responses to Google Trends' top 5 search queries associated with breast, lung, prostate, colorectal, and skin cancers from January 1, 2021, to January 1, 2023. Data were extracted from the search tracker, and responses were produced by free and paywalled ChatGPT. Data were analyzed from December 20, 2023, to January 15, 2024.

Exposures: Free vs paywalled chatbot outputs with and without the prompt: "Explain the following at a sixth grade reading level: [nonprompted input]."

Main Outcomes and Measures: The primary outcome measured the readability of the chatbot's responses using Flesch Reading Ease scores (0 [graduate reading level] to 100 [easy fifth grade reading level]). Secondary outcomes included assessing consumer health information quality with the validated DISCERN instrument (overall score from 1 [low quality] to 5 [high quality]) for each response. Scores were compared between the 2 chatbot models with and without prompting.

Results: This study evaluated 100 chatbot responses. Nonprompted free chatbot responses had lower readability (median [IQR] Flesch Reading Ease score, 52.60 [44.54-61.46]) than nonprompted paywalled chatbot responses (62.48 [54.83-68.40]) (P < .05). However, prompting the free chatbot to reword responses at a sixth grade reading level was associated with higher reading ease scores than the paywalled chatbot's nonprompted responses (median [IQR], 71.55 [68.20-78.99]) (P < .001). Prompting was associated with increased reading ease in both the free (median [IQR], 71.55 [68.20-78.99]; P < .001) and paywalled versions (median [IQR], 75.64 [70.53-81.12]; P < .001). There was no significant difference in overall DISCERN scores between the chatbot models, with or without prompting.

Conclusions and Relevance: In this cross-sectional study, paying for the chatbot was found to provide easier-to-read responses, but prompting the free version of the chatbot was associated with increased response readability without changing information quality. Educating the public on how to prompt chatbots may help promote equitable access to health information.
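The Flesch Reading Ease metric used in this study follows a standard published formula: 206.835 − 1.015 × (words per sentence) − 84.6 × (syllables per word). A minimal sketch of how such a score can be computed is shown below; the regex-based tokenization and the vowel-group syllable heuristic are simplifying assumptions, not the validated scoring tool the authors used.

```python
import re


def count_syllables(word: str) -> int:
    # Naive heuristic: count groups of consecutive vowels (incl. y).
    # Real readability tools use dictionary-based syllabification.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))


def flesch_reading_ease(text: str) -> float:
    # Standard Flesch Reading Ease formula:
    #   206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/words)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    n_sent = max(1, len(sentences))
    n_words = max(1, len(words))
    return 206.835 - 1.015 * (n_words / n_sent) - 84.6 * (syllables / n_words)
```

Under this heuristic, short sentences of common monosyllabic words score high (easy), while long, polysyllabic sentences score low, which is the contrast the study's median scores (e.g., 52.60 vs 71.55) capture.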
Related Works
The content validity index: Are you sure you know what's being reported? critique and recommendations
2006 · 6,238 citations
Improving the Quality of Web Surveys: The Checklist for Reporting Results of Internet E-Surveys (CHERRIES)
2004 · 6,229 citations
Health literacy and public health: A systematic review and integration of definitions and models
2012 · 5,915 citations
Low Health Literacy and Health Outcomes: An Updated Systematic Review
2011 · 5,295 citations
Health literacy as a public health goal: a challenge for contemporary health education and communication strategies into the 21st century
2000 · 4,996 citations