This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Abstract P2-02-24: Evaluating ChatGPT as an educational resource for patients with Breast cancer: A preliminary investigation
Citations: 0
Authors: 10
Year: 2025
Abstract
Abstract Introduction: Breast cancer (BC) is the most commonly diagnosed cancer among women in the United States, representing about 30% of new cases annually. Patients with BC encounter various challenges, such as understanding the disease, exploring treatment options, managing prognosis, coping with side effects, and accessing supportive care. Educating patients is crucial for empowering them to make informed healthcare decisions. With the evolution of artificial intelligence (AI), there is potential to leverage these technologies for patient education. ChatGPT, an AI-based language model, is a promising avenue in this regard. While ChatGPT has been applied across diverse domains, its potential in medicine is currently under exploration. This study aims to evaluate ChatGPT's effectiveness as an educational tool for BC patients, assessing its accuracy and safety in delivering medical information.Methods: We designed a comprehensive questionnaire with 22 questions covering various aspects of breast cancer—from diagnosis to treatment options and prognosis. This questionnaire served as the prompt for our study utilizing OpenAI's ChatGPT version 3.5.0 to generate responses. The accuracy of these responses was meticulously evaluated by nine Breast Medical Oncologists (BMOs): four from Roswell Park Comprehensive Cancer Center, four from Ohio State University, and one from the Medical College of Wisconsin. Each expert independently assessed the responses provided by ChatGPT and categorized them as accurate, inaccurate, or harmful.Results: We found that sixteen of these questions (16/22) 73% received unanimous agreement from all BMOs for accuracy. Only one question (1/22) 4% was deemed harmful, and five questions (5/22) 23% graded as inaccurate by some of BMOs due to insufficient or misleading information. 
Among the responses that received criticism, the answer to "Are there any alternative or complementary therapies that may help with breast cancer?" was flagged as harmful by 2/9 (22%) of BMOs and as inaccurate by another 2/9 (22%). Another question, asking about the different types of breast cancer, was graded inaccurate by 6/9 (67%) of experts because the response omitted several breast cancer types. Additionally, 3/9 (33%) of BMOs found the response regarding dietary recommendations for breast cancer patients inaccurate, highlighting the lack of evidence supporting dietary interventions in metastatic breast cancer. The advice on preserving hair during chemotherapy was rated inaccurate by 3/9 (33%) of experts because some of the recommendations, such as maintaining a good diet and using a mild shampoo, do not effectively prevent chemotherapy-induced alopecia and could raise false hopes among patients. Similarly, the explanation of HER2-positive breast cancer and its prognosis was labeled inaccurate by 2/9 (22%) of BMOs due to misleading statements about recurrence rates. Lastly, 3/9 (33%) of experts criticized the discussion of potential side effects of treatment modalities for omitting important information, such as the cardiovascular toxicity associated with chemotherapy. Our study has limitations, including a small sample size, subjective evaluation criteria, and the rapid evolution of ChatGPT and other large language models.

Conclusion: While ChatGPT shows potential as an educational resource for BC patients, with 73% of answers graded accurate by all BMOs, it is essential to recognize its limitations and the indispensable role of human medical expertise. These findings underscore the variability in accuracy and appropriateness of AI-generated responses in medical contexts, highlighting the importance of refining and validating AI tools for patient education and information dissemination.
Safety considerations include the use of complex medical terminology in responses and the dissemination of sensitive information without empathy or emotional support.

Citation Format: Zunairah Shah, Arya Mariam Roy, Varsha Gupta, Nerea Lopetegui Lia, Dionisia Quiroga, Gilbert Bader, Sheheryar Kabraji, Lubna N. Chaudhary, Ellis Levine, Shipra Gandhi. Evaluating ChatGPT as an educational resource for patients with Breast cancer: A preliminary investigation [abstract]. In: Proceedings of the San Antonio Breast Cancer Symposium 2024; 2024 Dec 10-13; San Antonio, TX. Philadelphia (PA): AACR; Clin Cancer Res 2025;31(12 Suppl):Abstract nr P2-02-24.