This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Artificial intelligence-simplified information to advance reproductive genetic literacy and health equity
5
Citations
10
Authors
2025
Year
Abstract
STUDY QUESTION: Can artificial intelligence (AI) and large language models (LLMs) effectively simplify patient education materials (PEMs) to advance reproductive genetic literacy and health equity?

SUMMARY ANSWER: LLMs offer a promising approach to support healthcare professionals in generating effective, simplified PEMs.

WHAT IS KNOWN ALREADY: Reproductive genetic testing and counseling hold the potential to support a personalized approach to reducing the burden of genetic disorders. However, uptake remains limited due to the complexity of the tests and the way PEMs have been designed. This is especially prominent in reproductive genetic testing, as the vulnerability of patients may lead to over- or under-use of genetic testing technologies.

STUDY DESIGN, SIZE, DURATION: We carried out a comparative observational study to evaluate the capacity of four AI/LLMs to simplify PEMs (n = 30) in reproductive genetics and to assess the clinical accuracy of the simplified versions (n = 120) via expert review (n = 30). Additionally, we devised a graphical user interface (GUI) to support real-time text simplification and readability analysis.

PARTICIPANTS/MATERIALS, SETTING, METHODS: We collected 30 PEMs covering six topics in reproductive genetics from well-recognized platforms such as the WHO, MedlinePlus, and Johns Hopkins. Each PEM was processed by four AI/LLMs (GPT-3.5, GPT-4, Copilot, Gemini) using a fixed prompt, resulting in 120 simplified outputs. We measured readability improvements using five validated metrics, such as the simple measure of gobbledygook (SMOG), each capturing distinct textual characteristics such as sentence length and word complexity. To evaluate the clinical reliability of the simplified outputs, a panel of experts (n = 30) in reproductive genetics independently scored each text (three per text).
MAIN RESULTS AND THE ROLE OF CHANCE: All four LLMs significantly improved the readability of the PEMs (P-values < 0.001), reducing text complexity to an average 6th-7th grade reading level. While Gemini and Copilot achieved the greatest improvement in readability scores, GPT-4 received the highest expert ratings across all criteria: accuracy (4.1 ± 0.9), completeness (4.2 ± 0.8), and relevance of omissions (4.0 ± 0.9; P < 10⁻⁸). These findings highlight the importance of balancing readability with content integrity to support informed decision-making, as excessive simplification may compromise essential medical information. We devised an open-access GUI that provides real-time PEM simplification and readability analysis to support the integration of AI-assisted approaches in clinical practice (https://huggingface.co/spaces/CellularGenomicMedicine/HealthLiteracyEvaluator).

LIMITATIONS, REASONS FOR CAUTION: Careful evaluation of LLM-simplified PEMs is required to ensure that simplification does not lead to omission of critical information. In addition, this study reports only the readability improvements of AI-generated texts and expert evaluations. To truly assess the potential of these tools in advancing reproductive genetic literacy and promoting health equity, real-world patient feedback is essential.

WIDER IMPLICATIONS OF THE FINDINGS: Integrating AI/LLMs into patient education strategies may advance health equity by improving understanding and facilitating informed decision-making, thereby enabling more effective engagement of patients in reproductive genetic testing programs.

STUDY FUNDING/COMPETING INTEREST(S): The EVA specialty program (KP111513) of MUMC+, the Horizon Europe (NESTOR-101120075), the Estonian Research Council (PRG1076), and the Horizon 2020 innovation (ERIN-EU952516) grants of the European Commission; the Swedish Research Council (grant no. 2024-02530); and the Novo Nordisk Foundation (grant no. NNF24OC0092384). The authors declare no conflict of interest relevant to this study.

TRIAL REGISTRATION NUMBER: N/A.
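The simple measure of gobbledygook (SMOG) named in the methods can be sketched as follows. This is a minimal illustration of the standard SMOG formula, not the study's actual evaluation pipeline; the vowel-group syllable heuristic is a simplifying assumption (production tools use pronunciation dictionaries).

```python
import re
from math import sqrt

def count_syllables(word: str) -> int:
    # Crude heuristic: count groups of consecutive vowels (incl. y).
    # Assumption for illustration only; miscounts words like "queue".
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def smog_grade(text: str) -> float:
    """Standard SMOG formula:
    grade = 1.043 * sqrt(polysyllables * 30 / sentences) + 3.1291
    where polysyllables are words with three or more syllables.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    polysyllables = sum(1 for w in words if count_syllables(w) >= 3)
    return 1.043 * sqrt(polysyllables * 30 / len(sentences)) + 3.1291

# A text with no three-syllable words scores the formula's floor, 3.1291.
print(round(smog_grade("The cat sat. The dog ran."), 4))  # → 3.1291
```

A simplification pipeline of the kind described would compare this grade before and after LLM processing; a drop toward 6-7 corresponds to the 6th-7th grade reading level reported above.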
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,663 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,576 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,091 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,859 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations