This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Social bias in artificial intelligence algorithms designed to improve cardiovascular risk assessment relative to the Framingham Risk Score: a protocol for a systematic review
Citations: 3
Authors: 2
Year: 2023
Abstract
INTRODUCTION: Cardiovascular disease (CVD) prevention relies on timely identification of and intervention for individuals at risk. Risk assessment models such as the Framingham Risk Score (FRS) have been shown to over-estimate or under-estimate risk in certain groups, such as socioeconomically disadvantaged populations. Artificial intelligence (AI) and machine learning (ML) could be used to address such equity gaps to improve risk assessment; however, critical appraisal is warranted before ML-informed clinical decision-making is implemented.

METHODS AND ANALYSIS: This study will employ an equity lens to identify sources of bias (ie, race/ethnicity, gender and social stratum) in ML algorithms designed to improve CVD risk assessment relative to the FRS. A comprehensive literature search will be completed using MEDLINE, Embase and IEEE to answer the research question: do AI algorithms that are designed for the estimation of CVD risk and that compare performance with the FRS address the sources of bias inherent in the FRS? No study date filters will be imposed on the search, but English language filters will be applied. Studies describing a specific algorithm or ML approach that provided a risk assessment output for coronary artery disease, heart failure, cardiac arrhythmias (ie, atrial fibrillation), stroke or a global CVD risk score, and that compared performance with the FRS are eligible for inclusion. Papers describing algorithms for the diagnosis rather than the prevention of CVD will be excluded. A structured narrative review analysis of included studies will be completed.

ETHICS AND DISSEMINATION: Ethics approval was not required. Ethics exemption was formally received from the General Research Ethics Board at Queen's University. The completed systematic review will be submitted to a peer-reviewed journal and parts of the work will be presented at relevant conferences.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,646 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,554 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,071 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,851 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations