This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Bias Mitigation in Machine Learning Models Trained on Structured Health Data: A Systematic Review
Citations: 0
Authors: 5
Year: 2026
Abstract
Machine learning (ML) models for disease diagnostics and prognostics have been widely used in medicine, leveraging real-world clinical data and benefiting both clinicians and patients. However, the use of these models still faces significant challenges, such as systematic bias in predictions across demographic groups, which can exacerbate disparities against minorities. This Systematic Literature Review (SLR) examines computational methods for mitigating bias in ML models trained on structured data for disease diagnosis and prognosis. Following PRISMA guidelines, we systematically extracted information from primary studies. We defined three research questions focused on the protected attributes used for group splitting in bias evaluation, the fairness metrics applied to measure bias, and the bias mitigation strategies applied, which we categorized by stage of the ML pipeline: pre-processing, in-processing, and post-processing. Of the 2,064 studies retrieved, 26 were included in the review. The most common protected attribute was race/ethnicity, followed by gender/sex and age. The most frequently used fairness metrics were demographic parity, equalized odds, AUROC disparity, and equal opportunity difference. Among mitigation techniques, the most common were the pre-processing method of reweighting, the in-processing method of regularization with fairness constraints, and the post-processing method of threshold adjustment. The findings indicate that most bias mitigation methods effectively reduce group discrepancies with minimal impact on accuracy. This paper provides insights for developing fair ML models in healthcare, highlights existing gaps in the field, and enhances healthcare professionals' understanding of bias mitigation in ML.
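To make the abstract's terminology concrete, here is a minimal, illustrative sketch (not taken from the reviewed studies) of two of the fairness metrics named above and of reweighting, the pre-processing mitigation the review found most common. The function names, data, and binary encoding of the protected attribute are hypothetical assumptions for illustration.

```python
from collections import Counter

def demographic_parity_diff(y_pred, group):
    """|P(yhat=1 | g=0) - P(yhat=1 | g=1)|; 0 means demographic parity."""
    def rate(g):
        preds = [p for p, a in zip(y_pred, group) if a == g]
        return sum(preds) / len(preds)
    return abs(rate(0) - rate(1))

def equal_opportunity_diff(y_true, y_pred, group):
    """Absolute gap in true-positive rates between the two groups."""
    def tpr(g):
        preds = [p for t, p, a in zip(y_true, y_pred, group) if a == g and t == 1]
        return sum(preds) / len(preds)
    return abs(tpr(0) - tpr(1))

def reweighting(y_true, group):
    """Reweighting in the style of Kamiran & Calders: weight each example by
    P(a) * P(y) / P(a, y), so that the protected attribute and the label are
    statistically independent in the weighted training set."""
    n = len(y_true)
    count_a = Counter(group)
    count_y = Counter(y_true)
    count_ay = Counter(zip(group, y_true))
    return [count_a[a] * count_y[y] / (n * count_ay[(a, y)])
            for a, y in zip(group, y_true)]
```

With a perfectly balanced training set (each group/label combination equally frequent), `reweighting` returns a weight of 1.0 for every example; the more a combination is over-represented, the smaller its weight becomes.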
Similar Works
"Why Should I Trust You?"
2016 · 14,396 citations
A Comprehensive Survey on Graph Neural Networks
2020 · 8,729 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,270 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,702 citations
Artificial intelligence in healthcare: past, present and future
2017 · 4,437 citations