This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
When Your Only Tool Is A Hammer
Citations: 12
Authors: 4
Year: 2020
Abstract
It is no longer a hypothetical worry that artificial intelligence, more specifically machine learning (ML), can propagate the effects of pernicious bias in healthcare. To address these problems, some have proposed the development of 'algorithmic fairness' solutions. The primary goal of these solutions is to constrain the effect of pernicious bias on a given outcome of interest as a function of one's protected identity (i.e., characteristics generally protected by civil or human rights legislation). The technical limitations of these solutions have been well characterized. Ethically, the problematic implication, on the part of developers and, potentially, end users, is that by virtue of algorithmic fairness solutions a model can be rendered 'objective' (i.e., free from the influence of pernicious bias). The ostensible neutrality of these solutions may unintentionally prompt new consequences for vulnerable groups by obscuring downstream problems that stem from the persistence of real-world bias.
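To make the kind of constraint the abstract describes concrete, below is a minimal sketch, not taken from the paper, of a demographic-parity check: it compares a model's positive-prediction rate across groups defined by a protected attribute, the quantity a fairness constraint would typically bound. All names, the toy data, and the interpretation are illustrative assumptions.

```python
# Illustrative sketch (not from the paper): a demographic-parity check that
# compares a model's positive-prediction rate across groups defined by a
# protected attribute. All names and the toy data are assumptions.
from collections import defaultdict

def demographic_parity_gap(predictions, protected_attribute):
    """Return (gap, per-group rates), where gap is the largest difference
    in positive-prediction rates between any two protected groups."""
    positives = defaultdict(int)
    totals = defaultdict(int)
    for pred, group in zip(predictions, protected_attribute):
        totals[group] += 1
        positives[group] += int(pred == 1)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical example: predicted eligibility for a care-management program
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap, rates = demographic_parity_gap(preds, groups)
print(rates)           # {'a': 0.75, 'b': 0.25}
print(f"gap = {gap}")  # 0.5; a fairness constraint would bound this gap
```

Note that a check of this sort only measures disparity in the model's outputs; as the abstract argues, satisfying such a constraint does not by itself make a model 'objective' or remove real-world bias upstream or downstream of the prediction.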
Related Works
The Cochrane Collaboration's tool for assessing risk of bias in randomised trials
2011 · 33,901 citations
To Err Is Human
2000 · 14,088 citations
Strengthening the reporting of observational studies in epidemiology (STROBE) statement: guidelines for reporting observational studies
2007 · 9,792 citations
KDIGO 2024 Clinical Practice Guideline for the Evaluation and Management of Chronic Kidney Disease
2024 · 7,110 citations
Dissecting racial bias in an algorithm used to manage the health of populations
2019 · 5,842 citations