OpenAlex · Updated hourly · Last updated: 12.04.2026, 04:34

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Do Explanations Improve the Quality of AI-assisted Human Decisions? An Algorithm-in-the-Loop Analysis of Factual & Counterfactual Explanations

2023 · 1 citation

Citations: 1
Authors: 3
Year: 2023

Abstract

The increased use of AI algorithmic aids in high-stakes decision making has prompted interest in explainable AI (xAI) and in the role of counterfactual explanations in increasing trust in human-algorithm collaborations and mitigating unfair outcomes. However, research on how explainable AI improves human decision-making remains limited. We conduct an online experiment with 559 participants, utilizing an "algorithm-in-the-loop" framework and real-world pre-trial data, to investigate how explanations of algorithmic pretrial risk assessments generated by state-of-the-art machine learning explanation methods (counterfactual explanations via DiCE and factual explanations via SHAP) influence the quality of decision-makers' assessments of recidivism. Our results show that counterfactual and factual explanations achieve different desirable goals (they separately improve human assessment of model accuracy, fairness, and calibration), yet still fall short of improving the combined accuracy, fairness, and reliability of human predictions, reaffirming the need for sociotechnical, empirical evaluations in xAI. We conclude with user feedback on DiCE counterfactual explanations and a discussion of the broader implications of our results for AI-assisted decision-making and xAI.
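To illustrate the difference between the two explanation styles the abstract contrasts, the sketch below shows the basic idea behind a counterfactual explanation: answering "what minimal change to the input would flip the model's prediction?" This is a toy example, not the paper's code or the DiCE API; the risk model, its weights, and the threshold are all hypothetical.

```python
def risk_score(prior_arrests: float, age: float) -> float:
    """Hypothetical linear recidivism-risk model (weights are made up)."""
    return 0.08 * prior_arrests - 0.005 * age + 0.4


def counterfactual_prior_arrests(prior_arrests: float, age: float,
                                 threshold: float = 0.5) -> float:
    """Smallest prior-arrest value at which the prediction flips to 'low risk'.

    Solves 0.08 * x - 0.005 * age + 0.4 = threshold for x. This mirrors the
    kind of statement a DiCE-style counterfactual explanation presents to a
    user: "if prior arrests were below X, the risk label would change."
    """
    return (threshold + 0.005 * age - 0.4) / 0.08


# A defendant with 4 prior arrests at age 30 scores above the 0.5 threshold,
# so the model labels them high risk; the counterfactual tells the
# decision-maker how far the feature is from flipping that label.
score = risk_score(prior_arrests=4, age=30)
flip_at = counterfactual_prior_arrests(prior_arrests=4, age=30)
```

A factual (SHAP-style) explanation would instead attribute the score to each feature's contribution; the study's point is that the two styles support different aspects of the human's judgment.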

Related works

Authors

Institutions

Topics

Explainable Artificial Intelligence (XAI) · Artificial Intelligence in Healthcare and Education · Ethics and Social Impacts of AI