OpenAlex · Updated hourly · Last updated: 08.04.2026, 16:12

This is an overview page with metadata for this scientific work. The full article is available from the publisher.

Explainable AI for government: Does the type of explanation matter to the accuracy, fairness, and trustworthiness of an algorithmic decision as perceived by those who are affected?

2024 · 22 citations · Government Information Quarterly · Open Access
Open full text at publisher

22 citations · 4 authors · 2024

Abstract

Amidst concerns over biased and misguided government decisions arrived at through algorithmic treatment, it is important for members of society to be able to perceive that public authorities are making fair, accurate, and trustworthy decisions. Inspired in part by equity and procedural justice theories and by theories of attitudes towards technologies, we posited that the perception of these attributes of decisions is influenced by the type of explanation offered, which can be input-based, group-based, case-based, or counterfactual. We tested our hypotheses with two studies, each of which involved a pre-registered online survey experiment conducted in December 2022. In both studies, the subjects (N = 1200) were officers in high positions at stock companies registered in Japan, who were presented with a scenario consisting of an algorithmic decision made by a public authority: a ministry's decision to reject a grant application from their company (Study 1) and a tax authority's decision to select their company for an on-site tax inspection (Study 2). The studies revealed that offering the subjects some type of explanation had a positive effect on their attitude towards a decision, to various extents, although the detailed results of the two studies are not robust. These findings call for a nuanced inquiry, both in research and practice, into how to best design explanations of algorithmic decisions from societal and human-centric perspectives in different decision-making contexts.

Highlights

• Adverse algorithmic decisions imposed by public authorities are discussed.
• The perceived fairness, accuracy, and trustworthiness of such decisions are examined.
• These attitudes depend on the type of explanation provided by explainable AI.
• The effects of the types of explanations on attitudes differ among decision domains.
• These effects can be taken into account when designing and offering explanations.


Topics

Explainable Artificial Intelligence (XAI) · Ethics and Social Impacts of AI · Artificial Intelligence in Healthcare and Education