OpenAlex · Updated hourly · Last updated: 13.04.2026, 23:39

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Revealing the AI’s Reasoning in Human-in-the-Loop Systems: How Explanations Impact Human Feedback

2025 · 0 citations · Journal of the Association for Information Systems
Open full text at publisher

Citations: 0 · Authors: 4 · Year: 2025

Abstract

Integrating human feedback into Artificial Intelligence (AI) through Human-in-the-Loop (HITL) systems can leverage the complementary strengths of humans and AI. Rationale feedback, which addresses the AI’s reasoning, is receiving growing attention. Explainable AI (XAI) provides methods to automatically generate explanations alongside AI predictions; these explanations reveal the AI’s reasoning to users and thus offer potential support for providing rationale feedback. Our study investigates the impact of such explanations on how humans provide feedback to AI. We conducted a randomized online experiment in which participants provided feedback to an AI model on an image classification task, in response to either AI predictions alone (control group) or AI predictions with explanations (treatment group). Our results show that explanations increase user engagement, influence the content of feedback such that it more closely resembles the AI’s reasoning, and evoke confidence-driven variation in the extent to which rationale feedback resembles the AI’s reasoning.

Similar works

Authors

Topics

Explainable Artificial Intelligence (XAI) · Ethics and Social Impacts of AI · Artificial Intelligence in Healthcare and Education