This is an overview page with metadata for this scientific publication. The full article is available from the publisher.
Revealing the AI’s Reasoning in Human-in-the-Loop Systems: How Explanations Impact Human Feedback
Citations: 0
Authors: 4
Year: 2025
Abstract
Integrating human feedback into Artificial Intelligence (AI) through Human-in-the-Loop (HITL) systems can leverage the complementary strengths of humans and AI. Rationale feedback, which addresses the AI's reasoning, is receiving growing attention. Explainable AI (XAI) provides methods to automatically generate explanations alongside AI predictions; these explanations reveal the AI's reasoning to users and thus offer potential support for providing rationale feedback. Our study investigates the impact of such explanations on how humans provide feedback to AI. We conducted a randomized online experiment in which participants provided feedback to an AI model on an image classification task, responding either to AI predictions alone (control group) or to AI predictions with explanations (treatment group). Our results show that explanations increase user engagement, influence the content of feedback so that it more closely resembles the AI's reasoning, and evoke confidence-driven variations in the extent to which rationale feedback resembles the AI's reasoning.
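For intuition only (this sketch is not taken from the paper), a saliency-style explanation such as Grad-CAM, one of the related works listed below, can be generated alongside an image classifier's prediction roughly as follows. The model (resnet18), the target layer (layer4), the function name predict_with_explanation, and the random stand-in input are all assumptions made for illustration.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Pretrained classifier; model and target layer are illustrative choices,
# not the setup used in the study.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

activations, gradients = {}, {}

def save_activation(module, inputs, output):
    activations["feat"] = output.detach()
    # Tensor hook captures the gradient flowing back through this output.
    output.register_hook(lambda grad: gradients.update(feat=grad.detach()))

# Hook the last convolutional block (a typical Grad-CAM target layer).
model.layer4.register_forward_hook(save_activation)

def predict_with_explanation(x):
    """Return (class index, HxW Grad-CAM heatmap) for a 1x3xHxW input."""
    logits = model(x)
    cls = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, cls].backward()  # backprop the predicted class score
    # Weight each feature map by its spatially averaged gradient, apply
    # ReLU, upsample to the input resolution, and normalize to [0, 1].
    weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * activations["feat"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear",
                        align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cls, cam[0, 0]

# Example: a random tensor stands in for a preprocessed 224x224 image.
label, heatmap = predict_with_explanation(torch.randn(1, 3, 224, 224))
```

In a HITL setting such as the one studied, the resulting heatmap would be shown to treatment-group users next to the prediction, so their feedback can address the regions the model actually relied on.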
Similar Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,627 citations
Generative Adversarial Nets
2023 · 19,894 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,308 citations
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
2016 · 14,455 citations
On a Method to Measure Supervised Multiclass Model's Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,177 citations