This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Mediating Community-AI Interaction through Situated Explanation
Citations: 40
Authors: 2
Year: 2020
Abstract
Artificial intelligence (AI) has become prevalent in our everyday technologies and impacts both individuals and communities. The explainable AI (XAI) scholarship has explored the philosophical nature of explanation and technical explanations, which are usually driven by experts in lab settings and can be challenging for laypersons to understand. In addition, existing XAI research tends to focus on the individual level. Little is known about how people understand and explain AI-led decisions in the community context. Drawing from XAI and activity theory, a foundational HCI theory, we theorize how explanation is situated in a community's shared values, norms, knowledge, and practices, and how situated explanation mediates community-AI interaction. We then present a case study of AI-led moderation, where community members collectively develop explanations of AI-led decisions, most of which are automated punishments. Lastly, we discuss the implications of this framework at the intersection of CSCW, HCI, and XAI.
Related Works
The global landscape of AI ethics guidelines
2019 · 4,660 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,879 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,485 citations
Fairness through awareness
2012 · 3,296 citations
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,184 citations