This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Artificial integrity: social performativity of honesty in the age of generative AI
Citations: 0
Authors: 1
Year: 2026
Abstract
This article analyzes the paradoxical phenomenon in which students extensively use generative AI for academic work while sincerely maintaining that their submissions are honest and original. Beyond simple confusion or concealment, it introduces artificial integrity: a techno-ethical dilemma arising from technologically scaffolded, knowing self-deception. Drawing on dramaturgical analysis, narrative identity theory, and recent empirical research, it develops a framework that reveals how integrity is socially performed and stabilized within ambiguous institutional ecologies. The analysis demonstrates that students, while remaining aware of AI's core intellectual labor, sustain credible honesty claims through epistemic layering, which manifests in strategic disclosure, resistance to transparency, and persistent anxiety. This condition is co-produced by institutional designs that prioritize polished outputs over visible process, creating a rationalization space in which traditional legal-ethical frameworks for authorship and accountability break down. Rather than policing AI use, this article argues that institutions must develop clear, legally sound AI-use policies and redesign assessment to mandate transparency through methods such as process portfolios, reflective annotations, and structured disclosure protocols, thereby resetting the academic stage to reward visible cognition over performative authorship.
Related works
The global landscape of AI ethics guidelines
2019 · 4,726 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,886 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,513 citations
Fairness through awareness
2012 · 3,302 citations
AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations
2018 · 3,203 citations