This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Impact of Generative AI on Author’s Metrics and Copyright Ownership: Digital Labour, Ethical Attribution, and Traceability Frameworks for Future Internet Systems
0
Citations
7
Authors
2026
Year
Abstract
The integration of generative artificial intelligence (GAI) into digital learning environments is a profound socio-technical transformation. While GAI promises enhanced accessibility and efficiency, it simultaneously obscures the human creativity and intellectual labour that underpin digital knowledge production. This opacity limits creators’ visibility into how their work is used, evaluated, and monetised. This applied review investigates how several leading large language models, including ChatGPT (GPT-4o), Gemini (1.5 Flash), and DeepSeek (V3), interact with a creative platform hosting over 300 original essays, poems, and artworks by human creatives. Our review reveals that, despite clear evidence of the models engaging with original materials, the platform’s standard analytics record no attribution, referrals, or traceable interactions from these models, rendering creators’ labour invisible. This compels a critical examination of knowledge provenance and power within AI-mediated education. To address this, we propose a socio-technical framework, Chujoyi-TraceNet, not as a technical fix but as a mechanism to re-centre ethics, justice, and recognition in digital governance. By integrating real-time tracking, blockchain-enabled licensing, and metadata watermarking, Chujoyi-TraceNet operationalises the principles of equitable attribution. This study argues for a re-imagining of digital ecosystems in education, one that links the technical act of attribution to broader debates on digital labour, platform ethics, and the pursuit of social justice, thereby contributing to more democratic and accountable learning media in the era of Industry 4.0 and 5.0.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,418 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,288 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,726 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,516 citations