This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
‘JUST A COOL THING’? EXPLORING THE DISCURSIVE CONSTRUCTION OF POLARISATION IN EARLY PUBLIC STATEMENTS ON CHATGPT
Citations: 0
Authors: 1
Year: 2025
Abstract
Over the last two years, we have witnessed the rapid development of Large Language Models (LLMs). These increasingly advanced artificial intelligence systems use human language to perform a wide range of tasks and generate human-like responses to virtually any question, with an unprecedented potential for application. At the same time, they raise significant ethical concerns related to authorship, misinformation and data privacy, as well as fairness and representation in language use. The language model that irreversibly popularised these systems is ChatGPT, released in late 2022 and adopted on a large scale over the first half of 2023. It can assist with cognitive tasks, such as translation or the generation of textual, visual or audio content on any topic, providing near-instant access to vast stores of human knowledge. Since its introduction, as humanity still struggles to overcome resistance to the paradigm change brought about by digital innovation, the topic has sparked an increasingly polarised debate about its possible social consequences, largely dividing society into ‘techno-optimists’ and ‘techno-pessimists’. This article proposes a discourse analysis of the early phases of the global conversation on ChatGPT, highlighting the roots of the polarised views on this topic. The data for analysis consist of a selection of public statements made by the leading public figures Sam Altman and Noam Chomsky in 2023 and early 2024, reflecting perspectives on the state of affairs in the period following the program's launch. The analysis aims to expose how this polarisation is constructed at the level of discourse, with a view to outlining a series of features of the LLM that underlie the way it is perceived by society at large.
Methodologically, data interpretation follows a three-dimensional framework: (1) a componential analysis of speech acts, drawing on traditional speech act theories with particularisations required by the data; (2) a lexical-semantic analysis examining how the local meanings of selected words build into larger sense systems; and (3) Critical Discourse Analysis (CDA), examining the representation of agency. The study further seeks to provide a reflective basis for an optimal engagement with AI-LLMs.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,402 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,270 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,702 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,507 citations