OpenAlex · Updated hourly · Last updated: 13.04.2026, 00:44

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Uncovering Sentiments Expressed on GPT in Academic Papers

2023 · 0 citations
Open full text at publisher

Citations: 0
Authors: 5
Year: 2023

Abstract

The growing hype surrounding GPT models has generated both fear and excitement, with concerns about job replacement and admiration for their capabilities. This study investigated the prevailing sentiments within academia and aimed to provide a more comprehensive and objective analysis of that field. We conducted sentiment analysis on a corpus of peer-reviewed and preprinted article abstracts published between January 2022 and March 2023 to determine the early prevailing sentiments toward GPT models. We collected and processed 400+ academic papers on GPT models, extracting the abstracts and keywords to gain insight into the authors' perspectives. The study focused on identifying the positive, negative, and neutral sentiments in these articles. It considered various approaches, including RoBERTa and traditional machine learning models such as Naïve Bayes, Random Forest, and Support Vector Machine, to analyze the collected data and compare performance results. The results demonstrated that the predominant sentiment expressed in scholarly paper abstracts toward GPT models is neutral (60.2% of the sample) rather than polarized. This observation holds even when the confidence score of the model output is restricted to $>0.5$. The significance of this study lies in its novelty, as few articles have examined these sentiments. Understanding the various sentiments expressed in scholarly discourse on GPT models can contribute to further research on the ethical implications of generative AI.
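The confidence-thresholding and aggregation step described in the abstract can be sketched as follows. This is a minimal illustration with hypothetical model outputs, not the paper's actual data or code: it assumes each abstract yields a predicted sentiment label and a confidence score, filters to scores above 0.5 as in the paper's robustness check, and computes each sentiment's share of the remaining sample.

```python
from collections import Counter

# Hypothetical classifier outputs: (predicted label, confidence score) per abstract.
predictions = [
    ("neutral", 0.91), ("positive", 0.72), ("neutral", 0.45),
    ("negative", 0.66), ("neutral", 0.83), ("positive", 0.38),
]

# Keep only predictions whose confidence score exceeds the 0.5 threshold,
# mirroring the paper's check that the neutral majority persists above it.
confident = [(label, score) for label, score in predictions if score > 0.5]

# Share of each sentiment class among the confident predictions, in percent.
counts = Counter(label for label, _ in confident)
total = len(confident)
shares = {label: round(100 * n / total, 1) for label, n in counts.items()}
print(shares)  # e.g. {'neutral': 50.0, 'positive': 25.0, 'negative': 25.0}
```

With real data, the `predictions` list would come from a model such as RoBERTa applied to each abstract; the aggregation itself is unchanged.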

Topics

Topic Modeling · Artificial Intelligence in Healthcare and Education · Explainable Artificial Intelligence (XAI)