This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Hallucinations in ChatGPT: An Unreliable Tool for Learning
Citations: 32
Authors: 3
Year: 2023
Abstract
Recently, ChatGPT was upgraded for its unsubscribed users to a newer version, ChatGPT 3.5. Although ChatGPT has become an astonishing phenomenon worldwide for generating realistic text within seconds, it can also disseminate wrong information and misconceptions. Technical experts have identified this problem as hallucination. This paper examines ChatGPT’s ability to differentiate between correct and incorrect relations in the questions posed to it. It also explores the efficacy of ChatGPT in helping students acquire linguistic and literary proficiency. The study took the form of exploratory interpretive research, with undergraduate students of English as participants. Data were collected through semi-structured interviews, focus group discussions (FGDs), and input provided to ChatGPT, and all data were analyzed qualitatively. The findings indicate that ChatGPT tends to provide inconsistent information when a series of contextual questions is asked. Because of this hallucination, ChatGPT becomes an unreliable source for language and literature learning.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,436 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,311 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,753 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,523 citations