This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Author response to: Comment on: Will collaborative publishing with ChatGPT drive academic writing in the future?
2 citations
3 authors
Year: 2023
Abstract
The emphasis of Ariyaratne et al. [2] was that ChatGPT can play a role in collaborative research rather than function as an independent generator of research articles. It is acknowledged that information produced by ChatGPT has the potential to be inaccurate and may even include fictitious references, as previously shown in a study published by Ariyaratne et al. [3]. Hence, current artificial intelligence (AI) tools should be used with caution and supervised by humans. It is accepted that efficiency should not be the sole incentive for incorporating AI tools into research; a focus on quality, integrity, and intellectual contribution is paramount. For this reason, the authors strongly feel that human input is still required in the field, even when advanced AI models are utilized, at least until these tools have evolved sufficiently to work with minimal human input.

AI can be broadly classified into three types: artificial narrow intelligence (able to solve a single focused problem), artificial general intelligence (currently a theoretical concept describing a model with intelligence similar to that of humans), and artificial super intelligence (also currently a theoretical concept, describing a system able to surpass human capabilities) [4]. AI language tools such as ChatGPT fall under the first category. When incorporated into a complex field such as academic research, which requires multiple problems to be addressed rather than the mere generation of a research paper, they therefore require human assistance and input.

Self-learning AI systems may learn from their mistakes and continuously improve without humans performing hard coding. This learning can be supervised, unsupervised, or reinforced. Self-learning AI tools can enable real-time analysis to identify and correct errors, thereby improving accuracy and efficiency.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,485 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,371 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,827 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,549 citations