This is an overview page with metadata for this academic work. An external link to the full text is currently unavailable.
A servant of two masters: How Academic Fears about Artificial Intelligence map to Employer Engagement
0 citations · 2 authors · 2024
Abstract
When OpenAI made its Large Language Model (LLM) based ChatGPT platform available to the public in November 2022, it was as a demonstration version rather than a full release. This did not stop an explosion of usage, and in January 2023 the app had “590 million visits from 100 million unique visitors.”1 This level of growth, which has been described as “unprecedented,”2 “remarkable,”3 “exponential,”4 and “phenomenal,”5 took the wider world by surprise. Little wonder, then, that educational establishments across levels and geographic boundaries have spent much of 2023 scrambling either to apply existing Learning and Teaching strategies to GenAI or to develop specific new strategies. What is also apparent with platforms such as ChatGPT is that the scope and ability of the free-to-access versions are evolving rapidly. The abilities of ChatGPT in February 2023 will be eclipsed entirely by the abilities of ChatGPT in February 2024. This speed of evolution is causing ongoing problems at universities, as there is a recognition that any rules and approaches must be future-proof. The integration of technology is occurring at an extraordinarily rapid pace, and professionals must adjust and learn with an open mind, embracing and integrating change, if we are to transform what we do with students and employers. A second potential disrupter is the lawsuit filed by the New York Times in January 2024 against OpenAI and Microsoft, alleging copyright infringement by the data-scraping software used by ChatGPT and Bard.6 The case is contested by OpenAI, and whatever the outcome, it will doubtless have an impact. This article focuses on discussions of the use of GenAI in the healthcare and legal professions and in the university programmes developing graduates for these professions. The risk posed by academia responding inappropriately or too slowly is that graduates will not be prepared for the industries in which they hope to work.
We identify key themes and key implications for business and academia, and consider the application of the BATTEL model,7 developed between 2019 and 2021, as a mechanism for steering the appropriate use of AI. We conclude that, if handled properly, AI is far from being an existential threat to the university sector and instead represents a unique opportunity to create a new paradigm of education.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,436 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,311 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,753 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,523 citations