This is an overview page with metadata for this scholarly article. The full text is available from the publisher.
The perfect technological storm: artificial intelligence and moral complacency
13 Citations · 1 Author · 2024
Abstract
Artificially intelligent machines are different in kind from all previous machines and tools. While many are used for relatively benign purposes, the artificially intelligent machines we should care about, the ones worth focusing on, are those that purport to replace humans entirely and thereby engage in what Brian Cantwell Smith calls “judgment.” As impressive as artificially intelligent machines are, their abilities are still derived from humans and as such lack the sort of normative commitments that humans have. So while artificially intelligent machines possess a great capacity for “reckoning,” to use Smith’s terminology, i.e., a calculative prowess of extraordinary utility and importance, they still lack the kind of considered human judgment that accompanies the ethical commitment and responsible action we humans must ultimately aspire toward. But there is a perfect technological storm brewing. Artificially intelligent machines are analogous to a perfect storm in that they involve the convergence of a number of factors that threaten our ability to behave ethically and to maintain meaningful human control over the outcomes of processes involving artificial intelligence. I argue that this storm makes us vulnerable to what I call “moral complacency”: a state in which people abdicate responsibility for decision-making and behaviour precipitated by the use of artificially intelligent machines. I focus on three salient problems that converge to make us especially vulnerable to becoming morally complacent and losing meaningful human control. The first problem is that of transparency/opacity. The second is overtrust in machines, often referred to as automation bias. The third is the problem of ascribing responsibility. I examine each of these problems and how together they threaten to render us morally complacent.
Related works
The global landscape of AI ethics guidelines
2019 · 4,654 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,878 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,481 citations
Fairness through awareness
2012 · 3,296 citations
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,184 citations