OpenAlex · Updated hourly · Last updated: 10.05.2026, 02:51

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Ethics in Machine Learning and Artificial Intelligence

2025 · 1 citation · Open Access
Open full text at the publisher

Citations: 1

Authors: 1

Year: 2025

Abstract

Recent theoretical and practical achievements in machine learning (ML) and, in particular, artificial neural networks have motivated ethical questions about their deployment. Attempts to answer these questions can sometimes lead to new ways of doing ethics and sometimes to misuses of old ways. In this chapter, I critically examine the nature of doing ethics in and for contemporary ML and artificial intelligence (AI). This will not be a complete treatment of the area, and many important topics will be left aside. Instead, I will keep to the topics that are central and aim to show that ethics is done best when integrated as a philosophical discipline. In the first section, I will begin with a brief technical discussion of the nature of the technology, before addressing some prominent epistemological problems that either underlie or exacerbate common ethical problems regarding bias and fairness. In the second section, I discuss the moral status of AI and how it bears on the problems of responsibility gaps and alignment. In the third and final section, I discuss the use of ethical theory in AI, and attendant problems of ethicswashing: the practice of appearing to be concerned to follow ethical approaches, but for purely instrumental reasons.

Machine Learning and Artificial Intelligence

The currently most efficacious paradigm of AI technology is ML. This is distinguished from what came to be called Symbolic AI or "Good Old-Fashioned AI" (GOFAI) (Haugeland, 1985). The distinction is based upon the differences in the approaches underlying these technologies. GOFAI relies on symbol manipulation, hardcoded logic, and structured databases, whereas ML relies on generalizations formed by algorithmically traversing training data. Comparisons have been made between this divide and the epistemological divide between rationalist and empiricist approaches to learning and knowledge (Buckner, 2024). GOFAI employs a high proportion of "native" programming and built-in information, while ML relies much less on in-built programming by a human operator, because its models gain their programming in part by being trained on input data.

The description "old-fashioned" in the term GOFAI should not be taken to indicate that it is the oldest form of AI. In fact, the current and most advanced paradigm of ML, employing artificial neural networks, uses what are essentially the same mathematical representations as those introduced by McCulloch and Pitts (1943). An artificial neuron is essentially a simple computer that has a number of components: input signals, perhaps including a constant bias signal; weights on those signals, which are simple multipliers; a summation of those weighted signals; and an activation function that "squashes" or constrains the outputs under a threshold or within some limits, for example, between 0 and 1 or between -1 and 1, translating the summed signal to this range, often in a nonlinear way. An artificial neural network is made by connecting many such artificial neurons, simple computers, together to form a network having one of various kinds of configurations. Such networks are capable of being trained, by modifying the weights, to produce certain outputs given certain inputs.
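To make that anatomy concrete, here is a minimal Python sketch of a single artificial neuron; the sigmoid activation and the particular weights are illustrative choices, not values the chapter prescribes:

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum of input signals,
    plus a constant bias, passed through a squashing activation."""
    # Weights are simple multipliers on the input signals.
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    # The sigmoid activation constrains the summed signal, nonlinearly,
    # to the range (0, 1).
    return 1.0 / (1.0 + math.exp(-total))

# Illustrative signals and hand-picked weights (not from the chapter):
print(neuron([0.5, -1.0], weights=[0.8, 0.3], bias=0.1))  # ~0.55
```

A network is then just many such units wired together, with each unit's output feeding the inputs of others.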
This approach to ML underwent a revival due to a number of breakthroughs and attendant technological advances over the past 40 or so years. Chief among these were the discovery, or rediscovery, of algorithms for gradient descent via backpropagation; the availability of relatively cheap and powerful parallel processors, graphics processing units (GPUs); and the availability of vast amounts of data on which to train such systems, produced and distributed via the internet and computer systems generally. Essentially and non-technically, gradient descent can be read as "incrementally reducing error," and backpropagation as "sending an error-correction signal back (by using the chain rule of calculus) through (the derivatives of) the functions that produced the original output, in order to modify those functions." It is like descending an unfamiliar mountain in thick fog: one can tell in which direction the steepest descent lies locally, from the bit of ground one is standing on at that moment. The mountain, in the ML case, is part of a high-dimensional terrain (a minimal numerical sketch of such a descent appears below).

The availability of GPUs arose rather serendipitously, in virtue of the demand for better graphics in computer games. Both graphics processing and simulations of artificial neural networks involve matrix operations, which are best computed in parallel. So, if one has played computer games and bought personal computers with graphics cards over the past 30 or so years, one has also been subsidizing the development of ML, among other uses such as cryptocurrency mining. This has even come full circle, as ML is now being used to upscale graphics efficiently to high resolutions. That is, pixels are being "predicted" rather than calculated formulaically. We have also come full circle with respect to data, which is "running out," in the sense that the networks have reached such a size that they are able to memorize all of the available data. The data is also becoming contaminated with the imperfect products of generative AI, which, if left unchecked, would lead to the degeneration of such techniques.

Opacity and Underdetermination

Advances in ML have led to algorithms that are good at predicting the outcomes of highly complex systems. They allow us to make largely accurate predictions and classifications at scale in cases where we would not otherwise be able to do so, or where the usual analytical models and simulations would be more resource intensive. However, there is some tendency to apply advanced ML methods to a problem even if a simpler method would achieve results that are just as good for practical purposes and be less resource intensive: for example, using a neural network to do simple linear regression (whose closed-form solution is sketched below), or asking a chatbot for the definition of a word that could be found quickly and accurately in a lookup table or dictionary. Such uses are …
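As a hedged illustration of "incrementally reducing error," here is a minimal Python sketch of gradient descent with backpropagation through a single neuron like the one sketched above; the learning rate, squared-error loss, and training data are arbitrary illustrative choices:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# One training example: input signals and the output we want produced.
x, target = [0.5, -1.0], 1.0
w, b = [0.8, 0.3], 0.1   # initial weights and bias signal
lr = 0.5                  # step size for each downhill move

for step in range(200):
    # Forward pass: weighted sum of inputs, then the squashing activation.
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    out = sigmoid(z)
    # Squared error measures how far the output is from the target.
    error = out - target
    # Backpropagation: the chain rule turns the error into a gradient,
    # the local "direction of steepest descent" for each weight.
    dz = 2 * error * out * (1 - out)
    w = [wi - lr * dz * xi for wi, xi in zip(w, x)]
    b -= lr * dz

print(round(out, 3))  # climbs toward the target of 1.0 as the error shrinks
```

Each iteration takes one small step "downhill" from where the weights currently stand, which is all the fog of the high-dimensional terrain permits.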
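And to illustrate the point about simpler methods sufficing: ordinary least-squares linear regression has a closed-form solution that requires no training loop at all. A minimal sketch, with made-up data:

```python
# Ordinary least squares for y = slope * x + intercept,
# computed directly rather than by training a network.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 8.0]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx
print(slope, intercept)  # ~2.0 and ~0.05: the line fits this data directly
```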

Topics

Ethics and Social Impacts of AI · Explainable Artificial Intelligence (XAI) · Artificial Intelligence in Healthcare and Education