This article started as a comment on https://ieet.org/index.php/IEET2/more/20150427umbrello (by Steven Umbrello and Tina Forsee) that grew so long it could not be posted there, so I decided to publish it here on my own blog instead and leave a link in a comment. This article should therefore be read in the context of the article it is commenting on.
Why is there ethics at all? Why is there nothing wrong with taking a hammer and beating a stone, but everything wrong with taking a hammer and beating a human being? The reason is simple: the human being would suffer; the stone would not. The subjective experience of the human being is what makes ethics necessary.
We assume that animals, at least the more sophisticated ones, also have such a subjective side, so we extend some of the ethical principles to them as well. There is obviously no point in giving a dog the right to free speech or the free choice of religion, but we should not beat it. As we can see, the particular rights to be granted to an experiencing being depend on its specific properties.
Whether a being qualifies for ethical treatment must not depend on its shape or outward nature. The dog is cute, the snake is not. Maybe the snake does not have any subjective experience; maybe it is, in that sense, a zombie. But as long as we do not understand how subjective experience, and especially suffering, arises, we should assume that a snake can also suffer. So it must be treated ethically. Whether we like something or not, whether we find it ugly or cute, disgusting or wonderful or strange, must not play any role here. I would therefore question the validity of the approach taken in the article by Steven Umbrello and Tina Forsee: by asking which entity we would prefer, it presupposes that such a preference is valid, and I doubt that it is.
Being natural or artificial cannot be the criterion either. The distinction is a doubtful one. If I am standing on the banks of the river Rhine, am I seeing something natural or something artificial? In some places, the river banks consist of walls or banked-up stones. In the age of genetic technology, building a living organism from scratch is coming within reach. Every artificial system has natural components, and every natural system operates according to the same laws of physics. So this distinction is not as fundamental as it might seem. But if a system we build artificially is sentient, that alone qualifies it for ethical treatment. With respect to AI, the question is therefore whether sentience in a machine is possible.
If it is possible to build a machine in such a way that it is sentient, that it has subjective experience, we must apply the rules of ethics to it. This would apply even to an “artificial animal” that is not intelligent. If it is intelligent, we would have to grant it certain rights. It might be a form of intelligence totally unlike our own. An artificial intelligent system might not have a human body and might not have human-like experiences, but it might have a form of experience different from our own, and that would still require giving it some rights.
The question is: is it possible to build a machine that is sentient? The classical answer to this question was no. Descartes introduced his concept of res cogitans, as opposed to res extensa, and (for whatever reason) restricted it to humans. From his point of view, beating a dog would have been all right, because even if the dog showed signs of suffering, these were just the result of mechanical processes inside it. Leibniz, in his Monadology, wrote about the question of sentient machines:
“It must be confessed, moreover, that perception, and that which depends on it, are inexplicable by mechanical causes, that is, by figures and motions. And, supposing that there were a mechanism so constructed as to think, feel and have perception, we might enter it as into a mill. And this granted, we should only find on visiting it, pieces which push one against another, but never anything by which to explain a perception. This must be sought, therefore, in the simple substance, and not in the composite or in the machine.”
So the classical answer is dualism. Philosophers introduced things like res cogitans or monads. The sentient thing is something non-mechanical. If we look at an animal’s or a human’s brain, we will only see neurons: electrical impulses, electro-chemical processes triggering other electro-chemical processes. We would find, in Leibniz’s words, “pieces which push one against another, but never anything by which to explain a perception”. This does not really solve the problem, and, as La Mettrie criticized, any attempt to really solve it was blocked by the introduction of such a special sentient substance, even though the experiencing self is clearly affected by what happens in the material world and can be changed by material influences, e.g. by drugs.
So what if we start with the assumption that dualism is wrong? We reject Descartes and Leibniz. Let us begin with a thought experiment (there is actually a project aiming to make this a reality; see http://asifoscope.org/2012/11/20/an-open-letter-to-the-human-brain-project/). Consider a very large computer system in which we build up a simulation of a human brain at the molecular level. This simulation contains simulated neurons, and simulated nerve impulses travel through them.
Assume a natural human brain is having a certain experience; let’s say you smell a rose. This sensation in the olfactory bulb creates a certain pattern of neural impulses going through the brain. If dualism is wrong, this pattern of neural impulses must somehow generate the subjective experience of the scent of a rose. Now, if we cause the corresponding simulated pattern of nerve impulses in the simulated brain, would there be the same subjective experience?
The neurons themselves are not accessible to our subjective experience. Molecules in them are exchanged for others all the time as blood flows through the brain and exchanges molecules with it. The microscopic properties of the neurons are constantly changing. These changes, as far as they are part of the normal brain metabolism, do not affect the subjective experience. So there are changes in the molecular structure that do not affect the conscious being. There is, in other words, a horizon of accessibility. Events behind that horizon are not accessible to the conscious being; they happen in its “automatic infrastructure”.
Such horizons also exist in technical systems. If you publish an article on a blogging platform like WordPress, for example, the technicians might move your data from one server to another one, from a hard disk to flash memory or optical disk, from a server in America to one in Europe, without you noticing any change. Some properties of the system are hidden. They are behind a horizon of accessibility from the point of view of the user.
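This kind of hidden substrate swap can be sketched in code. The following is a minimal illustration (all class and method names are invented for the example; this is not WordPress’s actual architecture): the reader of a blog interacts only with a publishing interface, while the storage back-end can be exchanged behind that interface without any observable difference.

```python
# Illustrative sketch: the same article can be served from interchangeable
# storage back-ends; the swap happens behind the reader's "horizon of
# accessibility". All names here are hypothetical.

class HardDiskStore:
    """Stands in for storage on a hard disk."""
    def __init__(self):
        self.data = {}
    def write(self, key, text):
        self.data[key] = text
    def read(self, key):
        return self.data[key]

class FlashStore:
    """Stands in for storage on flash memory, a different physical substrate."""
    def __init__(self):
        self.data = {}
    def write(self, key, text):
        self.data[key] = text
    def read(self, key):
        return self.data[key]

class BloggingPlatform:
    """The reader only ever sees this interface; the back-end is hidden."""
    def __init__(self, store):
        self._store = store
    def publish(self, slug, text):
        self._store.write(slug, text)
    def view(self, slug):
        return self._store.read(slug)
    def migrate(self, new_store):
        # The technicians move the data; the reader notices nothing.
        for key, text in self._store.data.items():
            new_store.write(key, text)
        self._store = new_store

blog = BloggingPlatform(HardDiskStore())
blog.publish("rose", "The scent of a rose...")
before = blog.view("rose")
blog.migrate(FlashStore())   # exchange the physical substrate
after = blog.view("rose")
print(before == after)       # prints True: the article is indistinguishable
```

The point of the sketch is only that identity at the interface level is compatible with complete replacement at the substrate level, which is the analogy the argument below relies on.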
Now look at a process of thinking or experience in the natural and in the simulated brain. Assume that the biological brain is sentient. If dualism is wrong (as we have assumed), the consciousness cannot be something that could just as well not be there (i.e. zombies are impossible). If the subjective experience is there in the natural brain, it must also be there in the simulated brain, and it would have to be subjectively indistinguishable from that of the natural brain, because the fact that its cells and their molecular components are simulated would lie behind its horizon of accessibility. Switching from the biological neuron to the simulated neuron would be like switching from hard disk to flash memory technology on the blogging platform: the physical system is now a totally different one, but from the point of view of the emulated system, nothing changes. The consciousness in the biological brain would have no possibility of distinguishing whether it is “running on” biological neurons or on neurons simulated in a computer, and there is no reason why that consciousness should disappear if we exchange the technology in which the brain is implemented. The subjective experience of the smell of a rose must, therefore, be exactly the same in the biological and in the simulated brain.
But if a simulated brain inside a computer system could be sentient, then sentience or consciousness inside a machine is a real possibility. We must take into account the possibility that artificial systems, including non-biological ones, have a subjective side. As the example of the dog has shown, which rights exactly one has to grant to a system depends on additional features of that system. If an artificial sentient system has no human body and lives in a totally different experiential world, the rights to be granted to it might have to be different, and unprecedented philosophical questions come up at this point. (However, not having a body does not take certain rights away. The brain simulation the Human Brain Project is trying to create would be equivalent to a newborn baby with locked-in syndrome. In my view, that would be a cruel thing to do, and I regard the project as intrinsically unethical; see my open letter referenced above.)
Some remarks outside ethics: the idea of the horizon of accessibility provides an approach to solving the Leibniz gap problem. The consciousness in the brain exists as an independent ontological entity exactly because of the existence of this horizon. The mechanical components pushing one against another, to use Leibniz’s mill image, do not exist (i.e. are imperceptible) from the point of view of that system. It is emulated by the biological or technical system (I am using the term emulation here in a sense similar to the one used in computer science). Such an emulated system can be described and understood independently of the physical properties of the emulator it is running on. If it is an information-processing system, a system that can be described as an observer, it might have a perception of itself, i.e. it might exist from its own point of view. It is not identical to the system that emulates it (we could exchange components in the underlying system without affecting the emulated system). So there is an “ontological transition” (https://creativisticphilosophy.wordpress.com/2014/07/02/generating-objects-towards-a-procedural-ontology/) from a world of neurons to an emulated, but nonetheless real, world of perceived phenomena, a world that exists from its own point of view and thus has a degree of ontological autonomy. This is not simply reductionism: the system can be described without referring to the underlying system. I do not claim that this is the whole story; the “hard problem” partially remains, but it offers an approach. The experiences are not, as the essentially dualistic zombie theory assumes, something that could just as well not be there. We are thinking about our subjective experiences, so in order to understand how they are possible, we would have to study how such thoughts about the experiences can come about inside an experiencing system, and such studies might open up a way to solving the problem.
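The computer-science sense of emulation used above can be made concrete with a toy example. Here the same small program (a made-up stack-machine program; everything in this sketch is invented for illustration) produces identical results on two interpreters with entirely different internal structures, so the program’s behaviour can be described without any reference to either implementation:

```python
# Toy illustration of "emulation": one program, two internally different
# emulators, identical observable behaviour. All names are hypothetical.

# A tiny stack-machine program computing (2 + 3) * 4.
PROGRAM = [("push", 2), ("push", 3), ("add", None), ("push", 4), ("mul", None)]

def run_recursive(program, stack=None, i=0):
    """Interpreter A: a recursive implementation."""
    stack = [] if stack is None else stack
    if i == len(program):
        return stack[-1]
    op, arg = program[i]
    if op == "push":
        stack.append(arg)
    elif op == "add":
        stack.append(stack.pop() + stack.pop())
    elif op == "mul":
        stack.append(stack.pop() * stack.pop())
    return run_recursive(program, stack, i + 1)

def run_iterative(program):
    """Interpreter B: an iterative implementation -- different "pieces
    pushing one against another", same emulated behaviour."""
    stack = []
    for op, arg in program:
        if op == "push":
            stack.append(arg)
        elif op == "add":
            stack.append(stack.pop() + stack.pop())
        elif op == "mul":
            stack.append(stack.pop() * stack.pop())
    return stack[-1]

print(run_recursive(PROGRAM), run_iterative(PROGRAM))  # prints: 20 20
```

Opening either interpreter and looking at its internals (“entering it as into a mill”) reveals only loops, calls, and list operations, never the arithmetic fact the program embodies; that fact exists at the level of the emulated program, which is the analogy intended here.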
Since Umbrello and Forsee started their article with some science fiction, let me end mine with some SF of my own making: I have written about this topic in http://asifoscope.org/2013/05/03/jupiter/.