HOMO IN MACHINA: ETHICS OF THE AI SOUL – A Response

[Image: Vintage Franz Zajizek Astronomical Clock machinery, Vienna]

This article started as a comment on http://theleatherlibraryblog.com/homo-in-machina-ethics-of-the-ai-soul/ (by Steven Umbrello and Tina Forsee) that grew too long to be posted there, so I decided to publish it as an article on my own blog instead and to leave a comment there linking to it. This article should therefore be read in the context of the article it is commenting on.

Why is there ethics at all? Why is there nothing wrong with taking a hammer and beating a stone, but everything wrong with taking a hammer and beating a human being? The reason is simple: the human being would suffer; the stone would not. The subjective experience of the human being is what makes ethics necessary.

We assume that animals, at least the more sophisticated ones, also have such a subjective side, so we would extend some of the ethical principles to them as well. There is obviously no point in giving a dog the right to free speech or to free choice of religion, but we should not beat it. As we can see, the particular rights to be given to an experiencing unit depend on its specific properties.

For a system to qualify for ethical treatment, its shape and outward nature must not have any significance. The dog is cute, the snake is not. Maybe the snake does not have any subjective experience; maybe it is, in that sense, a zombie. But as long as we do not understand how subjective experience and especially suffering arise, we should assume that a snake also can suffer. So it must be treated ethically. Whether we like something or not, whether we find it ugly or cute, disgusting or wonderful or strange, must not play any role here. I would therefore question the validity of the approach taken in the article by Steven Umbrello and Tina Forsee: by asking which entity we would prefer, it presupposes that such a preference is a valid criterion, and I doubt that it is.

Being natural or artificial can also not be the criterion here. The distinction is a doubtful one. If I am standing at the banks of the river Rhine, am I seeing something natural or something artificial? At some places, the river banks consist of walls or banked up stones. In the age of genetic technology, building a living organism from scratch is getting within reach. Every artificial system has natural components and every natural system operates according to the same laws of physics. So this is a distinction that is not as fundamental as it might seem. But if a system we build artificially is sentient, that alone qualifies it for ethical treatment. So with respect to AI, the question is if sentience in a machine is possible.

If it is possible to build a machine in such a way that it is sentient, that it has subjective experience, we must apply rules of ethics to it. This would apply to an “artificial animal”, even if it is not intelligent. If it is intelligent, we would have to grant it certain rights. It might be a form of intelligence that is totally unlike our own. An artificial intelligent system might not have a human body and might not have human-like experiences, but it might have a form of experience different from our own, and that would require giving it some rights.

The question is: is it possible to build a machine that is sentient? The classical answer to this question was: no. Descartes introduced his concept of res cogitans, as opposed to res extensa, and (for whatever reason) restricted it to humans. From his point of view, beating a dog would have been all right because even if the dog showed signs of suffering, these were just a result of mechanical processes inside it. Leibniz, in his Monadology, wrote about the question of sentient machines:

“It must be confessed, moreover, that perception, and that which depends on it, are inexplicable by mechanical causes, that is, by figures and motions. And, supposing that there were a mechanism so constructed as to think, feel and have perception, we might enter it as into a mill. And this granted, we should only find on visiting it, pieces which push one against another, but never anything by which to explain a perception. This must be sought, therefore, in the simple substance, and not in the composite or in the machine.”

So the classical answer is dualism. Philosophers introduced things like res cogitans or monads. The sentient thing is something non-mechanical. If we look at an animal’s or human’s brain, we will only see neurons, electrical impulses, electro-chemical processes triggering other electro-chemical processes. We would find, in Leibniz’ words, “pieces which push one against another, but never anything by which to explain a perception”. This does not really solve the problem, and, as La Mettrie criticized, any attempt to really solve the problem was stopped by introducing such a special sentient substance, although the experiencing self is clearly affected by what happens in the material world and can be changed by material influences, such as drugs.

So what if we start with the assumption that dualism is wrong? We reject Descartes and Leibniz. Let us start with a thought experiment (and there is actually a project to make this a reality, see http://asifoscope.org/2012/11/20/an-open-letter-to-the-human-brain-project/). Consider a very large computer system in which we build up a simulation of a human brain at the molecular level. So this simulation contains simulated neurons. Simulated nerve impulses are traveling through those neurons.
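
To make the idea of “simulated neurons” a bit more concrete, here is a minimal toy sketch (my own illustration, not the molecular-level model the thought experiment assumes, and certainly not the Human Brain Project’s actual code): a single leaky integrate-and-fire neuron that accumulates simulated input and emits simulated impulses.

    # Toy illustration only: a single leaky integrate-and-fire neuron.
    # The thought experiment assumes a vastly more detailed, molecular-level
    # simulation; this sketch just shows what "simulated nerve impulses" means.

    class SimulatedNeuron:
        def __init__(self, threshold=1.0, leak=0.9):
            self.potential = 0.0       # membrane potential (arbitrary units)
            self.threshold = threshold
            self.leak = leak           # fraction of potential kept per time step

        def step(self, input_current):
            """Advance one time step; return True if the neuron fires an impulse."""
            self.potential = self.potential * self.leak + input_current
            if self.potential >= self.threshold:
                self.potential = 0.0   # reset after firing
                return True
            return False

    neuron = SimulatedNeuron()
    impulses = [neuron.step(0.3) for _ in range(20)]
    print(impulses)  # a pattern of simulated impulses over twenty time steps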

Assume a natural human brain is having a certain experience; let’s say you smell a rose. This sensation in the olfactory bulb creates a certain pattern of neural impulses going through the brain. If dualism is wrong, this pattern of neural impulses must somehow generate the subjective experience of the scent of a rose. Now, if we cause the corresponding, simulated pattern of nerve impulses in the simulated brain, would there be the same subjective experience?

The neurons are not accessible to our subjective experience. Molecules in them are exchanged all the time as blood flows through the brain and exchanges molecules with it. The microscopic properties of the neurons are constantly changing. These changes, as far as they are part of the normal brain metabolism, do not affect the subjective experience. So there are changes that can happen in the molecular structure that do not affect the conscious being. There is, in other words, a horizon of accessibility. Events behind that horizon are not accessible to the conscious being. They happen in its “automatic infrastructure”.

Such horizons also exist in technical systems. If you publish an article on a blogging platform like WordPress, for example, the technicians might move your data from one server to another one, from a hard disk to flash memory or optical disk, from a server in America to one in Europe, without you noticing any change. Some properties of the system are hidden. They are behind a horizon of accessibility from the point of view of the user.
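
The same kind of horizon can be sketched in a few lines of code (an illustrative toy of my own, not WordPress’s actual architecture): the code that displays an article only ever talks to an abstract storage interface, so exchanging the physical backend is invisible to it.

    # Illustrative sketch, not WordPress's real architecture: the caller sees
    # only the abstract interface; which physical backend holds the bytes is
    # behind its "horizon of accessibility".

    from abc import ABC, abstractmethod

    ARTICLES = {"homo-in-machina": "Why is there ethics at all? ..."}

    class Storage(ABC):
        @abstractmethod
        def read(self, key: str) -> str: ...

    class HardDiskStorage(Storage):
        def read(self, key: str) -> str:
            # pretend the bytes come from a spinning disk in one data centre
            return ARTICLES[key]

    class FlashStorage(Storage):
        def read(self, key: str) -> str:
            # pretend the same bytes come from flash memory somewhere else
            return ARTICLES[key]

    def show_article(storage: Storage, key: str) -> str:
        # this code cannot tell which backend it is running against
        return storage.read(key)

    # The technicians can swap the backend; the reader notices no change.
    assert show_article(HardDiskStorage(), "homo-in-machina") == \
           show_article(FlashStorage(), "homo-in-machina")

The point of the sketch is only structural: whatever lies below the interface is, for the user of that interface, simply not there.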

Now look at a process of thinking or experience in the natural and in the simulated brain. Assume that the biological brain is sentient. If dualism is wrong (as we have assumed), the consciousness cannot be something that could just as well not be there (i.e. zombies are impossible). If the subjective experience is there in the natural brain, it must also be there in the simulated brain, and it would have to be subjectively indistinguishable from the natural brain because the fact that its cells and the molecular components of these cells are simulated would be behind its horizon of accessibility. Switching from the biological neuron to the simulated neuron would be like switching from hard disk to flash memory technology on the blogging platform. The physical system is now a totally different one, but from the point of view of the emulated system, nothing changes. The consciousness in the biological brain would have no possibility to distinguish if it is “running on” biological neurons or on neurons simulated in a computer and there is no reason why that consciousness should disappear if we exchange the technology in terms of which the brain is implemented. The subjective experience of the smell of a rose must, therefore, be exactly the same in the biological and in the simulated brain.

But if a simulated brain inside a computer system could be sentient, that means that sentience or consciousness inside a machine is a real possibility. We must take into account the possibility that artificial systems, including non-biological ones, have a subjective side. As the example of the dog has shown, which rights exactly one has to grant to a system depends on additional features of the system. If an artificial sentient system has no human body and lives in a totally different experiential world, the rights to be granted to it might have to be different, and there are unprecedented philosophical questions coming up at this point. (However, not having a body does not take certain rights away. The brain simulation the “Human Brain Project” is trying to create would be equivalent to creating a newborn baby with locked-in syndrome. In my view, that would be a cruel thing to do, and I regard the project as intrinsically unethical; see my open letter referenced above.)

Some remarks outside ethics: The idea of the horizon of accessibility provides an approach to solving the Leibniz gap problem. The consciousness in the brain exists as an independent ontological entity exactly because of the existence of this horizon. The mechanical components pushing each other, to use the mill image of Leibniz, do not exist (i.e. are imperceptible) from the point of view of that system. It is emulated by the biological or technical system (I am using the term emulation here in a sense similar to the one used in computer science). Such an emulated system can be described and understood independently of the physical properties of the emulator it is running on. If it is an information processing system, a system that can be described as an observer, it might have a perception of itself, i.e. it might exist from its own point of view. It is not identical to the system that emulates it (we could exchange components in the underlying system without affecting the emulated system). So there is an “ontological transition” (https://creativisticphilosophy.wordpress.com/2014/07/02/generating-objects-towards-a-procedural-ontology/), from a world of neurons to an emulated, but nonetheless real, world of perceived phenomena, a world that exists from its own point of view and thus has a degree of ontological autonomy.

This is not simply reductionism: the emulated system can be described without referring to the underlying system. I do not claim that this is the whole story; the “hard problem” partially remains, but it offers an approach. The experiences are not, as the essentially dualistic zombie theory assumes, something that could just as well not be there. We are thinking about our subjective experiences, so in order to understand how they are possible, we would have to study how such thoughts about the experiences can come about inside an experiencing system, and such studies might open up the way to solving the problem.
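
As a rough analogy for the sense of “emulation” I have in mind (a deliberately simple sketch, nothing more), consider the same tiny program for a toy register machine run on two differently built emulators: everything about the program’s behaviour can be stated at its own level, without mentioning how either host stores its registers.

    # A deliberately simple analogy for "emulation": the same emulated machine,
    # run on two hosts that store its state in completely different ways.
    # Its behaviour ("increment the register five times, then halt") can be
    # described without any reference to the hosts' internals.

    PROGRAM = [("inc",)] * 5 + [("halt",)]

    def run_on_list_host(program):
        register = [0]                  # host A keeps the register in a list
        for instruction in program:
            if instruction[0] == "inc":
                register[0] += 1
            elif instruction[0] == "halt":
                break
        return register[0]

    def run_on_dict_host(program):
        register = {"r0": 0}            # host B keeps the register in a dict
        for instruction in program:
            if instruction[0] == "inc":
                register["r0"] += 1
            elif instruction[0] == "halt":
                break
        return register["r0"]

    # Exchanging the underlying host leaves the emulated system unchanged.
    assert run_on_list_host(PROGRAM) == run_on_dict_host(PROGRAM) == 5

Exchanging components of the host, as in this toy, changes nothing at the emulated level; that is the sense in which the emulated system has a degree of descriptive independence.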

Since Umbrello and Forsee started their article with some science fiction, let me end mine with some SF as well, of my own making: I have written about this topic in http://asifoscope.org/2013/05/03/jupiter/.

(The picture is from https://commons.wikimedia.org/wiki/File:Vienna_-_Vintage_Franz_Zajizek_Astronomical_Clock_machinery_-_0518.jpg.)


22 comments

  1. Reblogged this on Diotima's Ladder and commented:
    Many thanks to Nannus for giving us this thoughtful reply. Please check out his response!

    1. My article contains a link to yours. I have also added a comment under your article with a link to my article. If you want to put a link at a more prominent place, it’s OK with me.

      1. Great! I was just thinking of adding a link to your blog post on the IEET site where the article was originally published. I would just add it in the comments section so anyone reading from there could come to your blog.

        1. Just go ahead!

          1. Thanks!

            I’ll comment soon…just took Geordie for a walk and now I’m recovering. 🙂

  2. Oh I knew that snake stuff would come back to bite me later! 🙂

    “But as long as we do not understand how subjective experience and especially suffering arise, we should assume that a snake also can suffer. So it must be treated ethically.”

    Oh of course I agree. I was being totally irrational when I said those things about taking vengeance on the “evil” snake who bit Geordie.

    “Whether we like something or not, whether we find it ugly or cute, disgusting or wonderful or strange, must not play any role here.”

    I agree with this too, but by “anthropomorphic” I meant it in a broader sense. Why do we assume that a stone doesn’t feel pain when we kick it down the street? My answer is that it is so unlike us and exhibits nothing that resembles suffering when we kick it. So I took the natural world and our treatment of it as a platform or lens through which to view something artificially created. We of course all disagree about ethics in the natural world. Some people think it’s wrong to kill flies, etc. So the “intuition pump” was not meant to be prescriptive, but descriptive. I wanted to allow for all kinds of strange new ways for AI to be similar to us, but this may have been lost in the natural-artificial thought pump.

    “I would therefore question the validity of the approach taken in the article by Steven Umbrello and Tina Forsee: by asking which entity we would prefer, it presupposes that such a preference is a valid criterion, and I doubt that it is.”

    Actually, we didn’t make the argument that our intuitions on these matters are valid. Strictly speaking, we don’t know that the stone doesn’t feel pain. We are forever confined to our own subjective experience and assumptions, and that’s the measuring stick, although I wanted to show that it’s a loose measuring stick. We have the capacity to take in creatures that are not exactly like us and treat them as the same, even when we rationally know the evidence of “sameness” is not quite there. We can be very generous in other words. (Conversely, we also have the capacity to be extremely narrow, as I was in the snake example.) However, our intuitions are not the same thing as reasoned argument. It could very well be that we should bestow rights to stones.

    “Being natural or artificial can also not be the criterion here. The distinction is a doubtful one. If I am standing at the banks of the river Rhine, am I seeing something natural or something artificial? At some places, the river banks consist of walls or banked up stones. In the age of genetic technology, building a living organism from scratch is getting within reach. Every artificial system has natural components and every natural system operates according to the same laws of physics. So this is a distinction that is not as fundamental as it might seem.”

    The natural-artificial divide is not something I meant as a guideline for creating laws for AI ethics. It was meant to be a description really, a description of us and how anthropomorphic tendencies in us really do guide us now—loosely—in determining rights. I was assuming on an as-if basis that IF AI becomes sufficiently advanced so that it can really fool us, we might take into consideration the fact that we don’t have access to other’s experiences as natural creatures, and we might just bestow ethical treatment to AI anyway. However, that natural bias is real. Siri outsmarts my Geordie in many ways, but I wouldn’t give Siri any rights. Suppose a lizard or, shall we say, a snake, came up and started behaving like Siri. I’d first of all question my mental health, but brushing that aside, I’d probably alter my estimation of snakes and their rights. Yet with Siri, knowing it’s artificial, that changes things. Siri as it is now does not deserve rights at all, yet that snake that bit Geordie does to some degree. (Even though I wanted to kill it.) 🙂

    Like I said, not a guideline, just a description. On the other hand, what would be a guideline? That’s a much harder question. And do we really have one now with natural creatures? And if we do, isn’t it loose, and doesn’t it depend on anthropomorphic tendencies in us? How do animal rights get understood? I find all of this quite blurry, half-reasoned. At the same time I don’t know what a more rational guideline would be.

    I don’t mean to say that we should just do whatever we feel. This is dangerous. The snake thing…let’s call that my irrationality. Although even there we see the flip side of the coin in anthropomorphic intuition. I was defending the pack, so to speak. The thing that makes us see sameness is also what makes us see difference. All kinds of injustice stem from this intuition I speak of, so we must be careful.

    “So with respect to AI, the question is if sentience in a machine is possible.”

    This is a good question, but in our paper we assume we can’t know. Turing points out that we don’t know the sentience of each other, and so we took this as the starting point. However, we also wanted to respect that behavior is not ALL that matters.

    “If dualism is wrong (as we have assumed), the consciousness cannot be something that could just as well not be there (i.e. zombies are impossible). If the subjective experience is there in the natural brain, it must also be there in the simulated brain, and it would have to be subjectively indistinguishable from the natural brain because the fact that its cells and the molecular components of these cells are simulated would be behind its horizon of accessibility.”

    It sounds like you take it that once we simulate the human brain, all the accompanying subjective experiences will be there too. I don’t make this assumption. I don’t really know. And this is one of those arguments that could go on for a very long time.

    I really didn’t want to get into the zombie stuff because I thought it would be better to show that we needn’t worry about the zombie problem too much in bestowing rights. The zombie problem comes about in a mode of doubt. What I was trying to show was that that doubt is something we don’t really have when it comes to natural creatures in everyday life, but for AI it would obviously have to be there to some extent. But then there might be a point at which the doubt becomes suspended for many reasons, for behavioral and physical aspects or some combination. Will we ever know that AI is sentient? I assume that we won’t with certainty, that the debate could just go on and on. Instead I prefer to think of the question as a matter of degree of believability with various factors at play.

    BTW, I like your phrase “horizon of accessibility”…it sounds kind of like Husserl. 🙂

    Also, not sure Leibniz was a dualist. He rejected materialism, but also Cartesian dualism. Perhaps we’ll leave that one for another day. Monads are weird.

    Thank you for taking the time to write this. Your thoughts mean a lot to me. I know you have done a lot more reading in the matter of AI than I have, and I was very eager to hear your response.

    1. I’ll read this later. I am watching snooker (end game world championship). 🙂

      1. I had to look up “snooker”…sounds fun! Enjoy the game.

    2. Sorry I brought up the snake example; I thought that was a good real world example, but maybe I should have left it. 🙂 How is Geordie, by the way?

      I think basically there are two different approaches one can take here. One is to look from the outside, in terms of similarity or lack of similarity. That seems to be the approach you are taking. It is also the approach taken by Turing with his Turing test.

      Example: “Why do we assume that a stone doesn’t feel pain when we kick it down the street? My answer is that it is so unlike us and exhibits nothing that resembles suffering when we kick it.”

      The problem here is that there is a lot of subjective and intuitive judgement involved and we do not really have a basis yet to make this more objective. I view your article as an attempt to move into such a direction, to clear the matter a little bit more.

      The other approach is to try to develop a theory of what consciousness and sentience are and how they work, or at least some ideas in the direction towards such a theory. Based on such a theory, one could then try to tackle the question. The stone does not suffer because none of the processes that must go on inside a sentient system is going on inside a stone, etc. That is the kind of approach I am trying here. I do not have a fully developed theory of consciousness, but I think I have an approach, a few insights at least about how I would try to develop such a theory (the concepts of “emulation”, “horizon of accessibility” etc. belong in this context). The theory is rudimentary, but based on these rudiments, I think I can build some arguments, at least in some cases (e.g. when I am arguing why I think the “Human Brain Project” is unethical, see my post on that topic).

      The problem here is that the theory is incomplete and debatable, and many people may not buy into it. If I succeeded in developing this theory further, would there ever be a possibility to check if it is true? This would be a theory about the subjective side of any “observer”, no matter whether that observer is a human being, a complex or simple animal, another organism, a machine, a computer or software system inside a computer or some other physical system. We only have access to our own consciousness.

      I think these approaches do not exclude each other. I think it is necessary to move on in both directions. One reason is that we are faced with the possibility of artificial systems that might require some kind of ethical treatment; the other reason is that I think our ethical approach towards animals is murky. Descartes restricted his res cogitans to humans (without really justifying this), and as a result, animals just dropped out of ethics.

      1. “…One is to look from the outside, in terms of similarity or lack of similarity. That seems to be the approach you are taking. It is also the approach taken by Turing with his Turing test.”

        Yeah, I was basing my approach on Turing, but I hoped to tweak it a bit to allow for more than just behavior or external signs. I wanted to leave that element a bit more open, without getting into those harder questions and without going into too much detail. First of all, it’s safer territory for me. I don’t really know what I think about it all, but taking on the question of what makes a being sentient seemed to be too much for me.

        “The problem here is that the theory is incomplete and debatable, and many people may not buy into it. If I succeeded in developing this theory further, would there ever be a possibility to check if it is true?”

        Yeah, these are tough questions. Even the question of checking if such a theory could possibly hold in the philosophical court would take quite a lot of space. I think the kind of thing you’re doing would deserve enough space for a book…or two. Or five. 🙂

        “I think these approaches do not exclude each other. I think it is necessary to move on in both directions.”

        I totally agree.

        No worries about the snake example…I was just teasing you. Geordie’s doing great! He’s back to his old self, except maybe a little more clingy. He follows me all over the place. I love it though. 🙂

        Speaking of anthropomorphic tendencies, I’ve taken to singing Mr. Rogers and La Vie en Rose to him. He knows when I start going into “It’s a beautiful day in the neighborhood” it means it’s time to go for a walk (I put on my shoes and everything, just like Mr. Rogers.) Then at night I sing La Vie en Rose in a lullaby voice, and that means it’s bedtime.

    3. A few more words about AI and what I think about it (I have adopted some of these views from my friend Kurt Ammon; however, my ideas about consciousness are my own and my views might differ from Kurt’s in some details):

      • Personally, I think the Turing Test is not very useful. The ability of a program, e.g. a chat-bot, to fool a human user into believing it is a human, does not say much about its intelligence.
      • I do not think that there really is AI yet. There is a research direction “AI” and there are systems coming out of that that are hyped to be “intelligent”, but they are not, in my opinion. Systems like SIRI are collections of special purpose algorithms and these are hand-programmed by people. These systems have no intelligence of their own.
      • Intelligence and sentience are two different things. Some animals might have subjective experiences of some kind and an ability to suffer, without having much intelligence. The same might be true for artificial systems. There seems to be a connection between intelligence and sentience, but it is not clear what it is, although I think it has something to do with reflexivity. Reflexivity seems to be necessary to be intelligent and it seems to be necessary for consciousness as well. However, these are half-baked thoughts.
      • In my opinion, in order to be intelligent, a system must be creative. This means the ability to move out of the scope of any formal description you can make about the system. Such a system cannot be described as an algorithm in the sense of a Turing-machine. One can think of this as the ability to compute functions that are not Turing-Computable. Such a system is not describable completely in terms of any finite set of fixed laws, because it can invent its way out of any such description. Such a system is able to extend itself. Such a system is fallible.
      • AI (as a research paradigm) has failed to produce such systems. AI has not even tried to. They did not even see the problem. They assume that the so-called Church’s Thesis is correct and thus that everything you can do with a computer is what you can do with Turing machines. I think that Ammon has shown that this view is wrong. Artificial creative systems are possible. AI is trying to create intelligent algorithms, but that is not possible. AI has no clear concept of what intelligence is. They define it in the way of “doing things that humans do” etc. but there is no definition. AI got on the wrong track during the 1950s and 1960s already. Standard AI, in my view, is an example of pathological science in the sense of Kuhn.
      • Note that my views of what intelligence is and what artificial intelligence that deserves this name would be are outsider’s views. I am not part of the AI research community and I am not part of the academic establishment of AI. Most AI people would probably regard my views as crazy or nonsensical. I am a total outsider there.
      • That is why I am calling what I am doing “creativistic philosophy” rather than AI. It is philosophy rather than science because as long as science is understood as explaining systems in terms of laws, you have a problem when you are trying to describe systems for which every description is incomplete in principle. But that limited concept of science is part of the problem. If you look into cultural studies, the humanities etc. it is very clear that complete descriptions in terms of fixed laws are impossible. Human cultures develop historically. The AI community is ignoring that. It is trying to put psychology into the science-side of the science/humanities divide and that does not work because of the creative nature of human cognition.

      1. “Personally, I think the Turing Test is not very useful. The ability of a program, e.g. a chat-bot, to fool a human user into believing it is a human, does not say much about its intelligence.”

        I think the test is interesting in its philosophical implications, and useful in a certain area, but I agree that it doesn’t actually tell us whether something is actually sentient or intelligent in itself.

        “I do not think that there really is AI yet. There is a research direction “AI” and there are systems coming out of that that are hyped to be “intelligent”, but they are not, in my opinion. Systems like SIRI are collections of special purpose algorithms and these are hand-programmed by people.”

        I’ve been taking the term “artificial intelligence” in a much looser sense. I don’t know whether I’m misunderstanding the phrase or just using it in a popular way…I’m new to this whole thing.

        “There seems to be a connection between intelligence and sentience, but it is not clear what it is, although I think it has something to do with reflexivity.”

        I definitely take those things as separate things. How these all come together…I have no idea.

        “AI has no clear concept of what intelligence is. They define it in the way of “doing things that humans do” etc. but there is no definition.”

        I’m really unclear about what constitutes “intelligence” in the AI realm. In some cases people use that word to mean “intelligent like people” or even sentient. In other cases, the word doesn’t mean that. I don’t quite know what the word means. I agree that a better definition is needed. It sounds like that’s being debated, that the very definition relies on so many other factors that have yet to be solved.

        “Note that my views of what intelligence is and what artificial intelligence that deserves this name would be are outsider’s views. I am not part of the AI research community and I am not part of the academic establishment of AI. Most AI people would probably regard my views as crazy or nonsensical. I am a total outsider there.”

        Well, me too! I don’t think you’re nonsensical, although I don’t know if that’s any consolation since I’m outside the AI community.

        1. It’s useful to distinguish between what used to be called “hard” AI (and is now called “general” AI, I believe) and “soft” AI. The latter are expert systems, such as Siri or the Jeopardy program (Watson), that are designed to solve specific problems.

          The former is the attempt to replicate something similar to human consciousness, and they are very different fields. Expert systems have made tremendous progress, but hard AI seems to be like fusion power — forever “twenty years” away.

          Personally, I think (hope) it’s a non-starter, but then I’m a bit of a spiritual dualist. XD

          1. I think that “soft AI” does not really deserve the name AI. These are algorithms that contain a limited amount of knowledge, like any other normal software system, to do special things. I think the research paradigm of AI restricted itself to such systems back in the 1950s by adopting the belief that Church’s thesis is right and everything you can do with computers is describable in terms of algorithms. What they really wanted to do was “hard” AI, but they could not and then they had to justify themselves, and “soft” AI was born. But I think it is just hype.

            The real problem is: how does new knowledge arise, or, in other words, how do you build a system that can reprogram itself, so it would not be restricted to any (initial) algorithm you program inside it. A system that can be described by an algorithm can be described by a corresponding formal theory. There is a complete formal description of it. This also holds the other way around: if there is a complete formal description, you can formulate that as an algorithm. A system that can generate new knowledge would therefore be able to move outside of the scope of any formal theory you can suggest about it. Any formal (i.e. exact and explicit) description of such a system would be incomplete. Following an idea suggested by my friend Kurt Ammon, I am calling such systems “creative”. Kurt has proven that Church’s thesis is wrong (see https://creativisticphilosophy.wordpress.com/2013/02/06/an-e%EF%AC%80ective-procedure-for-computing-uncomputable-functions/. I have also seen an even simpler but not yet published version of the proof given there.) This means that systems that are creative in the sense described above are actually possible. It means that when we are thinking about how intelligence works, we need not restrict ourselves to algorithms.

            The funny feature of such systems would be that they cannot be understood completely, because they can modify the way they work. They have a historical aspect. If you define science as describing systems in terms of their laws, a scientific description of such systems is not possible because each such description is incomplete. However, this is the situation the humanities have been dealing with all of the time: you have systems that cannot be defined once and for all, that change historically and for which a complete, exact description is not possible. So what we need is not a way to reduce cultural and psychological phenomena to the realm of science (understood as description of things in terms of fixed laws) but instead an extension of the concept of science into a concept of “Wissenschaft” (to use the German term that actually has this broader meaning) that is not so restricted.

            The reason nothing substantial in the sense of hard AI has come out of AI research in almost 60 years is that, right from the beginning, they restricted themselves to algorithms, i.e. to formal theories and systems that can be described in terms of formal theories. As a result, I think, they defined intelligence out of the scope of their research, and consequently could never find it. They did not even see the real problem.

            Personally, I think artificial intelligent systems are possible. I am not sure we could control them (they could develop out of what we put in initially and go their own ways, so they might not be so useful), and whether it would be ethical to try to control them is indeed an interesting question.

            “…I’m a bit of a spiritual dualist.”

            Personally, I am not a dualist, although I think that emulated systems have their own reality, so there is more than one valid ontology. Maybe that is not classical monism, but I don’t have a problem with that.

            I think it is perfectly possible to have some kind of spirituality based on such a theory of “point of view”. I have written about that here: http://asifoscope.org/2013/04/27/experiencing-sacredness/

            1. “I think that “soft AI” does not really deserve the name AI.”

              I quite agree! (I prefer the term “expert system.”)

              “…the belief that Church’s thesis is right…”

              Just to be sure I’m clear on what you mean here: AIUI, the Church-Turing thesis is about the equivalence of computable functions. It can apply to theories of mind in the sense that IF mind is a computable algorithm, THEN a machine can replicate that algorithm.

              So by ‘belief in that thesis’ you refer to the belief that mind is an algorithm?

              “The real problem is: […] how do you build a system that can reprogram itself, so it would not be restricted to any (initial) algorithm you program inside it.”

              Yes, exactly so. We have systems that can self-modify, but only according to their programming. This seems quite different from what human minds do.

              “Any formal (i.e. exact and explicit) description of such a system would be incomplete.”

              It invokes a bit of Gödel’s incompleteness (not to mention Turing Halting Problem). I love the idea that a complex system may have true things that cannot, even in principle, be calculated. I find the flip side of Gödel interesting as well. What if the “math” that describes mind is not consistent? (Which would explain a great deal about humans!) Such a “mathematics” might also be non-computable.

              “I am calling such systems ‘creative’.”

              Yes! That’s a big part of what I’d require from true AI to consider it intelligent. Creativity, imagination, curiosity, and soul (as in soulful).

              “Kurt has proven that Church’s thesis is wrong”

              I’ll have to visit that page when I get a chance. That’s a surprising assertion.

              “This means that systems that are creative in the sense described above are actually possible.”

              For some definition of “system” this is obviously true. We’re creative! 🙂

              The question is how to replicate that mechanically, since mechanical systems are — thus far — entirely deterministic. Which is to say, algorithmic. As you’ve said several times, this really all depends on a theory of mind, and we just don’t have one.

              I’m not sure AI has failed due to bad choices so much as being forced to follow the only choice currently available. Neural networks hold some promise of creating an evolving meta-system built on a system (such as, perhaps, human mind is built on human brain). And it will be interesting to see what comes of attempts to duplicate the human brain architecture.

              My personal belief (or maybe hope) is: not much.

    4. “Oh I knew that snake stuff would come back to bite me later!”

      Ouch!! XD

  3. Human, dog, snake, lobster, spider, ant, bacteria. What is the threshold for experiencing suffering? How does suffering differ from biological damage reaction and avoidance?

    These seem significant and challenging questions.

    What about a human incapable of feeling any pain? Would beating such a one with a hammer still be wrong? It seems clearly so, so suffering may be neither sufficient nor necessary for ethical considerations.

    I agree “natural” versus “artificial” cannot be a criterion, and — in terms of the question of sentience — neither is the substrate nor the means of production.

    I also agree a key question is whether it possible to build a machine that is sentient.

    There is an (admittedly science fictional) idea that consciousness arises as an emergent property of the incredibly complex standing wave of electrical firing that resonates in the cavity of our skulls.

    While that is an SF notion, it does raise an interesting question about how exactly “what it is like” to be human occurs and whether it may be related to biological or physical aspects of the brain. The “brain in a skull” is the pattern of all (apparently) conscious animals. (Conscious in the sense of environmental awareness and interaction.)

    It’s also possible that consciousness arises due to the neural oscillations (“brain waves”) that produce such interesting EEG patterns. The speed of thought may be instrumental to conscious experience.

    If either of these is true, then no simulation by a distributed, slower machine can succeed. (An older version of the “human brain” simulation involves the population of China with each person acting as a brain cell. They exchange messages exactly as neurons do. The idea, obviously, is to raise the question: where is “experience” in that scenario?)

    (As an aside, I’m not sure about “locked in” concerns — assuming the computer project works. Wouldn’t such a consciousness have input and output systems that allow perception and communication?)

    I do agree with something you said in the comments: I don’t think consciousness is algorithmic. Both the Turing Halting Problem and Gödel’s Incompleteness Theorems address the inability to calculate certain truths. As such, I’m not sure any machine process can ever be truly sentient.

    The trick will be, again as you suggest, a system that rises above itself, that can create new things when confronted with new situations. That is, after all, a key distinguishing characteristic of humans and of highly intelligent animals (such as the Corvidae).

    1. “Human, dog, snake…” – Good questions. I think to answer such questions we need a theory of consciousness, which we do not have at this time. I can only say here in which direction my own thoughts about such a theory are moving.

      “What about a human incapable of feeling any pain? …” – Excellent point. Obviously, pain is not a sufficient criterion. Obviously the matter is not simple. For example, it is clearly wrong to kill somebody who is in an unconscious state (e.g. sleeping). But I think that a theory of consciousness would be helpful to develop ethical guidelines, especially with respect to non-human entities like animals or AI systems. This does not mean that we would be able to derive what ought to be done from how things are, since basic principles, like the principle that we want to minimize suffering or that we should not inflict it, are still ethical judgements that cannot be derived from how things are (and some egoistic or psychopathic people might disagree), but having such a theory could be helpful.

      1. “For example, it is clearly wrong to kill somebody who is in an unconscious state (e.g. sleeping).”

        Yes. Or, worse, locked-in syndrome. A sleeper can be wakened and asked to prove their consciousness. Or what about someone (like my dad) who is deep into Alzheimer’s and is basically just a body?

        Multiple factors come into play, don’t they. Our view of individual humans is part of our view of humanity, which has clearly demonstrated its intellectual capacity. Being “in the club” so to speak grants the privileges of that club.

        There is also, as you’ve pointed out, the idea of any sentient being experiencing suffering.

        But to the real point: Totally agree we need to do some serious thinking about what it means to be intelligent. Should we achieve true AI, it’s going to be a crucial question.

    2. “There is an (admittedly science fictional) idea that consciousness arises as an emergent property of the incredibly complex standing wave of electrical firing that resonates in the cavity of our skulls…” I actually think that the physical basis and the absolute speed of the events do not matter. In the thought experiments with the simulated brain, the Chinese simulation, or Leibniz’ mill, the question is always: “where is ‘experience’ in that scenario?”

      I have two points to make here:

      • We do not only have conscious experiences, we also think and talk about them. So somehow, information from that experience enters our explicit, propositional thinking. I therefore think that the idea (of, e.g. the “Zombie” argument) that the conscious experience could just as well not be there is wrong. If we could find out more about how the perception of consciousness, i.e. the transfer of information about it into thought, takes place, we would understand something about what experience is. At the moment, this is nothing more than an intuitive thought, but I think one could look for an approach to research in this direction.

      • If we look at it from the outside, the experience is not there. From the outside, all we see is a bunch of neurons and signals between them. That is actually the situation we are in when we look at other people: we do not experience their experiences. However, in describing a system, we can describe it in terms of different layers of description, and there is no reason why we should prefer any one of these descriptions and take it as the “real one”. You may have a system emulated in terms of “low-level” neuronal processes. In this emulated system, there might be an observer (i.e. a system capable of storing information), and this system observes not the low-level system it is implemented in but facts from its emulated “high-level” world. If such an internal observer can also observe (some aspects of) itself, it would exist from its own point of view. Moreover, it would not be able to access facts of the low-level system directly. It could not prove that an “outside world” in terms of which it is implemented actually exists. It would be perceiving some things directly and then run up against a “horizon of accessibility” beyond which it cannot see. For example, it would not be able to perceive any difference if some molecules or organelles inside its neurons move or if some neurons or their components are exchanged. In that sense, it is not identical to the neuronal system in terms of which it is implemented. From the outside, this is just another level of description; the internal observer exists only “as if”. From the inside, however, the internal observer is what exists primarily, and the outside point of view is just a theoretical possibility. My idea is that there is no bird’s eye view from which one can say that one of these points of view is real and the other is not. They are both real. Consciousness, I think, is the “being real” of that internal observer.

      Maybe I can clarify the idea a bit more by citing a lengthy passage from http://asifoscope.org/2012/11/20/an-open-letter-to-the-human-brain-project/, which was based on these ideas:

      “If we introspectively investigate our own consciousness, we will find that we are not aware of the neuronal or molecular processes and of a lot of the information processing underlying our cognitive and perceptive processes. The internal details of these processes are not introspectively accessible. There is a “horizon of accessibility” that we cannot see through “from the inside”. My point is that the nature of the “hardware” implementing the conscious mind is beyond that horizon, so the mind cannot distinguish if it runs on natural or simulated neurons. As a consequence, the nature of that “hardware” does not matter for the question of the ontological or ethical status of the resulting consciousness! A silicon-based mind therefore would have to be treated just like a biological one.

      A simple example from computer sciences might clarify my point: consider a text file stored on a hard disk. Now copy it to a USB stick or a CD-ROM. The physical representation of the file will be completely different in each case, but any application using the file cannot “see” these differences. The operating system and device drivers create a world of “emulated objects” whose properties can be described and understood independently from the physical system used to implement them. On a physical level of description, what you have are magnetic orientations of particles, small holes in the CD-ROM’s surface and so on, together with processes “reading” these features that are physically completely different from each other. From the application’s point of view, these differences are not accessible (and the usefulness of computers to a great extent comes from the possibility to create such emulated objects and “evert” them to us through a user interface). The application resides in a world of objects that are, in a way, independent of the underlying physics. It is an observer of emulated objects.

      You might say that this emulated world is just a layer of description and that “in reality” the only thing existing is the magnetized particles, electrical currents and so on and that the application we as programmers think of exists only as a description in our minds (while in reality, there is a physical machine executing machine instructions only). But a crucial aspect of consciousness seems to be its self-reflexivity. If the “application” can observe itself and is itself part of the world of emulated objects, being able to create descriptions of itself and the processes inside itself, it will be existing from its own point of view, thus acquiring an independent existence. We would then have, inside the system, an “internal observer” that is emulated by the system and that exists from its own point of view. And this observer would have a horizon of accessibility shielding the details of its implementation from its view.

      My idea is that our own consciousness is an internal reflexive observer of this kind, emulated by the neuronal processes in our brains. If you simulate these processes in a computer system, such an internal observer would be present too. The brain would be simulated, but the consciousness inside would be emulated, just like the one in a biological brain. The details of its hardware (biological vs. silicon) would be beyond its horizon of accessibility. This means that the resulting conscious mind would subjectively be just as the conscious mind of a biological brain having the simulated structure.”

      Now, I think that for the existence of an internal observer and for how the world looks from that observer’s point of view, the details of its implementation, including the speed or degree of parallelism, do not make a difference. These are properties of the implementing system that are not accessible to the emulated observer. The emulated observer cannot distinguish between such aspects of the underlying system, so they do not exist from its point of view.
      In a way our physical world may be viewed as a stack of emulations as well. On some level, there are only quarks, gluons, electrons etc., but no hadrons, no atoms and no molecules. However, it takes a big accelerator to “see” the quarks and gluons, and it takes some form of microscope to see the atomic nuclei. We can describe the world as if it consisted of atoms or molecules with certain properties, and we can use atoms and molecules to build systems that observe atoms and molecules. From the point of view of such a system, the underlying level of quarks and gluons does not exist. There might be some level of reality “below” that which we cannot probe by any possible technology. Quarks and gluons might be quasi-particles in some underlying physics. From our “internal” point of view, they are real; from an outsider’s view they would just be an “as-if” layer of description.

      “(As an aside, I’m not sure about “locked in” concerns — assuming the computer project works. Wouldn’t such a consciousness have input and output systems that allow perception and communication?)” – Yes, and the Human Brain Project also proposes such experiments, but I think that if there is consciousness in such a machine, it would be unethical, an instance of slavery. If we cannot decide if there is consciousness or not, we should apply the principle “in dubio pro reo” and assume that there is, i.e. we should not try to build such a simulation of a complete brain.

      “Both the Turing Halting Problem and Gödel’s Incompleteness Theorems address the inability to calculate certain truths” – see my other comment about hard and soft AI.

      1. “I actually think that the physical basis and the absolute speed of the events do not matter. In the thought experiments with the simulated brain, the Chinese simulation, or Leibniz’ mill, the question is always: where is ‘experience’ in that scenario?”

        True, and it’s a question we haven’t answered so far. If the physical basis does matter, then the answer may be: nowhere. It may be that these simulations simply won’t work. (There is a theme sometimes found in SF that, although you can build an identical human, perfect in every way, all you end up with is a lifeless hunk of meat.)

        What’s kind of exciting is that we will develop technologies that simulate the human brain, and then we will have part of the answer. I just hope I live long enough to see it!

        re the Zombie argument:

        My problem is our world seems to be the way it is because we experience. The argument assumes the Zombie world suddenly springs into existence, and on that count it’s harder to refute. I can imagine a world of robots emulating us, but I can’t see how they got there in the first place. To imagine a Zombie world evolving into one that matches ours without its sentient beings experiencing reality seems utterly incoherent to me.

        “If we could find out more about how the perception of consciousness,…”

        Are you familiar with Douglas Hofstadter’s work? He has a theory about mind being a kind of highly complex feedback loop. He sees self-awareness as tied into that feedback system. What you wrote here reminded me a little of some of what he says.

        “…“horizon of accessibility”…”

        I like that term. Such a horizon exists in systems today. My laptop “knows” certain things about itself, but not underlying details any more than I can feel my kidneys working.

        “Consciousness, I think, is the “being real” of that internal observer.”

        The question I have is: Why is there an internal observer at all? What gives rise to that?

        “My point is that the nature of the “hardware” implementing the conscious mind is beyond that horizon, so the mind cannot distinguish if it runs on natural or simulated neurons. As a consequence, the nature of that “hardware” does not matter for the question of the ontological or ethical status of the resulting consciousness!”

        Assuming mind can be “run” on a mechanical system, that seems clear. But I think that’s a huge (and unproven thus far) assumption.

        AI seems to be a hot topic in the SF world right now (having, I think, well exhausted most of the older themes). A common idea in parallel with AI is human mind uploading into a machine. (Obviously this assumes mechanical mind is possible.) That raises the interesting issue of duplicates of yourself.

        My point is, if mechanical minds are possible, it opens a number of very interesting questions.

        “My idea is that our own consciousness is an internal reflexive observer of this kind, emulated by the neuronal processes in our brains. If you simulate these processes in a computer system, such an internal observer would be present too.”

        Assuming mind can be “run” on a mechanical system. How does this idea of yours tie in with the idea that mind is not algorithmic? What kind of system can run such a mind? You seem to be, in this argument, saying that mind is a mechanistic process that can be replicated.

        “In a way our physical world may be viewed as a stack of emulations as well.”

        I’m not sure emulations is the word I’d choose here, but I think I take your point. There are many layers to any system, and certain things emerge only at certain levels. A clock is made of three kinds of fermion and an assortment of bosons. Or, it’s made of electrons and hadrons. It’s also made of several types of atoms. Or it’s made of compounds (molecules). It’s also made of a bunch of gears (which may be comprised of smaller pieces). Some clocks have sub-assemblies used across different models of clock. But only the top assembly tells the time.

        This still leaves open the question of why our brain (a top assembly) has a self-aware observer capable of experience. It’s possible that a sufficiently complex system of the right nature creates a mind (crudely similar to how the right system creates a time-telling clock), but we don’t know if that’s true.

        “i.e. we should not try to build such a simulation of a complete brain.”

        I see your point, but we’ll have to disagree on this one. Historically, it’s just not how humans operate — we will always open Pandora’s Box! More to the point, I think it’s too important to our own attempts to understand consciousness for us not to attempt it. I’m not sure it’s possible to understand consciousness without such an attempt. And, personally, I’ve always sided with Prometheus and Dr. Frankenstein. 🙂
