A New Paradigm for Artificial Intelligence

[Image: Banksy, 28 October installment from “Better Out Than In”, New York City residency]

Consider an Artificial Intelligence researcher, let’s call her Alice, programming an AI system, let’s call it Bob, the robot*. Alice has written a wonderful program so that Bob can carry out instructions she gives him (it? Well, let us say “him” for convenience) in English and answer her questions.

However, again and again there are situations in which Bob simply does not understand Alice’s instructions or questions. If she asks him whether she will need an umbrella that day, he can find out about the weather at the place where their lab is and answer something like “yes, we will get rain.” If she asks for her umbrella, he will bring it. But there are always situations in which he simply does not know what to do. When she asks him for tea, he does not know what that is. When she tries to show or explain it to him, he does not understand her. She has to sit down at her computer, where she has a programming interface for Bob. She can change Bob’s programs and download the new version into him so that he gains a new skill. He only knows the things she has programmed him to do.

Bob’s program is limited at any given time. His knowledge has a certain reach: a set of situations, commands, and questions for which he will produce fitting reactions, but the world always has more situations that the program does not provide for. There are always things he does not perceive and things he does not know how to do. Bob’s knowledge is represented in some logical knowledge representation language; she has also tried her luck with simulated neural networks. Alice keeps patching up his knowledge to fill the gaps and extend its reach, but it always remains incomplete.

Bob is always limited. One day, however, Alice notices that she has been looking at Bob in the wrong way. Her own view of him is limited, and that limited view is what limits him.

She has been looking at him as a robot with cameras and microphones and hands and legs and a computer, his “brain”, but in her description of him she leaves out the programming interface. Each description of him consists of a description of the hardware (which does not change) and of the software and “knowledge base” she has programmed. Each such description is limited and has a limited reach.

But if she adds the programming interface to the description, she is suddenly no longer able to describe him completely. She can change his program. The Bob with the programming interface is no longer fixed. He can be changed, from Bob 1.0 to Bob 1.1 and so on. And if she creates a complete description of one of these versions, say Bob 1.9, she can change the program in such a way that the description no longer fits. The Bob without the programming interface, e.g. Bob 1.9, is limited and fixed. The Bob including the programming interface, on the other hand, is an open system for which a complete description is not possible. And although each version can be completely described, and the process leading from one version to the next can be completely described (it is just a sequence of keystrokes on her keyboard), there is no general Bob-theory that describes all possible versions.

She has programmed some knowledge representation language into him, but that turned out to be limited, and she has repeatedly extended it. Now she notices that a universal knowledge representation language has been present inside Bob all along: the general purpose programming language she used to program him. This language is extensible, and she could use it to program any procedure and any new feature of the knowledge representation language that she can imagine. This language is universal.

The Bob with the programming interface has no limits. If she wants him to do something he does not know how to do, she can enter a sequence of commands to change his program. She had always thought that in order to make Bob intelligent, she would have to find the one special intelligent structure, but all she ever did was add patches. She could sometimes integrate several special patches into a more general one, but she never arrived at the intelligent structure she was dreaming of. Bob remained special in each of his versions.

Now she understands: there is no general intelligent structure. Every fixed structure she can program and completely describe will be a special structure with a limited reach. To become intelligent, Bob must be changeable, he must be programmable. He becomes universal by becoming able to change. Bob with his programming interface is universal in this sense. For each task Alice wants to “teach” him, there is a sequence of commands she can use to change his program, so that he can become able to perform that task. In that sense, Bob is an open system.
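
To make this concrete, here is a minimal sketch (in Python; all names, such as skills and program, are illustrative inventions for this article, not Alice’s actual code): Bob at any moment is a fixed table of skills, and the programming interface is the one method that lets Alice install a skill the current version lacks.

    class Bob:
        def __init__(self):
            self.version = "1.0"
            self.skills = {}  # command -> handler; a snapshot of this dict is one fixed "Bob 1.x"

        def handle(self, command, *args):
            handler = self.skills.get(command)
            if handler is None:
                return "I do not know what to do."  # the reach of the current version ends here
            return handler(*args)

        # the programming interface: the channel through which Alice changes the program
        def program(self, command, handler, new_version):
            self.skills[command] = handler
            self.version = new_version

    bob = Bob()
    print(bob.handle("make tea"))  # "I do not know what to do."
    bob.program("make tea", lambda: "Boiling water, steeping leaves.", "1.1")
    print(bob.handle("make tea"))  # the reach of Bob 1.1 now covers tea

Any complete description of the skills table describes only one version; it is the program method, the open end of the system, that no such description captures.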

However, he is not really intelligent, because she has to explicitly program everything into him. The important thing to understand is that in order to become intelligent, Bob must be changeable. There is no fixed structure, no algorithm or fixed system of modules, that makes him intelligent. Each fixed system contains only a limited amount of information from which Bob can derive some consequences, so it has a limited reach. In reprogramming, new information is added to the system, opening up new parts of the world, i.e. extending his reach.

In order for Bob to become intelligent, his programming interface must somehow be integrated into him, so that he becomes able to reprogram himself. Some part of him will do that reprogramming, producing new information or taking information from the environment and integrating it into his current structure. What he does in new situations will then depend on that new information, thus changing his behavior. The part of the program that does the reprogramming will itself be reprogrammed, learning new things.
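
A correspondingly minimal, and equally hypothetical, sketch of this step: the programming interface becomes one of Bob’s own skills, so that ordinary input can change his program, and the learning skill itself sits in the same table and can be overwritten. (Executing arbitrary text with exec is, of course, exactly the kind of risk discussed in the next paragraph.)

    class SelfProgrammingBob:
        def __init__(self):
            # the reprogramming part is itself an entry in the skill table
            self.skills = {"learn": self._learn}

        def handle(self, command, *args):
            handler = self.skills.get(command)
            if handler is None:
                return "I do not know what to do."
            return handler(*args)

        def _learn(self, name, source):
            # take information from the environment and integrate it into the current structure
            namespace = {}
            exec(source, namespace)
            self.skills[name] = namespace[name]
            return "Learned " + name

    bob = SelfProgrammingBob()
    bob.handle("learn", "make_tea", "def make_tea(): return 'Boiling water, steeping leaves.'")
    print(bob.handle("make_tea"))
    # even the learner can be replaced through the same channel, changing how Bob changes:
    bob.handle("learn", "learn", "def learn(name, source): return 'a different way of learning'")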

There is some risk in this. Bob might make mistakes and do and “think” nonsense. He might correct those mistakes and improve himself, but there is also the possibility of him becoming mad, in a sense. A system that is unable to make mistakes must be restricted so much that it is again a limited system with a closed reach. Alice remembers that she once learned that any program written to find mistakes in programs will be incomplete (a consequence of the undecidability of the halting problem), so Bob cannot be error-free in principle. That can be proven. Nobody is perfect, not even a robot.
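
The proof idea can itself be sketched in a few lines. Assume, hypothetically, a perfect checker is_faulty that always answers correctly whether a given program text misbehaves; the following purely illustrative construction defeats it by doing the opposite of whatever the checker predicts about it, which is essentially the diagonal argument behind the halting problem:

    def is_faulty(program_text):
        # hypothetical perfect mistake-finder; the point is that no such total,
        # always-correct function can exist
        raise NotImplementedError

    CONTRARY = """
    # a program that asks the checker about its own text and then does the opposite
    if is_faulty(CONTRARY):
        pass                                   # behave flawlessly although the verdict was "faulty"
    else:
        raise RuntimeError("misbehaving on purpose although the verdict was 'fine'")
    """

    # Whatever is_faulty answers about CONTRARY, that answer is wrong. So a complete,
    # always-correct error checker for Bob is impossible in principle.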

Alice now understands that she and her AI colleagues have misunderstood the task all along. They were searching for the structure of intelligence, but there is no such structure. The self-reprogramming Bob might move out of the scope of applicability of any description of him. He would be an artificial system for which a complete description is not possible, because he can break out of any such description. You can show him the current description and he can do otherwise. He would not be Bob 2.0 or Bob 3.6, or only for a moment and never completely. It would not make sense to assign any fixed version number to him. It would be impossible to understand him completely, although in hindsight each process in his computer could be understood. Unlike Bob 1.5 and Bob 3.6, he would have a kind of biography, a history of how he came to be what he is at a given moment, because he no longer has a fixed program.

Bob would develop new patches of knowledge himself, sometimes integrating several more special ones into new, more general ones. His knowledge, as a result, would evolve in time. The more primitive his initial knowledge is, the longer it would take him to solve a task. So for practical reasons one would have to start with some initial structure. For any such structure it might be possible to start with something even simpler and let Bob learn, but he would need much more time if that version 0.1 were too primitive. So Alice understands that her new task is not to find “the structure of intelligence”, but an initial structure that can evolve, and do so in a reasonable time. She has to turn from a programmer into a teacher. During that process of learning or development, this initial structure might disappear completely, being replaced by other structures. So there is no general structure of intelligence.

Alice now understands that the mistake of AI researchers all along was to assume that there must be a fixed structure, so that the task was to find that structure. What they found were a lot of clever special solutions, able to play chess, answer questions about the weather, or win a quiz show, but they never managed to build something intelligent, because they started from a wrong basic assumption. Their own theory had a limited reach. Being good scientists, they thought that everything had to work according to fixed laws, everything including intelligence. That was the mistake. Intelligent systems change their laws, and that ability, let us call it “creativity”, is at the core of intelligence. That is why the AI researchers never found it. That is why the analytic philosophers behind them were going round in circles, without ever arriving anywhere.

For the time being, this researcher Alice is just as hypothetical as that robot Bob. The existence of human beings shows the possibility of intelligence and thus hints at the possibility of artificial intelligent systems, so something like Bob might become reality. Of course, we might also ask whether we want such intelligent systems to be around, systems that cannot be described completely and that may develop in unpredictable ways. Such systems might turn out to be useless because they will go their own ways. Will it be possible to steer them? We might ask whether such systems might develop properties that turn them into ethical subjects, so that using them would be comparable to slavery. As in other areas of technology, new technical possibilities come along with new kinds of ethical questions, and all fixed ethical theories turn out to be incomplete, so we will have to think about those questions. However, doing so is beyond the reach of this particular article.

Acknowledgements: The ideas presented in this article are partially derived from, or inspired by, the works of and discussions with Kurt Ammon.


*  For general information on “Alice” and “Bob”, look here. “Alice” and “Bob” were introduced as placeholder names instead of “A” and “B” in a cryptography paper in 1978. They are normally used to denote the sender and receiver in a communication. I am taking some liberty with these names here in comparison to their traditional uses in information theory, cryptography, or physics papers, but Alice, as the programmer, is a sender of information and Bob, being programmed, is the receiver, if you take Bob’s programming interface as a communication channel (and the crucial point here is to integrate that (meta-)communication channel into Bob’s normal communication with his environment).

(The picture, showing a work of the famous street artist Banksy, whom I admire, is from http://commons.wikimedia.org/wiki/File:Banksy_28_October_installment_from_%22Better_Out_Than_In%22_New_York_City_residency.jpg.)
