In recent years, we have begun to see systems that are claimed to be “artificial intelligence” (AI) systems, such as speech recognition systems. Some people are now expressing fears that AI systems will eventually become more intelligent than we are and then take over. I don’t think this is a realistic scenario. I think it is based on a mistaken conception of what intelligence is and what artificial intelligence could be.
The “AI systems” that exist are collections of special algorithms, programmed to do special tasks. In that respect, these systems differ in no way from other software we use. That these systems do things that humans also do, such as “understanding” spoken language, i.e. producing more or less useful responses to spoken commands and questions, does not make them intelligent. Building a machine that does things formerly done by people does not make that machine human-like or intelligent. For example, a bread-baking machine can knead dough, something that for thousands of years was done only by people. Does that make the machine intelligent? The example might seem ridiculous, but there is no significant difference: both systems, the speech recognition system that you can ask about the weather and the kneading machine that makes bread, are doing special tasks with special-purpose methods.
The problem the AI research community has had since it emerged in the 1950s and 1960s is that it lacks a proper concept of what intelligence actually is. To tackle the question, I will first propose such a concept of intelligence, and then argue why there is a limit to how intelligent a system can become.
My proposal is that what is at the core of intelligence is the ability to generate new knowledge. Such knowledge is new with respect to the knowledge one had before, i.e. it cannot be derived from the prior knowledge. To be intelligent, a system must be creative.
By “knowledge” I mean any information that is applied in processing information. Note that I am not using the term “knowledge” here in the sense of “justified true belief” found in parts of philosophy. Instead, it is a concept similar to that of “program” or “software”.
Any system that is programmed by an outside programmer and then has a fixed and unchanging program is, in this sense, not intelligent. It contains and applies knowledge but it does not generate new knowledge. So to be intelligent in this sense, an artificial system would have to be self-programming.
In order to generate something new that could not have been derived from pre-existing knowledge, a system must take up some new information from the environment (where the “environment” might also include an information-producing mechanism inside the system, e.g. a random generator). It can then try to apply this information and use it as part of a program. Initially, the system does not know what to do with the new bit of information. It can experiment by combining it with various other pieces of information, previously existing in the system or just obtained, but typically it will not have any guideline for how to do so. If the system does use such a guideline, i.e. an existing complex algorithm that restricts what can be done with the new information, it might overlook important ways of using it. If it does not use such a guideline, it can only tentatively combine the new information with small amounts of other information.
The reason for this is that if you put too much information into the process at any time, you are running the risk of producing a combinatorial explosion. This needs some explanation.
If a computer processes large amounts of data, it will normally do so using an algorithm that tells it, at any time, which data to combine with which other data in order to calculate some intermediate result. The algorithm restricts the ways the input data are combined to produce the desired result, and you know beforehand which calculations to apply to which data at any given step. This way, very large amounts of data can be processed in a short time. Such processes obviously also happen in our brains. In the processing of visual data, for example, large amounts of primary information are processed in a very short time. This is possible because a neural network whose structure is relatively static can be viewed as the implementation of a static algorithm. Such a system is able to process large amounts of data quickly. It is, however, not intelligent in the sense that it can generate new knowledge.
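A minimal sketch of this point (the filter and its window size are my own illustrative choices, not anything specific to brains or vision): a fixed algorithm specifies exactly which data to combine at each step, so the work grows only linearly with the amount of input.

```python
# A fixed algorithm: a simple 1-D smoothing filter. At every step it
# knows exactly which data to combine (each value with its immediate
# neighbours), so processing n values takes only about n operations.
def smooth(values):
    out = []
    for i in range(len(values)):
        window = values[max(0, i - 1):i + 2]  # at most 3 neighbours
        out.append(sum(window) / len(window))
    return out

signal = [0, 0, 9, 0, 0, 9, 0, 0]
print(smooth(signal))  # [0.0, 3.0, 3.0, 3.0, 3.0, 3.0, 3.0, 0.0]
```

The speed comes precisely from the rigidity: the algorithm never considers combining a value with anything but its neighbours, which is exactly the kind of constraint discussed below.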
In a situation where we do not yet know an efficient process or algorithm, i.e. we do not yet possess the appropriate knowledge, we have to try applying different existing methods (algorithms, rules of inference, bits of knowledge) to some data and see whether the result is useful. Each step in such a process yields additional information that we can combine with other information again. We may apply some strategy, i.e. some existing algorithm or knowledge, but then we risk missing new possibilities. If we do not restrict the amount of information we are looking at, the number of possible moves grows very quickly at every step. We get a combinatorial explosion.
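A back-of-the-envelope sketch of how quickly unguided combination blows up (the numbers are illustrative assumptions, not a model of any real reasoning system): start with a handful of pieces of information, allow any pair to be combined, and count the distinct search paths.

```python
from math import comb

# Unguided search: start with n pieces of information; at each step,
# any pair of them may be combined, and each combination adds one new
# piece. The number of distinct search paths is the product of the
# choices available at every step, so it explodes combinatorially.
def search_paths(n_start, steps):
    paths, n = 1, n_start
    for _ in range(steps):
        paths *= comb(n, 2)  # pairs we could try at this step
        n += 1               # the chosen combination yields a new item
    return paths

print(search_paths(5, 6))  # 142884000 paths after only six steps
```

Five initial items and six unguided steps already give over 140 million possible paths, which is why the amount of information in play must be kept small.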
In order to avoid combinatorial explosions, we have to restrict the amount of information we use at any given time to a small amount, and there is no way around this. A creative process that has a chance of finding genuinely new knowledge can only be a slow process that handles small amounts of information at any given time. To process large amounts of information, you need an algorithm, and that algorithm always constrains what you do with the information. Essentially, an algorithm is a system of constraints on what the computational system will do. As a result, each algorithm has only a limited reach. Some algorithms enable a system to process large amounts of information, but only by restricting the processing to a small subset of what is possible.
So, creative processes are slow by nature. They cannot be sped up arbitrarily by adding more processing power. It might be possible to speed up a creative process by using a faster processor, but there is a limit to how fast computers can get, and the processors in our computers stopped getting faster some time ago. You might be able to do some of the “thinking” in parallel, by splitting it up among several processors, but if you let them communicate too much, you are practically creating a combinatorial explosion again.
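One rough way to see the cost of unrestricted communication (a simplified sketch, counting only channels, not message content): if every parallel “thinker” may talk to every other, the number of channels alone grows quadratically, before any combining of the exchanged information even begins.

```python
from math import comb

# If p parallel processes may all communicate freely, the number of
# pairwise channels grows quadratically with p; combining information
# across those channels then reopens the combinatorial search problem.
for p in (2, 4, 8, 16, 32):
    print(p, comb(p, 2))  # 2->1, 4->6, 8->28, 16->120, 32->496
```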
Moreover, there is no way of guaranteeing the correctness of the results. Creative processes are fallible by nature. If we had a way of knowing in advance what is correct, we could simply derive new knowledge from what we already know, but new knowledge is, by definition, not derivable from existing knowledge. If you restrict information processing to operations that are guaranteed to be correct, you restrict its ability to find new things. And if you create a system consisting of a “team” of interconnected processes, they might misunderstand each other, and there is no way of preventing that, except by making them part of a fixed algorithm, i.e. by creating a fixed protocol of communication that restricts what these processes can do, and that would again destroy the creativity.
It is not clear how intelligent a system can become, or where humans or teams of humans stand with respect to the possible maximum intelligence. However, our brains are able to process large amounts of information simultaneously, so the processing power of our networks of neurons is not the limiting factor here. The limit of “seven items at a time, plus or minus two” described in cognitive psychology is not a limit of the neuronal “technology” our brains use; it seems to be a limit that exists as a matter of principle. Put more pieces into the game at any given time, and the number of ways you can combine them explodes. So maybe artificially intelligent systems, should they ever be created, might think faster than we do, but they would not really be much more intelligent.
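The growth behind this can be shown in a few lines (a back-of-the-envelope sketch of orderings of k items, not a claim about neural mechanisms or about why the psychological limit sits where it does):

```python
from math import factorial

# The number of ways to order k pieces of information grows
# factorially: every extra item multiplies the count by k, so the
# space of combinations explodes with each item added to the game.
for k in (5, 7, 9, 12):
    print(k, factorial(k))  # 5->120, 7->5040, 9->362880, 12->479001600
```

Going from seven items to nine already multiplies the orderings by 72, which suggests why adding raw speed alone cannot buy much more working material per step.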
(The picture is a modified version of http://commons.wikimedia.org/wiki/File:Maching_Reading_Robot_Auto-Text_to_Knowledge.jpg (I have removed the text).)