Some people believe in the coming of something they call “The Singularity”, and they expect it to be only a few decades, or even years, away. For some (who may be viewed as belonging to a larger current of ideas sometimes referred to as the “transhumanist” movement), this belief seems to have a quasi-religious significance.
Instead of explaining what the “Singularity” is myself, let me simply quote the current version of the Wikipedia article on the topic (https://en.wikipedia.org/wiki/Technological_singularity):
The technological singularity is a hypothetical event in which an upgradable intelligent agent (such as a computer running software-based artificial general intelligence) enters a ‘runaway reaction’ of self-improvement cycles, with each new and more intelligent generation appearing more and more rapidly, causing an intelligence explosion and resulting in a powerful superintelligence whose cognitive abilities could be, qualitatively, as far above humans’ as human intelligence is above ape intelligence.[…].
The “Singularity” theory states that we are going to create artificially intelligent systems. At some point we will create an AI system (call it S0) that is more intelligent than we are. This system will then create a system S1 that is even more intelligent, and so on. We humans, with our limited intelligence, would not have been able to create S1 directly. What happens after that point is unpredictable for us: there is something like an event horizon of history through which we cannot see. Like the “singularity” behind the event horizon of a black hole, there is a singularity of history behind these events. Some people expect the eradication of human beings beyond that point, or their absorption into some wonderful superintelligence, or the creation of an artificial paradise, complete with artificial afterlives. All kinds of heavenly or hellish futures can be imagined here, and according to some people, this development is inevitable.
“Sadly”, I am going to argue here that this “Singularity” is not going to materialize; we will still have to face our problems ourselves and solve them ourselves. Simply speaking, it is not possible.
In the previous article, I introduced a distinction between a) “formalistic” and b) “creativistic” theories of the mind (and I further distinguished possibility b) into b-1) dualist theories and b-2) physicalist theories).
Let us first assume that the formalists are right. This would mean that there is a fixed structure of human intelligence: there are fixed laws according to which human thinking works. There is an algorithm, and we can find it. Now, if there is such an algorithm, then there is no reason why there should not be a more powerful one. So we can build the system S0, then S0 will build S1, and so on. Bang, the Singularity is there.
The problem is: if human cognition has a fixed formal structure, everything it can develop into is already fixed from the beginning. In fact, an algorithm cannot describe its own extension in principle. It cannot even describe itself completely: a formal system cannot contain a complete reference to itself, and it cannot describe its own extension. In other words: if we are describable in terms of an unchanging algorithm, one of the things out of our reach in principle would be the creation of a system more intelligent than we are. And such a system would in turn not be capable of creating an even more intelligent system, and so on.
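The claim that a formal system cannot contain a complete reference to itself can be illustrated with a classic diagonalization sketch, in the spirit of the halting problem. The names `analyzer` and `contrarian` below are illustrative choices of mine, not from the article; the point is only that any fixed program claiming to predict the behavior of all programs, including itself, can be defeated by self-reference.

```python
# Toy diagonalization: a fixed "analyzer" that claims to predict whether
# calling func(func) returns True can always be defeated by a program
# that consults the analyzer about itself and then does the opposite.
# (Names and setup are illustrative, not from the article.)

def analyzer(func):
    """A hypothetical complete self-description of a formal system:
    it predicts whether func(func) returns True. Any fixed, total
    implementation will do for the demonstration; this one guesses."""
    return True

def contrarian(func):
    # Do the opposite of whatever the analyzer predicts about us.
    return not analyzer(func)

# The analyzer is necessarily wrong about the contrarian:
prediction = analyzer(contrarian)   # what the analyzer claims
actual = contrarian(contrarian)     # what actually happens
print(prediction == actual)         # False: the self-prediction fails
```

Whatever fixed body `analyzer` has, `contrarian` inverts its verdict, so the prediction about `contrarian` is always wrong: the system cannot completely describe its own behavior from within.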
You could try to build an artificial process of evolution in which an artificially intelligent system undergoes random mutations. If the randomness comes from a pseudo-random generator (an algorithm), the result is again an algorithm, so it cannot be self-developing. If it comes from a physical source of true randomness, the result would not be a formalistic system but a kind of b-2 architecture (a physical creativistic system).
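The first half of that point can be made concrete: “evolution” driven by a pseudo-random generator is itself just one big deterministic algorithm. In the toy sketch below (the `evolve` function and its bit-flipping “mutations” are my own illustration, not from the article), two runs with the same seed produce exactly the same sequence of mutations, so the composite system is as fixed as any other program.

```python
import random

# A PRNG-driven "evolution" is deterministic: the PRNG is an algorithm
# with a fixed state, so seeding it the same way replays the exact same
# sequence of "random" mutations every time.

def evolve(seed, steps=5):
    rng = random.Random(seed)           # pseudo-randomness: an algorithm
    genome = [0] * 8                    # toy "genome" of 8 bits
    for _ in range(steps):
        i = rng.randrange(len(genome))  # pick a "random" mutation site
        genome[i] ^= 1                  # flip one bit
    return genome

run_a = evolve(seed=42)
run_b = evolve(seed=42)
print(run_a == run_b)  # True: same seed, identical "evolution" every time
```

Only a source of true physical randomness would break this determinism, which is exactly why the result would then no longer be a purely formalistic system.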
If b-1 is correct, we cannot build an intelligent system at all. The Singularity would amount to creating an artificial deity.
If b-2 is correct, there is no fixed architecture for intelligence. Every intelligent system (not just the one that would evolve into the Singularity, but also the neighbors’ baby next door) is a “seed intelligence” that develops by turning itself into more advanced systems. However, each such advance is special, because the more advanced and sophisticated a cognitive structure is, the more specialized it must be. An artificial system S0 could not produce a super-intelligent architecture S1 because such an architecture does not exist. There is no general architecture of intelligence; there are only ever different special cognitive processes, which can be thought of as different analytical spaces.
And there is reason to think that there is a limit to how intelligent a system can become. I have explained why this is so here. To sum up the argument given there: large amounts of data can only be processed by algorithms that restrict which part of the data is combined with which other part; otherwise we would run into a combinatorial explosion. Since such algorithms restrict which data is combined, they are limited in principle in their ability to discover anything new. Discovering new structures therefore requires working with only small amounts of information at any given time. The narrowness of human attention is not a flaw of human intelligence but a basic precondition of any intelligence. Programs capable of processing large amounts of data are necessarily special; programs that are more general can only work on small amounts of information at a time, or else they would run into intractable combinatorial explosions.
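Some toy numbers show how quickly the combinatorial explosion sets in. A fully general process that considers all ways of combining its data must face counts like the ones below, which outgrow any feasible computation long before the data set is large; the specific example (choosing half of n items) is my illustration, not from the article.

```python
from math import comb

# How many ways are there to pick half of n items? This count explodes
# with n, which is why any process that considers all combinations of
# a large data set is intractable, and why practical algorithms must
# restrict which parts of the data get combined.

for n in [10, 20, 40, 80]:
    print(n, comb(n, n // 2))
# n = 10 already gives 252 combinations; n = 20 gives 184,756;
# by n = 80 the count exceeds 10**23.
```

Restricting which combinations are ever considered is what makes such algorithms efficient, and that restriction is precisely the specialization the argument above refers to.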
As a result, we see that the “Singularity” is not possible. The idea rests on the formalistic approach but lacks the insight that formal systems cannot be self-evolving in principle and that, since efficiency relies on specialization, a super-intelligence is impossible in principle. Just as there is a trade-off between explicitness and generality, there is also a trade-off, in principle, between generality and efficiency (for the same reason, the current hype around “big data” etc. is highly exaggerated). So there is no reason why an artificially intelligent system could be significantly more intelligent than a group of people using ordinary computers.
Indeed, we cannot predict what is going to happen; the future cannot be predicted. That is why nature has come up with mechanisms of adaptation: evolution and creativity. We should look at the real dangers of our world instead of fearing – or hoping for – some sci-fi miracles.
(The picture, meant to symbolize this rather ideological topic, is a picture of the sun from https://commons.wikimedia.org/wiki/File:The_sun1.jpg.)