Is “The Singularity” Near?

[Image: the sun (see the note at the end of the post)]

Some people believe in the coming of something they call “The Singularity”, and they believe it is only a few decades or even years away. For some (who may be viewed as belonging to a larger current of ideas sometimes referred to as the “transhumanist” movement), this seems to have a quasi-religious significance.

Instead of explaining what the “Singularity” is, let me just quote the current version of the Wikipedia article on the topic (https://en.wikipedia.org/wiki/Technological_singularity):

The technological singularity is a hypothetical event in which an upgradable intelligent agent (such as a computer running software-based artificial general intelligence) enters a ‘runaway reaction’ of self-improvement cycles, with each new and more intelligent generation appearing more and more rapidly, causing an intelligence explosion and resulting in a powerful superintelligence whose cognitive abilities could be, qualitatively, as far above humans’ as human intelligence is above ape intelligence.[…].

The “Singularity” theory states that we are going to create artificially intelligent systems. At some point we will create an AI system (let us call it S0) that is more intelligent than we are. This system is then going to create a system S1 that is even more intelligent, and so on. We humans, with our limited intelligence, would not have been able to create S1 directly. What happens then is unpredictable for us. There is something like an event horizon of history through which we cannot see. Like the “singularity” behind the event horizon of a black hole, there is a singularity of history behind these events. Some people expect the eradication of human beings beyond that point, or their absorption into some wonderful super-intelligence, or the creation of an artificial paradise, complete with artificial afterlives. All kinds of heavenly or hellish futures can be imagined here, and according to some people, this development is inevitable.

“Sadly”, I am going to argue here that this “Singularity” is not going to materialize. We will still have to face our problems ourselves and solve them ourselves. Simply speaking, it is not possible.

In the previous article, I introduced a distinction between a) “formalistic” and b) “creativistic” theories of the mind (and I further distinguished possibility b into b-1, dualist theories, and b-2, physicalist theories).

Let us first assume that the formalists are right. This would mean that there is a fixed structure of human intelligence: there are fixed laws according to which human thinking works. There is an algorithm, and we can find it. Now, if there is such an algorithm, then there is no reason why there should not be a more powerful one. So, we can build the system S0, then S0 will build S1, and so on. Bang, the Singularity is there.

The problem is: if human cognition has a fixed formal structure, everything it can develop into is already fixed from the beginning. In fact, an algorithm cannot describe its own extension, in principle. It cannot even describe itself completely. A formal system cannot contain a complete reference to itself, and it cannot describe its own extension. In other words: if we are describable in terms of an unchanging algorithm, one of the things that is out of our reach in principle would be the creation of a system that is more intelligent than we are. And such a system would not be capable of creating an even more intelligent system, etc.
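
This self-reference limit is essentially the diagonalization argument familiar from computability theory. As a rough illustration only (the enumerated functions below are toy stand-ins I picked for the sketch, not a claim about any real system), here is a minimal Python version:

```python
# A toy sketch of diagonalization: any fixed, enumerable family of
# functions can be outgrown by a function built mechanically from the
# list itself, so the list was never complete.

expressible = [          # imagine this enumerating everything a fixed
    lambda n: 0,         # formal system can express (here: three toys)
    lambda n: n,
    lambda n: n * n,
]

def diagonal(n):
    """Differs from the n-th listed function at input n."""
    return expressible[n](n) + 1

# diagonal() disagrees with every listed function somewhere, yet it was
# constructed purely from the list; the enumeration cannot contain it.
for i in range(len(expressible)):
    assert diagonal(i) != expressible[i](i)
print("the diagonal function escapes every entry of the enumeration")
```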

You could try to build an artificial process of evolution in which an artificially intelligent system undergoes random mutations. If the randomness comes from a pseudo-random generator (an algorithm), the result is an algorithm, so it cannot be self-developing. If it comes from some physical source of true randomness, the result would not be a formalistic system but a kind of b-2 architecture (a physical creativistic system).
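
To make the first of these two cases concrete: an “evolutionary” process whose randomness comes from a seeded pseudo-random generator is just one big deterministic algorithm. A minimal sketch (the mutation scheme here is invented purely for illustration):

```python
import random

def evolve(seed, generations=5):
    """Toy 'evolution': flip random bits of a genome for a few rounds."""
    rng = random.Random(seed)              # pseudo-randomness is algorithmic
    genome = [0] * 8
    for _ in range(generations):
        site = rng.randrange(len(genome))  # "random" mutation site
        genome[site] ^= 1                  # flip one bit
    return genome

# Same seed, same "evolution", every time: the whole process is a
# deterministic function of its inputs, i.e. still a formal system.
assert evolve(42) == evolve(42)
print(evolve(42))
```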

If b-1 is correct, we cannot build an intelligent system. The singularity would amount to creating an artificial deity.

If b-2 is correct, there is no fixed architecture for intelligence. Every intelligent system (not just the one that would evolve into the singularity, but also the neighbor’s baby next door) is a “seed intelligence” that develops by turning itself into more advanced systems. However, all the advances are special, because the more advanced and sophisticated a cognitive structure is, the more specialized it must be. The artificial system S0 could not produce a super-intelligent architecture S1 because such an architecture does not exist. There is no general architecture of intelligence. There are only ever different special cognitive processes, which can be thought of as different analytical spaces.

And there is reason to think that there is a limit to how intelligent a system can become. I have explained why this is so here. To sum up the argument given there: basically, large amounts of data can only be processed by algorithms which restrict which part of the data is combined with which other part. Otherwise we would run into a combinatorial explosion. Since such algorithms restrict which data is combined, they are limited in principle in their ability to discover anything new. Discovering new structures therefore requires working with only small amounts of information at any given time. The narrowness of human attention is not a flaw of human intelligence but a basic precondition of any intelligence. Programs capable of processing large amounts of data are necessarily special. Programs that are more general can only work on small amounts of information at a time, or else they would run into intractable combinatorial explosions.
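
To put rough numbers on the explosion (a simple back-of-the-envelope computation of my own, not part of the argument in the linked article):

```python
from math import comb

# The number of ways to combine items grows explosively with the data
# size n, so any tractable program must restrict which combinations it
# ever considers.
for n in (10, 20, 40, 80):
    print(n, comb(n, n // 2), 2 ** n)   # half-sized subsets, all subsets

# Already for n = 80 there are roughly 1e23 half-sized subsets and 1e24
# subsets in total; hence the need for narrow, specialized attention.
```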

As a result, we see that the “Singularity” is not possible. The idea rests on the formalistic approach but lacks the understanding that formal systems cannot be self-evolving in principle and that, since efficiency relies on specialization, a super-intelligence is impossible in principle. Just as there is a trade-off between explicitness and generality, there is also a trade-off between generality and efficiency, in principle (for the same reason, the current hype around “big data” etc. is highly exaggerated). So there is no reason why an artificially intelligent system should be significantly more intelligent than a group of people using normal computers.

Indeed, we cannot predict what is going to happen. The future cannot be predicted. As a result, nature has come up with mechanisms of adaptation: evolution and creativity. We should look at the real dangers of our world instead of fearing – or hoping for – some Sci-Fi miracles.

(The picture, meant to symbolize this rather ideological topic, is a picture of the sun from https://commons.wikimedia.org/wiki/File:The_sun1.jpg).


7 comments

  1. Reblogged this on Philosophical Excavations and commented:

    The seed for a second “field season” on this blog, investigating futuristic ideologies and philosophies. Following the archaeological metaphor of this blog, each current of thinking explored here will form another “field season” of philosophical digging. Field season 1 (on which I am currently concentrating) is about the nationalist, racist etc. (mainly German-language) philosophy of the 1920s and adjacent times that paved the way for National Socialism. Field season 2, opened with this article, will be about transhumanism, singularitarianism and other futurist and Sci-Fi-ish directions of thought currently flourishing in some corners.

  2. I am writing this at the risk of ridicule (my family hasn’t ridiculed me, but I know they think I’m way off with this one), but – this is why I think self-driving cars aren’t going to be as great as people think.

    Facetious response, I know, but – I firmly believe in the mystery of thinking, and of people thinking in particular. Not going to be solved, just getting deeper and deeper and more and more interesting.

    1. Hi Claudia,
      Self-driving cars are driven by algorithms, and algorithms are limited. There are situations where these algorithms will do the wrong thing, so we are going to see some accidents. However, the rate of accidents might still be lower than in human-driven cars. We will see.
      Relatively simple, small-brained animals, which maybe can still be modelled by algorithms, have long been navigating cluttered environments containing other animals, among other things. Examples are animals and birds living in forests, and fish and other animals living in environments like coral reefs. I don’t know how often they bump into each other, but they seem to be able to do this navigation. That’s why I suspect this task requires only a limited amount of intelligence or creativity. However, it may be that people expect too much here.
      To me, the solution to the transport problems of the future is not the self-driving car but a redesign of cities and towns into environments in which cars are not really necessary. You can design a city in such a way that most things you need (shops, your workplace, gardens, parks, playgrounds etc.) are within walking distance. The availability of cheap energy and cheap transportation has driven the development of our settlements in a direction where different functions are separated. You have a suburb where you have houses with gardens, but no schools, no kindergartens and no workplaces. Then you have city centers with offices, etc. So you need transportation. It is, however, possible to design cities in a different way, so that you don’t need a car.
      I have lived in such a place in central Hamburg. The school and kindergarten, shops, parks, playgrounds etc. were within walking distance or only a few bus stops away. With train stations, bus stops and a network of bike lanes, I actually did not need a car at all while staying there.

      1. Yes, I agree. My own suburb is walkable in a lot of ways, or requires driving, but only distances of about 3–4 miles. My son lives in Washington DC and they take the Metro everywhere. So I agree with you about a multi-function living/working environment. And I hear that younger people are wanting this kind of thing more and more, and not to live out in far-flung suburbs with nothing nearby. I hope for it.

  3. I agree that the singularity as commonly imagined is unlikely, but for different reasons. Mainly, most singularitarians look at relatively recent trends (such as Moore’s “Law”) and project them infinitely forward, not looking at the resources that enabled those trends, and the limitations of those resources. I also agree that there are limitations on how intelligent a system can become, making godlike AI deities a fantasy, although I’d be cautious in assuming humans are at that limit.

    On the other hand, the assertion that we will never understand intelligence well enough to recreate it strikes me as overly pessimistic. It seems like formal systems are ones we can perfectly model, a standard that no actual physical system can meet. (Digital physical computational systems are explicitly designed to hide their variances, but the idiosyncrasies of the physical substrate often still have to be taken into account for things like floating point calculations or overall process design.) But perfect modelling isn’t required, only effective modeling. Over time, the number of systems we can effectively model seems to be steadily increasing.

    1. “On the other hand, the assertion that we will never understand intelligence well enough to recreate it strikes me as overly pessimistic.”
      That is not what I actually think. I think intelligence can be recreated (and it might turn out to be surprisingly simple). However, there is no fixed architecture of intelligence. An intelligent system would be able to develop beyond the scope of any formal or exact description of it, i.e. it could change. There are no fixed laws of intelligence. In that sense we could not understand intelligence: any single process in such a system can be understood (afterwards), but there is no all-encompassing theory of how an intelligent system works. But a “seed” system could be built.

      1. Oops, sorry Nannus, I misunderstood your position. I agree that we wouldn’t be able to predict the actions of a full intelligence, at least not without knowing its initial state (its template or seed state) and every input it took in since then, which would likely never be practical.
