A Definition of Intelligence

[Image: Kind mit Bauklötzen, 1853 (a child playing with building blocks)]

What is general intelligence, whether artificial or natural? I would suggest the following definition:

General Intelligence is the ability of a system to generate arbitrary new knowledge. Such a system generates new knowledge by interacting with an environment; in this interaction, information from the environment is integrated into the system.

This ability to create new knowledge can also be viewed as general creativity.

If a system is able to produce new knowledge, then its informational description (as a piece of software, so to speak) changes, since it is being extended (this may be thought of as (re-)programming). The extensions can be viewed as new programs added to the system or as modifications of existing programs (algorithms). Any aspect of such a system can be modified, so it does not have any unchangeable fixed structure (although it has a definite structure at any specific time and starts from some “developmental core”). It can develop new knowledge about how to process and how to represent information.
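
To make this a little more concrete, here is a minimal toy sketch (in Python; it is my own illustration, not a model of any real system, and the class name, the learn() method and the example procedures are all made up). The “system” is just a collection of named procedures, and “integrating information from the environment” simply means adding new procedures at run time, so that no listing of its initial code stays a complete description of it:

    # Toy sketch: a system whose "informational description" grows at run time.
    # Everything here (class name, the learn() method) is purely illustrative.

    class OpenSystem:
        def __init__(self):
            # The initial "developmental core": a small set of programs.
            self.programs = {"echo": lambda x: x}

        def learn(self, name, procedure):
            # Information from the environment is integrated into the system:
            # the new procedure becomes part of the system itself.
            self.programs[name] = procedure

        def run(self, name, x):
            return self.programs[name](x)

    system = OpenSystem()
    print(system.run("echo", "hello"))   # behaviour covered by the initial description

    # Interaction with the environment supplies a new program; any description
    # of the system written down before this point is now incomplete.
    system.learn("shout", lambda x: x.upper() + "!")
    print(system.run("shout", "hello"))

Of course, a real generally intelligent system would have to come up with such extensions itself; the sketch only shows the structural point that information entering from outside becomes part of the system’s own “program”.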

So such a system would have a number of strange properties (strange from the point of view of science):

  • It could not be described or understood completely, since for any finite description of it in terms of algorithms or formal theories, new structures could arise in the system that are not covered by that previous description.
  • It would, therefore, not be describable as a system developing according to fixed laws. The system could develop new ways of processing information (new “cognitive skills” or “ways of thinking”). There could be no “science” of such a system in the sense of a complete description in terms of a fixed set of laws.
  • A discipline dealing with such systems would turn out to be more akin to the humanities, where scholars are always dealing with incomplete descriptions (and the reason why the humanities are not reducible to science would turn out to be the changeability of the human mind).
  • Since the system develops by interaction with some “outside” world, it would need an interface to (some part of) reality. In the case of humans, this is the human body, including the sense organs. The kind of intelligence that develops in an artificial system would partially depend on what kind of interface to an outside world it has and what this world consists of.
  • In the case of artificial generally intelligent systems, it would probably not be possible to control their development. This opens up ethical and political questions that will have to be dealt with in separate articles. For example, if such a system developed autonomy as a subject, controlling it might be unethical (the question here is whether any such system would develop into something that has to be given a human-like status, so that “using” or “controlling” it would amount to slavery). However, such questions are beyond the scope of this article.

The development of a generally intelligent system has to start with some “developmental core”. If the human mind starts its development from a pre-formed core of innate knowledge, the question arises what minimum set of structures has to be contained in such a core. The hypothesis has been put forward that such a minimal core does, in a sense, not exist. [1] I would like to formulate this hypothesis in the following form:

  • The human mind (or any artificial general intelligence) starts its development from an initial core that can be viewed as an initial set of pieces of knowledge. This includes initial knowledge that can be used to extend the knowledge of the system.
  • In order to extend its knowledge, the system interacts with its environment. As a result of this interaction, information from the environment is integrated into the system, i.e. new knowledge is generated.
  • If the initial core contains more sophisticated structures, its information processing can be more efficient. If it contains less sophisticated structures, its information processing will be less efficient, i.e. there is a principle of economy here (a toy sketch after this list illustrates this trade-off).[2]
  • Any structure that is contained in the initial core can be generated or found by means of a learning process by a less sophisticated core not containing that structure. So it should be possible to start with a more primitive core that would, however, be less efficient.
  • With regard to the human mind or its evolutionary precursors: any structure in the core that could be discovered by genetic (i.e. biological) evolution could also be discovered by a learning process starting with a more primitive core. However, below a certain level of preformed structures, the efficiency of the core would be so low that the available time would be too short and the animal would effectively be mentally handicapped.[3] So there should be a minimum core we can expect to exist that is genetically determined.
  • In an artificial system, any structure that has to be programmed into the core in advance could also be discovered in a learning process starting with a more primitive core. However, below some limit, the efficiency of such a core would be so low that the resources available (like time) would no longer be sufficient.
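
The trade-off described in the last few points can be illustrated with a small toy example (my own, in Python, with purely hypothetical numbers; it is not meant to model real brains or Ammon’s work). Two “cores” have to acquire the same regularity from an environment; one already contains a pre-formed structure (here, a built-in procedure for fitting a linear rule), the other is more primitive and has to find the rule by trial and error. Both end up with the same knowledge, but the primitive core needs far more interactions:

    # Toy illustration of the "economy principle": both cores reach the same
    # rule, but the pre-structured core needs far fewer interactions.
    # All names and numbers are hypothetical.

    def environment(x):
        return 3 * x + 2                  # the regularity hidden in the world

    # Core A: contains the pre-formed structure "the rule is linear" plus a
    # procedure for fitting it. Two observations suffice.
    b = environment(0)
    a = environment(1) - b
    rule_a = lambda x: a * x + b
    interactions_a = 2

    # Core B: more primitive; it lacks the fitting procedure and has to search
    # by trial and error, testing each guess against fresh observations.
    interactions_b = 0
    rule_b = None
    for guess_a in range(20):
        for guess_b in range(20):
            interactions_b += 3           # each guess is checked on three probes
            if all(guess_a * x + guess_b == environment(x) for x in (0, 1, 2)):
                rule_b = lambda x, a=guess_a, b=guess_b: a * x + b
                break
        if rule_b:
            break

    print(rule_a(10) == rule_b(10))       # True: the same knowledge in the end
    print(interactions_a, "vs", interactions_b, "interactions with the environment")

With the numbers chosen here, the pre-structured core is done after 2 interactions and the primitive one only after almost 200; below some level of pre-formed structure, the cost in time (or, in evolution, in risk) becomes prohibitive, which is the point of the last two hypotheses above.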

These hypotheses lead to the following ideas about the evolution of general intelligence: The brains of animals have some fixed, innate structures and some learnt parts that are due to plasticity. In some simple animals, everything is fixed and there is no learning.

In some animals, the learned part of the structure may be viewed as forming a thin layer around the genetically determined part. It provides for fine-tuning, making the ecological niche a bit wider and the adaptation to it more robust, since some variation of the environment can then be tolerated. However, there is a possibility for evolution to “thicken” this learned layer of the structure. The result is an animal that is less specialized and can adapt to a wider range of conditions without changing its genes. This is likely to happen in animals that have highly developed sense organs and some kind of manipulator organs that allow for a broad range of interaction with the environment (think of the hands of primates).

In the further course of evolution, the genetically determined behavioral core might even shrink, while the learned parts of behavior become dominant. This seems to be the path evolution has taken in higher primates and eventually humans.

So I think that this development towards humans is characterized not by the development of a highly sophisticated cognitive architecture, a complex system of different modules for different types of thinking, but by a reduction of fixed structures. Instead of the development of a highly complex architecture of cognition, I think what happened is that the unspecialized parts of the brain expanded, increasing processing power and increasing the number and connectivity of neurons that could be combined into all kinds of different structures.

If the principle holds that everything that can be found by evolution can in principle also be found by learning, then it is indeed implausible that the genetically determined core contains sophisticated structures. It will only contain structures that are too costly or time-consuming to learn and that are necessary or highly advantageous for survival, without restricting general intelligence. Specialized sophisticated structures would even be reduced.

So once evolution goes this way, the organism would stop adapting to challenges by means of genetic evolution. If the plasticity, i.e. the capacity of such brain areas for learning, is high enough to develop new cognitive structures by learning, there would be no evolutionary pressure to develop such structures by means of genetic evolution, given that not having such structures does not endanger the animal. On the other hand, the development of specialized structures could reduce the plasticity, i.e. the potential for fast adaptation. “Hard-wired”, genetically pre-determined structures would, however, develop (or be retained from earlier ancestral species) in cases where these structures are very useful and where developing such structures through a process of learning would take too long (see the example of the processing of primary visual information given in footnote [3]).

In other instances, we clearly have learnt structures. It is possible that the “motor program” for walking is pre-formed and that the child does not have to learn to walk; instead, the structure implementing this program may simply mature during a certain phase of development. However, it is also possible that the child actually has to learn most of this process. Consider the example of bike riding. Bicycles did not exist before the 19th century, so bike riding is clearly a skill that we learn; there are no genes for this skill in our genome. Likewise, we have to learn other motor skills, and we learn a lot of our perception skills (i.e. recognizing specific kinds of objects). We can recognize a house or a car, and that perceptive skill is obviously learnt, not genetically preformed.

In a similar way, we might learn new mental skills. This means that the different kinds of thought processes we employ in thinking could, to a very large extent, be learnt and acquired rather than genetically inherited. This would mean that, just like the neural networks that implement a skill like bike riding or recognizing a car, those we use to process language or to make certain types of logical inferences, for example, are learnt. If this is so, a lot of the blueprints of the architecture of our minds are not in our genes but in our cultures, and the part that is genetically determined is, to a large extent, just a developmental core, a take-off point or scaffold for cognitive development.

(The picture is from https://commons.wikimedia.org/wiki/File:Kind_mit_Baukl%C3%B6tzen_1853.jpg.)

___________________

[1] Ammon, Kurt: “The Automatic Development of Concepts and Methods”, Doctoral Dissertation, University of Hamburg, 1987. Here, Ammon puts forward the hypothesis that the core of creative systems is empty.

[2] See Ammon, 1987.

[3] An example of a structure that could in principle be found by a learning process but is probably pre-formed in the genetically determined initial structure of the brain is our capacity for two- and three-dimensional vision. There might be a need to fine-tune it in learning processes, but a lot of the low-level visual processing is already in place from the beginning (or as soon as the corresponding brain structures have matured). These structures could be discovered in a learning process by interacting with an environment that is regular enough, but this would take a long time and would be dangerous. From a theoretical point of view, neither the processing of sense data into two-dimensional images nor their processing into three-dimensional representations has to be there initially, and – against Kant’s conviction that a “form of intuition” (“Anschauungsform”) of space is a necessary prerequisite of learning – it is not necessary that a notion of space is already there; but building up such elementary structures from scratch would take too much time.

Comments

  2. A very interesting post, nannus, but I would like to pick up on a couple of points.

    You say that an intelligent system necessarily has to interact with its environment to demonstrate its intelligence. While practically, yes, this is the case for all animals, I’m not sure that you can assert it in general. Imagine a prisoner held in a cell in solitary confinement. That prisoner could still be creative, perhaps imagining pictures, words or songs in the mind.

    Secondly, you say that an intelligent system cannot be described by fixed laws. While the high-level operation of the mind may be quite malleable, rewiring neural pathways, there must surely be an underlying set of laws – the laws of physics at the most basic level, and probably some simple laws that describe how neurons and networks operate.

    Ultimately, what you are criticizing is the type of expert system that can play chess or answer simple questions based on fixed algorithms. What about neural networks that are designed to evolve towards a solution?

    1. Keeping a baby in solitary confinement is of course an experiment we cannot perform, but I doubt that such a baby would develop very far. It would be restricted to a world that is static except for what it is doing itself.
      A prisoner who has been exposed to the world before could indeed imagine things or invent things. New information could be generated by random processes in the brain or body of the prisoner. (In practice, he would probably still be connected to the outside world in some way.) The knowledge he already has might also be enough to keep him going for decades. The philosophical poems of Tommaso Campanella, for example, are remarkable, and he wrote them during his decades-long imprisonment, but he was able to talk to the prison warden and have occasional visitors (including the people who smuggled those poems out and eventually published them). In the very long run, you would need inspiration from the outside (where the “outside” in this sense might be random processes inside the brain itself, e.g. while dreaming).
      I think an intelligent system is a creative system. A creative system could develop beyond the bounds of any formal description we could give of it. If such a system is restricted to its own output, I think it would “degenerate” into an algorithm: you could produce a complete formal theory of it and its inputs/outputs.
      There are physical laws and there is a level of neurology that is also law-like in the case of the brain. However, you cannot derive what the brain is doing from neurology. Your thoughts are produced by the neurons, but having a detailed knowledge of neurology would not enable us to predict any of the content that people are thinking, because most of the information in that content does not come from the underlying neurology, but enters through the senses. Some of this information is stored in memory. If stored information influences how further information is processed, then it becomes part of the “laws” of the system. The system is being programmed that way. You cannot make a distinction between laws and content once and for all in a programmable system. For example, everything happening inside your computer is describable in terms of physical laws (electronics). However, you can change the computer’s behavior by entering some new software into it, i.e. you can program it. The laws of electronics will not tell you much about what the computer is doing because what it is doing depends mainly on the software that was entered into it. You can still produce a complete formal theory of it in the sense that it has a finite memory, so it only has a finite number of states and is thus a finite automaton. However, this is impractical. And if it is possible to extend the memory (e.g. by connecting the machine to the internet and storing some information there), you can no longer do even that. Every formal theory of such a system you can produce is incomplete.
      You can think of a new skill that you learn as a modification of the system, e.g. if you learn to ride a bike or drive a car. In the same way, you may learn a new thinking skill, e.g. how to add numbers. You can think of this as a process of programming, i.e. information that enters from the outside is integrated into how the system works. New ways of thinking are developed, not necessarily by genetic evolution but by culture, by integrating information coming through the senses into how the system works. The system is then programmable. In such a system, it is not possible to distinguish content and laws once and for all. Content can become law.
      Think of the knowledge somebody has, plus the neurological machinery underlying it, as a formal theory. Let us call that “A”. You can derive statements or predictions about the world from this theory. The predictions will “cover” the world only partially; the world has more properties than we can predict from our knowledge, so our knowledge is always incomplete. Suppose now we encounter something surprising, i.e. some event or thing not predicted by our current knowledge. We store information about this new event in memory. The previous knowledge A plus that new bit of information together give a new formal theory B. We might be able to derive more facts about the world in B since it contains more information than A (the new fact was not derivable in A; it was a surprise for A). If we apply the new information in some process of perception or thought, it can be viewed as part of the software, so you can view B as a modified “cognitive software”. The system has been reprogrammed and modified. The formal theory describing it before was incomplete, and now you have an extension (a small toy sketch at the end of this reply illustrates this step).
      Every process, inside A and in the transition from A to B, happens according to the laws of physics or according to the laws of neurology or electronics, but as long as you have memory capacity to store new information, you can always add new information not derivable from what you had before, so every formal description of the system is incomplete and the system can be changed completely. If it can store information, it can be reprogrammed. If it can be reprogrammed, it does not have fixed laws.
      This applies to any kind of system able to process and store information, so I don’t think it makes a difference whether we are using a traditional programming language or a neural network.
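      To make the step from A to B a bit more concrete, here is a toy sketch (in Python; my own illustration, where a “theory” is just a set of facts closed under a couple of fixed, made-up rules): theory A cannot derive a certain fact, a surprising piece of information from outside is stored in memory, and the extended theory B then derives something A could not:

          # Toy sketch of the step from theory A to theory B: storing one new
          # fact yields a strictly stronger theory. Facts and rules are made up.

          RULES = [
              (("it_rains",), "street_is_wet"),
              (("street_is_wet", "temperature_below_zero"), "street_is_icy"),
          ]

          def closure(facts):
              """All facts derivable from the given ones using the fixed rules."""
              derived = set(facts)
              changed = True
              while changed:
                  changed = False
                  for premises, conclusion in RULES:
                      if all(p in derived for p in premises) and conclusion not in derived:
                          derived.add(conclusion)
                          changed = True
              return derived

          theory_A = closure({"it_rains"})
          print("street_is_icy" in theory_A)    # False: not derivable in A

          # A surprising observation from the environment is stored in memory ...
          theory_B = closure(theory_A | {"temperature_below_zero"})
          print("street_is_icy" in theory_B)    # True: B derives more than A

      In this sketch the rules themselves stay fixed; in the brain (or in any programmable system) the stored information can in addition change the rules by which further information is processed, which is the point about content becoming law.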

      1. Your language here indicates that a brain is like a computer in many respects – in being effectively programmable, or able to change its memory or software. Do you think that a computer is *in principle* capable of behaving like a brain, or do you see a reason why this is impossible?

        1. I think this is indeed possible. Generally, I do not think that being made up of neurons is a precondition of intelligence. More specifically, I can imagine a simulation (or rather: emulation) of the structures of a human brain in a computer system. One difference between neurons and electronic systems as used in computers is that neurons are not digital and, as a result, exhibit a certain degree of randomness, but I think in such an emulation, that could be emulated well enough not to make a difference.

          However, I think this would pose severe ethical problems, see http://asifoscope.org/2012/11/20/an-open-letter-to-the-human-brain-project/

          1. I agree on both points.
