What is general intelligence, whether artificial or natural? I would suggest the following definition:
General Intelligence is the ability of a system to generate arbitrary new knowledge. Such a system generates new knowledge by interacting with an environment: in this interaction, information from the environment is integrated into the system.
This ability to create new knowledge can also be viewed as general creativity.
If a system is able to produce new knowledge, then its informational description (as a piece of software, so to speak) is changing, since it is being extended (this may be thought of as (re-)programming). The extensions can be viewed as new programs added to the system or as modifications of existing programs (algorithms). Any aspect of such a system can be modified, so it does not have any unchangeable fixed structure (although it has a definite structure at any specific time and starts from some “developmental core”). It can develop new knowledge about how to process and how to represent information.
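As a toy sketch of this idea (my own illustration, with hypothetical names, not a formalism from the text), one can picture the system's knowledge as a growing set of small programs that is extended through interaction with an environment:

```python
# Toy illustration: a system whose "knowledge" is a growing set of small
# programs (rules), extended by integrating information from an environment.
# In principle any rule, including the rules that create rules, could itself
# be replaced, so no part of the system is fixed once and for all.

class System:
    def __init__(self):
        # the "developmental core": one initial piece of knowledge
        self.rules = {"double": lambda x: 2 * x}

    def interact(self, name, observation):
        """Integrate information from the environment as a new program."""
        # a new rule is built from an observed input/output pair; the
        # system's informational description has thereby been extended
        inp, out = observation
        self.rules[name] = lambda x: out if x == inp else None

    def apply(self, name, x):
        return self.rules[name](x)

s = System()
s.interact("answer", (6, 42))   # the environment supplies new information
print(s.apply("double", 21))    # knowledge already in the core
print(s.apply("answer", 6))     # newly generated knowledge
```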
So such a system would have a number of strange properties (strange from the point of view of science):
- It could not be described or understood completely, since for any finite description of it in terms of algorithms or formal theories, new structures could arise in the system that are not covered by that previous description.
- It would, therefore, not be describable as a system developing according to fixed laws. The system could develop new ways of processing information (new “cognitive skills” or “ways of thinking”). There could be no “science” of such a system in the sense of a complete description in terms of a fixed set of laws.
- A discipline dealing with such systems would turn out to be more akin to the humanities, where scholars are always dealing with incomplete descriptions (and the reason why the humanities are not reducible to science would turn out to be the changeability of the human mind).
- Since the system develops by interaction with some “outside” world, it would need an interface to (some part of) reality. In the case of humans, this is the human body, including the sense organs. The kind of intelligence that develops in an artificial system would partially depend on what kind of interface to an outside world it has and what this world consists of.
- In the case of artificial generally intelligent systems, it would probably not be possible to control their development. This opens up ethical and political questions that will have to be dealt with in separate articles. For example, if such a system developed autonomy as a subject, controlling it might be unethical (the question here is: would any such system develop into something that has to be given a human-like status, so that “using” or “controlling” it would be slavery?). However, such questions are beyond the scope of this article.
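The first point in the list above has the flavor of a classical diagonalization argument. A minimal sketch (my own illustration, not an argument made in the text): for any fixed finite list of (total) programs describing the system, one can construct a behavior that differs from every program in the list, so no such description can be complete.

```python
# Diagonalization sketch: given any finite description of the system as a
# list of total functions int -> int, construct a new behavior that differs
# from the i-th function at input i. Hence no finite list is ever complete.

def diagonal(programs):
    """Return a function differing from the i-th program at input i."""
    return lambda i: programs[i](i) + 1

# a supposedly "complete" description of the system
description = [lambda n: 0, lambda n: n, lambda n: n * n]
new_behavior = diagonal(description)

for i, p in enumerate(description):
    assert new_behavior(i) != p(i)  # differs from every listed program
```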
The development of a generally intelligent system has to start with some “developmental core”. If the human mind starts its development from a pre-formed core of innate knowledge, the question arises what the minimum set of structures is that has to be contained in such a core. The hypothesis has been put forward that such a minimal core does, in a sense, not exist. I would like to formulate this hypothesis in the following form:
- The human mind (or any artificial general intelligence) starts its development from an initial core that can be viewed as an initial set of pieces of knowledge. This includes initial knowledge that can be used to extend the knowledge of the system.
- In order to extend its knowledge, the system interacts with its environment. As a result of this interaction, information from the environment is integrated into the system, i.e. new knowledge is generated.
- If the initial core contains more sophisticated structures, its information processing can be more efficient. If it contains less sophisticated structures, its information processing will be less efficient; i.e., there is a principle of economy here.
- Any structure that is contained in the initial core can be generated or found by means of a learning process by a less sophisticated core not containing that structure. So it should be possible to start with a more primitive core that would, however, be less efficient.
- With regard to the human mind or evolutionary precursors of it: any structure in the core that could be discovered by genetic (i.e. biological) evolution could also be discovered by a learning process starting with a more primitive core. However, below a certain level of preformed structures, the efficiency of the core would be so low that the available time would be too short and the animal would effectively be mentally handicapped. So there should be a minimum core we can expect to exist that is genetically determined.
- In an artificial system, any structure that has to be programmed into the core in advance could also be discovered in a learning process starting with a more primitive core. However, below some limit, the efficiency of such a core would be so low that the resources available (like time) would no longer be sufficient.
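The trade-off described in the last two points can be made concrete in a toy simulation (my own illustration with arbitrary numbers, not a result from the text): the “structure” to be acquired is a fixed bit pattern, the initial core pre-forms the first k bits, and the rest must be found by interaction with an environment that gives feedback. A more primitive core (smaller k) can still find the full structure, but needs more interactions; below some level, the required time becomes impractical.

```python
# Toy simulation of the economy principle: a richer initial core (more
# preformed bits) reaches the target structure in fewer interactions,
# but even an empty core gets there eventually.

import random

TARGET = (1, 0, 1, 1, 0, 1, 0, 0)  # the structure to be acquired

def learn(preformed_bits, rng):
    """Count the interactions needed to discover the non-preformed bits."""
    known = list(TARGET[:preformed_bits])
    steps = 0
    while tuple(known) != TARGET:
        steps += 1
        guess = known + [rng.randint(0, 1)
                         for _ in range(len(TARGET) - len(known))]
        # environmental feedback: the longest correct prefix is retained
        prefix = 0
        while prefix < len(TARGET) and guess[prefix] == TARGET[prefix]:
            prefix += 1
        known = guess[:max(prefix, len(known))]
    return steps

rng = random.Random(0)
for k in (7, 4, 0):  # richer core ... empty core
    avg = sum(learn(k, rng) for _ in range(200)) / 200
    print(f"preformed bits: {k}, average interactions: {avg:.1f}")
```

The averages grow steeply as the preformed part shrinks, which is the point of the economy principle: nothing in the core is strictly necessary, but the less is preformed, the more interaction (time, resources) learning costs.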
These hypotheses lead to the following ideas about the evolution of general intelligence: The brains of animals have some fixed, innate structures and some learnt parts that are due to plasticity. In some simple animals, everything is fixed and there is no learning.
In some animals, the learned part of the structure may be viewed as forming a thin layer around the genetically determined part. It provides for fine-tuning, making the ecological niche a bit wider and the adaptation to it more robust, since some variation of the environment can then be tolerated. However, there is a possibility for evolution to “thicken” this learned layer of the structure. The result is an animal that is less specialized and can adapt to a wider range of conditions without changing its genes. This is likely to happen in animals that have highly developed sense organs and some kind of manipulator organs that allow for a broad range of interaction with the environment (think of the hands of primates).
In the further course of evolution, the genetically determined behavioral core might even shrink, while the learned parts of behavior become dominant. This seems to be the path evolution has taken in higher primates and eventually humans.
So I think that this development towards humans is characterized not by the development of a highly sophisticated cognitive architecture, a complex system of different modules for different types of thinking, but by a reduction of fixed structures. Instead of the development of a highly complex architecture of cognition, I think what happened is that the unspecialized parts of the brain expanded, increasing processing power and increasing the number and connectivity of neurons that could be combined into all kinds of different structures.
If the principle holds that everything that can be found by evolution can in principle also be found by learning, then it is indeed implausible that the genetically determined core contains sophisticated structures. It will only contain structures that are too costly or time-consuming to learn and that are necessary or highly advantageous for survival, without restricting general intelligence. Specialized sophisticated structures would even be reduced.
So once evolution goes this way, the organism would stop adapting to challenges by means of genetic evolution. If the plasticity, i.e. the capacity of such brain areas for learning, is high enough to develop new cognitive structures by learning, there would be no evolutionary pressure to develop such structures by means of genetic evolution, given that not having such structures does not endanger the animal. On the other hand, the development of specialized structures could reduce the plasticity, i.e. the potential for fast adaptation. “Hard-wired”, genetically pre-determined structures would, however, develop (or be retained from earlier ancestral species) in cases where these structures are very useful and where developing them through a process of learning would take too long (see the example of the processing of primary visual information given in the footnote).
In other instances, we clearly have learnt structures. It is possible that the “motoric program” for walking is pre-formed and that the child does not have to learn to walk; instead, the structure implementing this program may mature during a certain part of development. However, it is also possible that the child actually has to learn most of this process. Consider the example of bike riding: bicycles did not exist before the 19th century, so bike riding is clearly a skill that we learn; there are no genes for it in our genome. Likewise, we have to learn other motoric skills, and we learn a lot of our perceptual skills (i.e. recognizing specific kinds of objects). We can recognize a house or a car, and that perceptual skill is obviously learnt, not genetically preformed.
In a similar way, we might learn new mental skills. This means that the different kinds of thought processes we employ in thinking could, to a very large extent, be learnt and acquired rather than genetically inherited. This would mean that, just like the neural networks that implement a skill like bike riding or recognizing a car, those we use to process language or to make certain types of logical inferences, for example, are learnt. If this is so, a lot of the blueprints of the architecture of our minds are not in our genes but in our cultures, and the part that is genetically determined is, to a large extent, just a developmental core, a take-off point or scaffold for cognitive development.
(The picture is from https://commons.wikimedia.org/wiki/File:Kind_mit_Baukl%C3%B6tzen_1853.jpg.)
Ammon, Kurt: “The Automatic Development of Concepts and Methods”, Doctoral Dissertation, University of Hamburg, 1987. Here, Ammon puts forward the hypothesis that the core of creative systems is empty.
 See Ammon, 1987.
An example of a structure that could in principle be found by a learning process but is probably pre-formed in the genetically determined initial structure of the brain is our capacity for two- and three-dimensional vision. There might be a need to fine-tune it in learning processes, but a lot of the low-level visual processing is already in place from the beginning (or as soon as the corresponding brain structures have matured). These structures could be discovered in a learning process by interacting with an environment that is regular enough, but this would take a long time and would be dangerous. From a theoretical point of view, neither the processing of sense data into two-dimensional images nor that into three-dimensional representations has to be there initially, and – against Kant’s conviction that a “form of intuition” (“Anschauungsform”) of space is a necessary prerequisite of learning – it is not necessary that a notion of space be there already; but building up such elementary structures from scratch would take too much time.