Two Conditions of Extensibility

In my article Changing the Mind, I referred to a certain type of formal theory used within the research paradigm of cognitive science. The argument given there was that theories of cognition formulated in such terms must be incomplete because they can be extended by new information that cannot be derived within the given theory.

However, I used this type of theory only as a peg for the argument. The argument from extensibility (or programmability) does not depend on the specific kind of formal theory, such as the logic-based or rule-based systems used in some approaches to cognitive science and artificial intelligence that were in fashion in the 1960s, 1970s and 1980s. The same kind of argument works just as well if you use a connectionist approach in which cognition is described in terms of neural networks. You could build a simulation model based on simulated neurons (see An Open Letter to the Human Brain Project). You could simulate neurons by simple rules, in the way they are modeled in the theory of “abstract neurons”, or by a set of differential equations. It does not matter. What you get is a formal theory of the cognitive system that contains only a finite amount of information and thus can cover only a limited subset of the world. And this theory would be extensible by new information, so it would be incomplete both as a theory of the world and as a theory of cognition. Note that this new information might enter the system through its senses, but it may just as well be the result of a random process inside the system itself.
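To make the “simple rules” reading of a simulated neuron concrete, here is a minimal sketch of my own (the weights, threshold and function names are illustrative assumptions, not taken from any particular model): an abstract neuron reduced to a threshold unit. A single unit wired as NAND already hints at the expressive power involved, since NAND is functionally complete for Boolean logic.

```python
# A minimal threshold-unit sketch of an "abstract neuron"
# (illustrative assumption: weighted sum compared against a threshold).
def neuron(weights, threshold):
    def fire(inputs):
        # The unit fires (1) iff the weighted input sum reaches the threshold.
        total = sum(w * x for w, x in zip(weights, inputs))
        return 1 if total >= threshold else 0
    return fire

# A NAND unit: fires unless both inputs are 1. Since NAND is
# functionally complete, networks of such units can in principle
# compute any Boolean function.
nand = neuron([-1, -1], -1.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", nand((a, b)))
```

Whether one simulates neurons this way or with differential equations, the point of the article stands: the resulting description is a formal theory containing a finite amount of information.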

All that is necessary to extend (or reprogram) the cognitive system is:

a) a memory capable of storing new information and

b) a mechanism by which this new information can influence the processing of further information.
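The two conditions above can be sketched in a few lines of code (my own toy illustration; the class and the stimulus/response vocabulary are hypothetical, chosen only to make the two conditions visible):

```python
# Toy sketch of a system satisfying both conditions:
# (a) a memory capable of storing new information, and
# (b) a mechanism by which stored information influences
#     the processing of further information.
class ExtensibleSystem:
    def __init__(self):
        self.memory = {}  # condition (a): storage for new information

    def learn(self, stimulus, response):
        # New information enters the system (through its "senses"
        # or from some internal process).
        self.memory[stimulus] = response

    def process(self, stimulus):
        # Condition (b): what was stored earlier shapes how the
        # next input is processed.
        return self.memory.get(stimulus, "unknown")

s = ExtensibleSystem()
print(s.process("fire"))  # not derivable inside the system yet
s.learn("fire", "flee")   # extension by new information
print(s.process("fire"))  # the system's behavior has changed
```

No complete theory written down before the call to `learn` could have predicted the system's later behavior, which is the incompleteness the article argues for.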

Note that condition b) can, in particular, be satisfied by using a representational language that has the expressive power of a programming language. Note also that neurons have the capability both of storing information and of making the processing of information dependent on other information, so a cognitive system implemented in terms of neurons can have this property of reprogrammability. There is a model of neurons, the “abstract neurons” mentioned above, that can be viewed as a model of neural networks and that has been shown to have the expressive power of a programming language.

Artificial electronic systems for information processing also contain components to store information. Their basic processing elements, transistors, likewise have the capability of influencing one stream of information by another. This capability is reflected in programming languages, in which such dependent processing can be formulated. As long as we can extend the storage, a complete theory of such systems is not possible, because an extension of the system is always possible (and I think this is the case at least for the internet).
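In programming-language terms, the transistor's switching behavior — one signal gating another — shows up as data-dependent control flow. A one-function sketch (my own illustration, with hypothetical names):

```python
# One stream of information (the control signal) determines whether
# another (the data signal) is passed on -- the programming-language
# reflection of a transistor acting as a switch.
def gate(control, data):
    # If the control signal is high, the data passes; otherwise it is
    # blocked (represented here as 0).
    return data if control else 0

print(gate(1, 42))  # control high: data passes through
print(gate(0, 42))  # control low: data is blocked
```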

If systems like the internet or human culture in general (of which the internet is a part) cannot be described completely in terms of any formal theory, then the approach used in the physical sciences cannot be applied to the realm of human culture (or can be applied only to restricted subsets of it), since that approach relies on descriptions in terms of formal theories. To describe human cognition and human culture in general, you have to expect a multitude of different methods and different partial descriptions, but never a single description covering everything. Moreover, the descriptions you come up with describe historical phenomena and are themselves subject to historical change.

And this is just the situation you’ll find in the humanities.

(The picture is from http://commons.wikimedia.org/wiki/File:Neuronehisto.jpg.)


5 comments

  1. toad (chris jensen):

    Your knowledge of this topic I find outstanding. “AI”, I must comment, has been learning through human experiences described by those throughout the net, which I find useless; “AI” would be better off learning not only through human experiences, but through those of all sentient life forms…

    1. Hi Chris. You have a point here. Natural evolution developed intelligence in a long process that involved other sentient, but less intelligent, life forms. It did not start with a very complex thing like the human being. We came much later in the story.

      In fact, some researchers, e.g. in robotics, are now trying to model simpler organisms, like insects. The idea is a bottom-up approach: start with something simple and then make it more complex bit by bit. We will see what comes out of this line of research.

  2. […] like the environment, and then use this information in the processing of other information (see Two Conditions of Extensibility). The new information becomes part of the system’s active information. In this way, the system can […]

