Formalism vs. Creativism

[Image: John McCarthy at Stanford]

A generally intelligent system should be universal in the sense that it can produce arbitrary knowledge. The proof essentially shows that an algorithm cannot be generally intelligent in this sense. The reason why, after roughly 60 years of research, general intelligence has not emerged among the systems built by the artificial intelligence community is that their basic hypothesis – that intelligence can be achieved by an algorithm – is wrong. The idea that intelligence can be captured in terms of a formal theory is wrong.
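For readers who want a concrete picture of the kind of limitative result alluded to here, the following is a minimal diagonalization sketch in the spirit of Gödel and Post. It is only an illustration of why no single algorithm can produce everything of its own kind, not a reproduction of the proof referred to above.

```latex
% A minimal sketch (illustration only): the classical diagonal argument
% showing that no single algorithm can enumerate all total computable
% functions f : N -> N.
%
% Assume, for contradiction, that one algorithm computes an enumeration
% f_1, f_2, f_3, ... of all total computable functions.
\begin{align*}
  &\text{Define } g(n) = f_n(n) + 1.\\
  &\text{Then } g \text{ is computable: run the enumeration up to } f_n,
    \text{ evaluate it at } n, \text{ and add } 1.\\
  &\text{But } g(n) \neq f_n(n) \text{ for every } n, \text{ so } g
    \text{ appears nowhere in the list.}
\end{align*}
% Contradiction: the supposedly universal enumerating algorithm misses
% something of exactly the kind it was meant to cover.
```

Gödel and Post turned this pattern into precise limitative theorems about formal systems; the claim above is that an intelligence fixed once and for all by such a system would inherit the same kind of blind spot.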

This can mean one of the following:

  a) “Formalism”: Human beings are completely describable in terms of algorithms or formal theories. AI is on the right track, and eventually the intelligent algorithm will be found. However, this would mean (and perhaps some AI researchers are not aware of this consequence) that human intelligence is not universal. There are possible structures or patterns that human beings cannot discover and understand in principle, due to the restrictions of their innate algorithm.
  b) “Creativism”: Human beings are not describable in terms of algorithms or formal theories. Parts of human cognitive processes might be describable in terms of algorithms or formal theories, but the totality of the human mind cannot. This means that there are no fixed laws of thinking. The laws of thinking develop historically (or – during the life of a single human being – biographically). They are subject to historical change and may differ from time to time, from culture to culture and from one individual to another. If this is so, cognitive psychology turns out to be a historical discipline, not a science in the narrower sense of the word “science”. The epistemology of the sciences, based on establishing general laws and on reproducibility and falsifiability, does not work here. The human ability to understand the world is not limited by the innate (or a priori) structure of thought, because this structure can change. Human beings do not have a fixed nature, because they can develop away from their natural core, which only forms a point of departure of cognition, not a fixed structure underlying it. All descriptions of humans, human societies, and human cultures are partial (incomplete) or partially vague. The academic disciplines concerned with humans, their societies and cultures (the humanities, the “Geisteswissenschaften”, the “Kulturwissenschaften” etc., including psychology and the “social sciences”) are not part of science, are not reducible to science and never will be.

Possibility b) has variants. There is, b-1), an interpretation of the mind as something non-physical (spirit, “Geist”, res cogitans, etc.); dualist theories of this kind have dominated most of traditional philosophy and also underlie most religions. On the other hand, there is, b-2), the possibility that the mind is completely physical. If this is so and we hold b) to be correct, we have to explain how there can be a physical process that cannot be completely formalized. Would the application of the laws of physics to the physical components of the brain not lead to a formal theory of the brain, and would this not necessarily lead to a)? I have begun addressing this question in Laws and Computability. Indeed, I think that the formalizable is a proper subset of the physical, i.e. there are physical processes that do not have a complete formal description; the cardinality sketch below shows one way such a claim can at least be made plausible.
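As a very rough plausibility check (an illustration only, not the argument made in Laws and Computability), one can compare cardinalities: formal descriptions are finite strings over some finite alphabet, so there are only countably many of them, while a physics that allows states to vary over a continuum provides uncountably many distinct processes.

```latex
% Toy cardinality comparison (illustration only; the continuum assumption
% about physical states is an assumption, not a result).
\begin{align*}
  |\{\text{formal descriptions}\}|
    &\le |\Sigma^{*}| = \aleph_0
    && \text{(finite strings over a finite alphabet } \Sigma\text{)}\\
  |\{\text{processes indexed by a real parameter}\}|
    &= 2^{\aleph_0} > \aleph_0
    && \text{(a continuum of cases)}
\end{align*}
% Under these assumptions, most such processes cannot each possess their own
% complete finite formal description, which is one way the formalizable could
% be a proper subset of the physical.
```

Whether this counting picture matches actual physics is exactly the question the post gestures at; it only shows that taking the formalizable to be a proper subset of the physical is not an incoherent position.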

This blog, in short, is based on possibility b-2). A large part of analytic philosophy, AI and cognitive science (and related areas of psychology and linguistics) seems to be based on a). The consequence of a) – that cognition and the ability to acquire knowledge of the world would be limited – is not seen, or is simply ignored, by most people following this “formalistic” paradigm. This is astonishing, since the basic mathematical insights leading to b) were developed (by people like Gödel and Post) in the 1930s and 1940s, before AI emerged as a field around the mid-1950s.

Note that if b-2) is correct, artificial intelligence might still be possible, but not in the way it has been attempted in the past. Such an artificially intelligent system would not be limited to algorithms. The way it works could change (whereas an algorithm cannot evolve by itself), leading to a “fluid” structure of thinking that cannot be described completely in terms of any single formal theory. It would be a system that develops and that could not be understood completely beforehand; its study would belong to the humanities. It is a different question, however, whether it would make any sense to build such a system and whether it would be ethical to do so.

(The picture is from https://commons.wikimedia.org/wiki/File:John_McCarthy_Stanford.jpg. It shows John McCarthy, one of the founders of the classical AI research paradigm; see https://en.wikipedia.org/wiki/John_McCarthy_(computer_scientist).)


