Knowing How and Knowing That

[Image: a historical map from an atlas by Ortelius (File:Atlas Ortelius KB PPN369376781-001av-001br.jpg); see the note at the end of the post.]

In a comment on my previous post, the question of “episteme (knowing that) versus techne (knowing how)” was brought up. Traditional epistemology focuses very much on propositional knowledge and its truth conditions, i.e. knowledge in the form of statements. The reason for this focus might go back to antiquity, when a clear hierarchy of the different forms of knowledge was assumed and “techne” (τέχνη) was assigned an inferior position, whereas “episteme” (ἐπιστήμη) – a term that can be translated as “science” in some contexts but might, as that commentator indicated, simply refer to knowledge in the sense of “knowing that” – was seen as the superior and more interesting form of knowledge.

Most philosophers of those times were aristocrats who left physical labor to slaves and women. The topics viewed as worthy of their attention and time were theoretical considerations and political discussions, not practical work and practical activity (perhaps with the exception of warfare and sports). Consequently, a philosophy of technology first appeared much later, in the 19th century. And although social structures have changed considerably, epistemology (as the name of the discipline implies) has remained largely concerned with propositional knowledge and its truth conditions. The other discipline relevant here, logic, comes out of the same tradition, as indicated by its name, which is derived from the word “logos” (λόγος), a term with many meanings that originally referred to language.

In my previous article, however, I hinted at a procedural concept of knowledge. In this view, pieces of knowledge may be viewed as little programs. If you come from the classic Western philosophical tradition, such a view might indeed look surprising, so let me try to explain it in more detail.

For a bit of knowledge in the sense of “techne” (knowing how), it appears quite straightforward to view it as a program. For example, if you drive a car, there seems to be something like an automatic program coordinating your movements and processing sensory, especially visual, input, so that once you have mastered the skill, you can drive without much conscious thought (see also “On the Philosophy of Taxi Driving”).

But what about knowledge about facts? Let’s stay with that example of car driving. Suppose you want to visit a friend and you know that he lives in a particular street. You know that this street is connected to another street, and that this second street is connected to the street where you are. You can now use that knowledge (of the “knowing that” variety) to construct a plan of how to drive to your friend’s place. Clearly, that plan can, again, be seen as something procedural, an instance of “knowing how”. So know-how can be derived from know-that. Moreover, as I said, you are “using” that knowledge, so there is some kind of process going on. One could think of this process (of planning the way) as the execution of a program.

When you execute your plan and drive to your friend’s place, there seems to be another program that executes the plan (and “knows” how to translate it into single movements). In computer science, a program that executes another program is called an “interpreter”. You can think of an interpreter as a program with two entries (or “parameters”, to use a more technical term). You put the “program” into one parameter and the “data” the program is supposed to work on into the other. The interpreter will then apply the program to the data and produce a result. (Note that the concept of an interpreter as it is used here is connected to the concept of an “(object) projector” I introduced in Generating Objects – Towards a Procedural Ontology.)
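
To make this concrete, here is a minimal sketch in Python (my choice of language; all the names are invented for illustration and do not come from any real system). The interpreter receives the “program” through one parameter and the “data” through the other:

```python
def interpret(program, data):
    """A toy interpreter: 'program' is a list of named steps,
    'data' is the value those steps are applied to."""
    steps = {
        "double": lambda x: 2 * x,
        "increment": lambda x: x + 1,
    }
    for step in program:
        data = steps[step](data)
    return data

# The program itself is just a piece of data (a list of strings)
# until the interpreter applies it:
print(interpret(["double", "increment"], 20))  # (2 * 20) + 1 -> 41
```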

The planning process can also be thought of as the execution of a program. Its input is the knowledge about the network of streets, about where you currently are, and about where you want to go. Its output is the plan.

You may now view this planning process as an interpreter as well. The knowledge about the street network then appears as a program. The information about where you are and where you want to go is the data. The output is the plan (or, if you treat the interpreter for the plan as part of a larger interpreter comprising both, a sequence of movements that finally brings you to your destination). In this view, the declarative knowledge about the street network (“knowing that”) is treated as a high-level program.
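
As a hedged illustration of this view (the street names are invented, and breadth-first search is just one possible planning procedure, not a claim about how human planning actually works), the declarative street knowledge can be handed to a small planning interpreter as its “high-level program”:

```python
from collections import deque

# Declarative knowledge ("knowing that"): which street connects to which.
streets = {
    "Main St": ["Oak Ave"],
    "Oak Ave": ["Main St", "Elm St"],
    "Elm St": ["Oak Ave"],
}

def plan_route(network, start, goal):
    """A planning interpreter: it treats the street network as its
    'program', the (start, goal) pair as its 'data', and outputs a
    plan, i.e. a sequence of streets to drive."""
    queue, visited = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in network.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None

print(plan_route(streets, "Main St", "Elm St"))
# -> ['Main St', 'Oak Ave', 'Elm St'], a plan that a lower-level
#    "driving interpreter" could then execute.
```

Note that the `streets` dictionary is pure “knowing that”; only in the context of `plan_route`, the system interpreting it, does it behave like a program.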

As this example demonstrates, declarative or propositional knowledge can be treated as a program as soon as it is used for something. More generally, any kind of data that influences the processing of some other data (e.g. a graphical representation of a map, or a representation of the map in terms of a set of database tables) can be viewed as a program if you look at it in the context of some system interpreting it. The meaning of the high-level program depends on that interpreting system, so taken alone, its meaning might be vague; but in the context of a specific system processing it, it can be viewed as a program.

The distinction between “knowing how” and “knowing that”, therefore, is not as fundamental as it appears in traditional philosophy. It is a matter of the level of description you apply. I think that traditional epistemology has limited itself unnecessarily by this concentration on propositional knowledge, just as traditional logic has limited itself by excluding the procedural character of inferences from consideration. In the evolution of our species, procedural knowledge probably preceded propositional knowledge. I suppose that procedural knowledge is the primary form and that other forms of knowledge, especially propositional knowledge, are implemented on top of this procedural substrate. This “propositional bubble” embedded in the procedural environment is always incomplete and partially vague, since the processes beneath it, the processes that interpret, use and change it, may themselves change. If we limit epistemology to what is inside this “bubble”, we are therefore always limited, and perhaps here lies one of the sources of the problems of traditional approaches to philosophy and logic. We do not get the complete picture if we only look at propositional statements.

So I suggest that we investigate the possibilities of developing “epistemology” into a discipline dealing with “techne” as well (unfortunately, the term “technology” that suggests itself here is already taken). Approaches in this direction have traditionally been dismissed as “psychologistic”, but in a time of artificial information-processing systems, we are no longer tied exclusively to psychology here and can investigate such matters in a more general framework (e.g., the car-driving example can be investigated in the context of electronic navigation systems and self-driving cars as well). However, the separation of epistemology from psychology is in itself a symptom of that anti-practical bias.

Perhaps philosophy is, in parts, still suffering here from the anti-practical biases of the ancient philosophers. It is time to finally change that, to go beyond the agora and enter the workshops.

(The picture, showing a historical map, is from http://commons.wikimedia.org/wiki/File:Atlas_Ortelius_KB_PPN369376781-001av-001br.jpg. It shows an example of how knowledge can be represented in a non-procedural way, in a mixture of graphics and language. This knowledge is incomplete and partially faulty, but it could be turned into the procedural actions of a ship’s captain and crew, by applying the sailors’ knowledge to it as the “interpreter”. This would lead to situations that would then give rise to its incremental correction.)


24 comments

  1. What I can say (without getting into computer programming), while trying to get hold of this ontological process, is the following. Taking ontology as the philosophical study of the nature of being, becoming, existence, or reality, it seems to deal with questions concerning what entities exist or can be said to exist, and how such entities can be grouped, related within a hierarchy, and subdivided according to similarities and differences. Against that background, I can say that you can “know” something, memorize it, and recall the data in the future for analysis; yet considering how the human mind works, this “knowing” is just not enough. The psychomotor response after a sensory experience is that of ‘knowing how’, and perception, not knowledge, is what seems to trigger the motor system to respond.

    1. One could use the term “ontology” only for the question of what exists at the most fundamental level, but you can also use it with regard to everyday objects. As philosophers are the people who are always writing about tables ;-), let me take that standard example as well: in which sense does a table exist? You may view it as a chunk of elementary particles held together by electromagnetic and other forces, but then you miss its “tableness”. I think that as a table, it exists by means of a process of perceiving and using it as a table, and these are human activities. These activities are processes in time, and I think they can be thought of as something akin to running programs. The everyday world with its functional interrelations is the product of processes. To analyze this world as we experience it, we may study our processes of perceiving and of using these things. If we all disappeared, these things would only be clouds of particles again.

      1. I suppose it can also be thought of as relating to ‘artificial intelligence’, the intelligence exhibited by machines or software. Quote:
        “Knowledge representation and knowledge engineering are central to AI research. Many of the problems machines are expected to solve will require extensive knowledge about the world. Among the things that AI needs to represent are: objects, properties, categories and relations between objects; situations, events, states and time; causes and effects; knowledge about knowledge (what we know about what other people know); and many other, less well researched domains. A representation of “what exists” is an ontology: the set of objects, relations, concepts and so on that the machine knows about. The most general are called upper ontologies, which attempt to provide a foundation for all other knowledge”-Wiki
        Also:

        “Evolutionary computation uses a form of optimization search. For example, they may begin with a population of organisms (the guesses) and then allow them to mutate and recombine, selecting only the fittest to survive each generation (refining the guesses). Forms of evolutionary computation include swarm intelligence algorithms (such as ant colony or particle swarm optimization) and evolutionary algorithms (such as genetic algorithms, gene expression programming, and genetic programming).”

        I understand these things only at a very basic level, but I find it so amazing that they still seem to copy from Nature, even in these computerized processes.

        1. Yes, what I am writing here is obviously connected to AI. However, conventional AI has a problem that might be rooted in traditional rationalism: there is an assumption that there are fixed laws of thought, something like a fixed algorithm. The task is then understood as finding out how that algorithm works.
          The central idea in the approach taken by my friend Kurt Ammon and myself is, however, that there are no fixed laws of thought or cognition. The mind changes, so that any fixed description of it is incomplete. That is what this post (together with some others) is finally aiming at: there is no algorithm of thought. What is there at any given time might be described in terms of algorithms, but these algorithms change. In this sense, the core of intelligence is this reprogrammability, or creativity.
          As a result, cognition lies outside the scope of normal science, if science is understood as finding the (fixed) laws by which something works. Cognition (and its product: culture) always has a historical component. The results that have come out of AI are relatively modest compared to what was expected at its beginning in the 1950s and 1960s. The reason, I think, is exactly this: the AI researchers failed to understand that the problem was to find a system that could change completely, not to find the laws and rules according to which thinking works. An intelligent system is a system that cannot be described completely, i.e. it cannot be understood completely. In hindsight, you can always understand what has been going on, but there is no complete theory comprising all thought processes, because the system can always change.

          1. Okay, thanks for explaining that, your writing is very interesting too.

        2. They tend to follow a pattern, most likely from Nature (swarm intelligence), which they have assimilated to create the AI. This still leaves me wondering whether it is self-sustainable, or whether it attempts to accomplish that via the AI; not that it is negative if it does, or doesn’t.

  2. There can be ‘units’ of knowledge, but that doesn’t mean that, just because they are there, they are executable. They may be programmable, but only executable up to a certain point. Thinking of this coincides with my concept of ‘creating what has already existed’; hence my love for epistemology (the study or a theory of the nature and grounds of knowledge, especially with reference to its limits and validity). According to my concept, epistemology is a never-ending process, because it is continually fed with input. The output is the concern, because it is usually looking for a common language.

    1. What is executable, I think, is a matter of how you look at it. Let’s take a simple example, Aristotelian syllogisms. One can treat a rule of inference, e.g. the “modus ponens” rule, as a small program. This program takes two statements A and B as its input and produces a statement C as output. To take a classical example: if you take the statements “All human beings are mortal” and “Socrates is a human being” as inputs, the rule produces “Socrates is mortal” as its output.

      Now, you can also treat the modus ponens rule as an interpreter for a simple “programming language” and treat the statement A as a program. In this view, “All human beings are mortal” can be viewed as a program that takes “Socrates is a human being” as its input and produces “Socrates is mortal” as its output.

      In the first view, A is just a piece of data. In the second view, A is a small program.
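
      If a small Python sketch helps to make this concrete (representing statements as simple pairs is my own simplification, not anything standard), the two views look like this:

      ```python
      # View 1: the rule is the program; the statements A and B are its data.
      # ("P", "Q") stands for "All P are Q"; (s, "P") stands for "s is a P".
      def modus_ponens(universal, particular):
          p, q = universal
          subject, kind = particular
          if kind == p:
              return (subject, q)  # "s is a Q"

      print(modus_ponens(("human being", "mortal"), ("Socrates", "human being")))
      # -> ('Socrates', 'mortal')

      # View 2: the universal statement itself becomes a little program,
      # with the rule acting as its interpreter.
      def as_program(universal):
          return lambda particular: modus_ponens(universal, particular)

      all_humans_are_mortal = as_program(("human being", "mortal"))
      print(all_humans_are_mortal(("Socrates", "human being")))
      # -> ('Socrates', 'mortal')
      ```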

      If you look at an information-processing system that contains logical rules like the modus ponens rule and is able to apply them, and that also contains some facts to start from, you can view this system as a formal theory in which a certain set of facts can be derived. You can also view it as a program that derives facts. You can represent this program by a Turing machine. The point here is that if you add some fact or piece of information that cannot be derived within the theory, you get a new theory that can do more, and that can be represented by a different Turing machine. So adding information means modifying the program. And you can view these new bits of information as little programs themselves if the system can apply them to other bits of information.

      As a result, there is no single formal theory or Turing machine that describes the whole system. The system can always move out of the scope of any formal theory describing it, by incorporating new information. For this, two conditions must be met: the system must be able to store information, and it must be able to apply this new information to other information already contained in it or entering it later (and this applicability means that you can view the applied information as a program).
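
      A minimal sketch of that last point (the facts and rules are invented for illustration): adding one piece of information that could not be derived before turns the system into a different, more capable “theory”:

      ```python
      def derive(facts, rules):
          """Forward chaining: apply rules of the form
          (premise, conclusion) until no new facts appear."""
          facts = set(facts)
          changed = True
          while changed:
              changed = False
              for premise, conclusion in rules:
                  if premise in facts and conclusion not in facts:
                      facts.add(conclusion)
                      changed = True
          return facts

      rules = [("rain", "wet streets"), ("wet streets", "slow traffic")]

      print(derive({"rain"}, rules))
      # derives: rain, wet streets, slow traffic

      # Add one underivable fact (and a rule that uses it), and the
      # system becomes a different program that can derive more:
      print(derive({"rain", "roadworks"}, rules + [("roadworks", "detour")]))
      # derives: rain, roadworks, wet streets, slow traffic, detour
      ```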

      1. Fascinating, Nannus, thanks for explaining all this to me.

  3. Nannus, you so stretch my brain that my hair gets frizzy!
    Stretching is always good even when it hurts. 🙂

    1. I was hesitating to reblog this here on the Asifoscope. There are other articles (even worse than this) that I did not put here. However, in one of my first posts ever, I announced such stuff and apologized for it, see http://asifoscope.org/2012/11/30/nerdy-stuff/ 🙂

  4. I’m not sure I’m following all this, and this is probably because I know nothing about computers. I do agree that procedural knowledge precedes propositional knowledge in many cases. Maybe they are somehow simultaneous, and yet the propositional takes the spotlight.

    1. I know, this one is quite technical. I considered trying to explain things using some notation (beyond a certain point, using a notation makes things easier, but notations will drive readers away as well).
      Maybe look at the modus ponens example in my reply to one of Maria’s comments.

      1. I don’t know that it’s too technical. It’s probably just my brain fog!

        I was looking at the example and I still don’t really get the computer analogy as applied to Aristotelian logic. There are rules to logic and these dictate the form of a valid argument. You can put in whatever premises you want and you will come up with a valid argument, but it may or may not be sound. I don’t know how or if soundness can be programmable. Or if anything that I’m saying has anything to do with what you’re talking about.

        1. It is certainly not your “brain fog”. The problem is that I did not pick you up where you were standing. If you are standing in place A and I am trying to pick you up in place B, that will not work.
          To understand what I have written, you need to have certain knowledge. There are some concepts and ideas I am taking for granted that you don’t have (that is the distance between A and B, in the metaphor of picking you up I have just used). For example, there are concepts of programming languages and their interpreters that I find totally natural because I studied computer science 30 years ago, but for you these concepts are just gobbledygook. If you learned these concepts, you would then be able to understand my article (you would have walked from A to B). You would then say “Ah, OK” or “Hm, but that is nonsense for this or that reason”. Instead, when you are trying to read my article, your reaction is “I’m not sure I’m following all this”. So currently, if you apply the knowledge you have to the article, you get a foggy feeling (I know how that feels; I feel that way when I am trying to understand Husserl or Heidegger 🙂).

          Think of my article as a piece of data and your brain as an information-processing system. You could learn that extra bit of knowledge and then apply it and make sense of my article. You could then think of it like this: that extra knowledge is like a little program, something like an “app” or “plug-in” you put into your brain that enables you to process those strange articles I am writing. Information that is used to understand something, or that is applied to “digest” some other information, can be viewed as something procedural, as a program (even if you receive it in the form of declarative statements). Knowledge that can be used in a process of thinking or perception to make sense of, or otherwise process, some other information can be viewed as a program processing that information. And adding that little program to your brain changes the way your brain works (so it is a risky thing to do 🙂).

          The point of the whole discussion is that our cognition can be changed. There is no fixed way of how it is working because any knowledge we add changes the way it is working.

          I don’t know if that helped. I don’t know at the moment how to explain it better.

          1. Does this mean I could learn how to solve word problems in math? If only I could buy the app! (I’ve never been good at those. I usually don’t even know what the question is.) 🙂

            1. You will have to get the meta-app instead, the app-programmer’s toolbox and programming environment. However, it is a laborious process. And no way to guarantee that there are no bugs.
              Personally, there are limits to what I can do in math. I never really mastered calculus, and I have since forgotten everything because I never needed it. I have a good grasp of some aspects of theoretical computer science (also math, but different branches of it; I don’t know what to do with integrals and stuff like that).
              There is nothing you cannot learn but you cannot learn everything (a consequence of the fact that there is no single theory that covers all cognitive processes, so new special expert knowledge is always necessary again and again).
              To learn math, one has to do math. It is not a single method. Each calculation and each proof is different and different knowledge is required. There will never be a complete collection of formulas and methods for math. And to get into it, you have to do it. So it consists of some very general apps and a lot of very special ones, and you always need more.

                1. Oh my, I never even took calculus! The only math I took in college besides a very rudimentary course was “Applications of Modern Math”, which ended up being a series of word problems. These were supposed to be easy for people like me who exist more on the English major side of things. The enrollment counselor sent a bunch of math-illiterate people into one of the hardest math courses I could dream of. I remember taking home the homework on the first day, seeing there was only one problem and chuckling to myself, “This is gonna be easy. Math for English majors indeed.” I made the mistake of waiting until the last minute to do that problem. I never made that mistake again. That one problem took pages and pages of calculations, all of which had to be shown on paper. I should have known something was amiss when the instructor allowed us to use calculators, even the good ones. When I read the problem, I couldn’t even determine what it was asking. I had to go to a tutor to even turn the thing into math. The whole class would have failed except the instructor graded on the bell curve. I’d never seen anything like it.

                I did the work, but it never paid off. I did make a good grade by going to a tutor and studying for five hours a day, plus the curve which turned what should have been a C into an A, but I didn’t learn anything, unfortunately. AND I got a C in creative writing, believe it or not—the only C in my life—because I blew it off in favor of this math course which I took way too seriously.

                I kind of wish there were one single app I could just buy for maybe $4.99 and be done with it. I’m lazy. 🙂

                Although nowadays I don’t really need to do much math. The last time I did math was for my novel, believe it or not. I had to prove the equality of the middle segments on the divided line in the Republic. I didn’t want to do the work myself, so I had my husband take the problem to a mathematician friend. He couldn’t figure it out, so I Googled it. I came up with tons of papers saying THAT the middle lines were equal, but no readable proofs. I could see that others had figured it out, but I wanted a simpler way to explain it that would make sense to people like me who don’t do math. I spent days and days working on this thing and reading Euclid just for some little footnote that no one will read. However, I think I’ve got it. (I get a little OCD sometimes.)

                So true that you can’t learn everything. I’ll stick with Heidegger. That’s about my limit of intellectual torture. 🙂

        2. Logic as such is not important for what I was trying to explain in this article. That was just an example to demonstrate how a parameter of a program can itself be interpreted as a program. However, it was a bad example from a didactic point of view, because you are not used to looking at rules of inference as something that can be programmed, so your pre-existing understanding of logic is getting in the way.

          1. Haha…yes I find that my pre-existing knowledge does that a lot. I’m always tripping over it. 🙂

  5. It’s good to stimulate those neurons with writings such as these! So continue to do so, as my neurons will ever be so grateful! The more neurons work, the more we preserve our brain functions as we age!

    1. 🙂 I will do my very best!

  6. […] Knowledge can be viewed as consisting of programs. Declarative knowledge can be viewed as programs as well as soon as it is applied in some cognitive process. I have written about this point already before (see Knowing How and Knowing That). […]
