Laws and Computability

Is nature computable?

In the philosophy of science, one model of scientific explanation is the “DN” (“deductive-nomological”) model, introduced by the philosophers Carl Hempel and Paul Oppenheim. According to this model, a scientific explanation of a phenomenon E to be explained (more specifically: of a statement describing an observable phenomenon, the “explanandum”) consists of a number of laws L1, L2, … Lk and a number of initial conditions C1, C2, … Cr. The explanandum is then logically derived from the initial conditions and the laws.

One problem I see with this approach is that the feasibility of the derivation step is taken for granted. Hempel and Oppenheim seem to have assumed that you can just derive the explanandum E from the laws and initial conditions. But can you?

In physics, the laws would typically be systems of equations. These equations contain different operators or functions. I see no reason why all of these functions should be Turing-computable. Maybe they are, and a general theory of everything, if such a theory is ever found, would be well-behaved in this respect. But I can see no necessary a priori reason why the laws of nature must be confined to Turing-computable functions. What if they are not?

What does it mean for a function not to be Turing-computable? In mathematics and computer science, some functions are “Turing-computable” and others are not. If a function is not Turing-computable, then any algorithm or system of rules (formal theory) describing how to calculate values of that function for given arguments is incomplete: whatever set of rules you have for calculating such values, there will be arguments in the domain of the function for which the given rules do not produce a value (or not the correct one). To calculate the function for additional arguments, you would have to add further rules or methods of calculation.
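To make this concrete, here is a toy sketch in Python (my own illustration, not part of the DN model): the function below tries to decide whether a program halts, using a finite, hand-written rule set. The rules and example programs are made up for illustration; the point is that any such finite rule set leaves cases undecided, and extending it with more rules never makes it complete.

```python
# Toy illustration: the halting function H(program) is well-defined
# but not Turing-computable.  Any finite rule set for computing it is
# incomplete -- there are always programs it cannot classify.

RULES = [
    # (predicate on source code, verdict) -- a finite, extensible rule set
    (lambda src: "while" not in src, True),            # no loops: must halt
    (lambda src: src.strip() == "while True: pass", False),  # known non-halter
]

def halts_by_rules(src):
    """Return True/False if some rule applies, None if no rule decides."""
    for applies, verdict in RULES:
        if applies(src):
            return verdict
    return None  # undecided -- we would have to add more rules

print(halts_by_rules("x = 1 + 1"))            # True  (no loops at all)
print(halts_by_rules("while True: pass"))     # False (matches a known pattern)
print(halts_by_rules("while x > 0: x -= 1"))  # None  (outside our rules)
```

However many patterns we add to `RULES`, the same situation recurs: some programs remain outside the rules' reach.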

My suspicion that the laws of nature contain cases that are not Turing-computable comes from the fact that physicists have to “solve” equations. This is not a mechanical process of putting the equations and the initial conditions into a general algorithm and pressing the return key to start the calculation. Instead, it seems to be a creative process involving some puzzle-solving. There are books of formula collections and mathematical tricks that teach you how to calculate integrals or solve certain types of equations in certain instances. There are computer programs like Mathematica that incorporate the knowledge from such collections of formulas and methods. But these seem to be perpetually incomplete, and they are not fully automatic. No matter how powerful such programs have become, they are essentially toolboxes requiring a scientist to use the tools. There are also physical problems for which general solutions are not known and maybe simply do not exist, e.g. the “three-body problem”. Doing physics seems to involve a lot of creativity – and not just in the work of the experimentalist, but in doing the calculations as well. This is the situation one should expect if the laws of physics contained non-Turing-computable functions. I am neither a mathematician nor a physicist. I might be wrong, but I am asking the question: can it be shown that the standard theories of physics are not computable in the sense of Turing-computability?
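As an illustration of what “solving” means in practice, here is a minimal numerical sketch of the three-body problem (my own example; the masses, positions, velocities and step size are arbitrary illustrative values, with units chosen so that G = 1). Since no general closed-form solution is known, one falls back on stepwise approximation:

```python
# Three bodies under Newtonian gravity in 2D, integrated with
# semi-implicit (symplectic) Euler steps.  We do not "solve" the
# equations; we approximate a particular trajectory numerically.
G = 1.0
masses = [1.0, 1.0, 1.0]
pos = [[-1.0, 0.0], [1.0, 0.0], [0.0, 0.5]]   # x, y of each body
vel = [[0.0, -0.3], [0.0, 0.3], [0.3, 0.0]]

def accelerations(pos):
    """Pairwise gravitational accelerations on each body."""
    acc = [[0.0, 0.0] for _ in pos]
    for i in range(3):
        for j in range(3):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r3 = (dx * dx + dy * dy) ** 1.5
            acc[i][0] += G * masses[j] * dx / r3
            acc[i][1] += G * masses[j] * dy / r3
    return acc

dt = 0.001
for _ in range(500):               # integrate up to t = 0.5
    acc = accelerations(pos)
    for i in range(3):
        vel[i][0] += acc[i][0] * dt
        vel[i][1] += acc[i][1] * dt
        pos[i][0] += vel[i][0] * dt
        pos[i][1] += vel[i][1] * dt

print(pos)  # approximate positions at t = 0.5 -- no exact formula to check against
```

Notice that the step size, the integration scheme, and the stopping time are all method choices made by the person doing the calculation, not consequences of the laws themselves.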

Let us assume that some of physics (maybe the three-body problem, I don’t know) indeed involves systems of equations that are not solvable for all cases by any single algorithm. We would then have to extend the DN scheme by another set of components: not only the laws and the initial conditions, but also a number of calculation methods M1, M2, … Mq. To derive a particular explanandum E from the initial conditions and the laws, we would have to use a number of such computational methods. In a realistic interpretation of physics, the laws and initial conditions alone would suffice to determine E: we would assume that reality really might have the structure described by the laws and that, according to those laws, E would happen (in an empiricist framework, by contrast, we might simply restrict our theories to what is computable, but that would restrict our possible range of theories). We just could not think of reality as a system doing computations (and thus as being constrained to what can be done by algorithms, i.e. by calculations describable in terms of finite sets of rules).

However, in order to logically (i.e. computationally) derive E, we would need additional axioms or rules of inference or methods of computation. If the system of laws describes a Turing-computable function whose application to the initial conditions calculates E, we can leave the computational machinery implicit, as is done in the classical DN scheme; but if it describes a function that is not Turing-computable, we need those extra rules to derive E. If we want to calculate another case from the same laws, with different initial conditions, yielding a result E1, we might need another (or an extended) set of calculation methods. Solving the equations in this other case might require a different mathematical trick.

One problem here is that we can never be sure that all the mathematical methods we are using are correct. If we had a procedure to decide whether a given calculation method is correct, we could enumerate all the correct methods by generating all possible programs of a given programming language and keeping those the procedure accepts. We could thereby describe all correct methods in a finite way and add this method-generator to our theories. Any non-Turing-computable theory would then become Turing-computable. So the assumption that such a procedure exists leads to a contradiction.
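The reductio can be sketched in code. Enumerating all programs over a finite alphabet is mechanically trivial; the impossible ingredient is the hypothetical `is_correct` decider, which is stubbed out below because, by the argument above, it cannot exist:

```python
# Sketch of the reductio: programs over a finite character set can be
# enumerated mechanically.  The missing piece is a total, correct
# `is_correct` decider -- the stub below is purely HYPOTHETICAL.
from itertools import count, product

ALPHABET = "ab"  # stand-in for the character set of a programming language

def all_programs():
    """Yield every finite string over ALPHABET, shortest first."""
    for n in count(1):
        for chars in product(ALPHABET, repeat=n):
            yield "".join(chars)

def is_correct(program):
    """HYPOTHETICAL decider for 'this calculation method is correct'."""
    raise NotImplementedError("no algorithm can decide this in general")

def correct_methods():
    """If is_correct existed, this would enumerate every correct method."""
    return (p for p in all_programs() if is_correct(p))

# The enumeration itself is unproblematic:
progs = all_programs()
print([next(progs) for _ in range(6)])  # ['a', 'b', 'aa', 'ab', 'ba', 'bb']
```

Everything here is computable except the one line that would have to do the real work, which is exactly where the contradiction lives.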

But if we cannot be sure that the calculation methods we use to solve equations are correct, we have a problem with testing our theories. Nothing would prevent us from using wrong methods and thus deriving wrong predictions from our laws. When a test fails, we can only rule out the complete system consisting of the laws and the calculation methods used for the particular case; we cannot rule out the laws themselves (at least not in such a simple way). In some instances, we also might simply not know how to solve the equations. We could resort to methods of approximation or the like, but these are not guaranteed to yield correct solutions. We might know the general theory of everything (although we could not really test it), but it would be of limited use. In practice, we would still use a large number of partial, more or less accurate models.

There is also the possibility that there are laws that are Turing-computable but not computable in practice because their computational complexity is too high, i.e. doing the calculations is practically infeasible since it would take too much time and memory. For practical purposes, we would have to treat such cases just like the non-Turing-computable ones. There is no reason why nature should not contain mechanisms that cannot be calculated by any efficient algorithm. Some people seem to think of physics as a calculation process, as if we were living inside a giant computer simulation, but that model might be completely wrong. There might be processes that just happen in fractions of a second although calculating them might be next to impossible. We might then have shortcuts for special cases (which would also have to be added as M1, M2, … Mq to our derivational model of explanation), but there would be no general efficient method.
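A back-of-the-envelope example of this gap (my own illustration, using the standard picture of quantum systems): exactly simulating n interacting two-state quantum systems on a classical machine requires storing 2^n complex amplitudes. Nature “runs” such systems in fractions of a second; a classical computer quickly runs out of memory. The 16 bytes per amplitude below is an assumption (two 64-bit floats):

```python
# Memory needed for an exact state-vector simulation of n interacting
# two-state quantum systems: 2**n complex amplitudes.
BYTES_PER_AMPLITUDE = 16  # assumption: two 64-bit floats per amplitude

def state_vector_bytes(n):
    """Bytes needed to store the full quantum state of n two-state systems."""
    return (2 ** n) * BYTES_PER_AMPLITUDE

for n in (10, 30, 50, 100):
    print(n, state_vector_bytes(n))
# 50 systems already need roughly 18 petabytes;
# 100 systems exceed any conceivable classical machine.
```

Such a process would be Turing-computable in principle, yet for all practical purposes it behaves like the non-computable cases discussed above.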

It seems to me that, due to such problems of computational complexity or even lack of Turing-computability, physics is less deductive than the classical models in the philosophy of science may suggest. It requires creativity. It might be that all physical descriptions are incomplete in principle and that physicists, just like people working in other disciplines, will always have to work with limited, incomplete and partially incorrect models. Think of meteorology to understand what I mean.

One might ask then how important it is to find the general theory of everything. Maybe it would take tremendous resources but would not be that useful in the end. Maybe testing such a theory would be impossible because we cannot do the calculations or we cannot be sure that our calculations are correct.[1]

However, this would also mean that we will not see the end of physics. If it should turn out to be like this, physicists should not be too unhappy about it: they will not be out of a job any time soon.

(The picture is from http://en.wikipedia.org/wiki/File:CMS_Higgs-event.jpg.)[2]

[1] I just want to hint at the interesting philosophical question of whether positivists or logical empiricists would view a physical theory containing non-Turing-computable functions as a metaphysical entity because of its limited testability. Would they require restricting their theories to what is Turing-computable? (One condition in Hempel’s original model is that laws must be empirically testable; laws involving non-computable functions would only be testable to a limited extent, maybe by restricting them to special cases.) Would they demand the restriction of science to what can be described in terms of finite formal theories or algorithms, no matter how reality (a metaphysical entity from their point of view) might actually be structured?

[2] The picture is a visualization of simulated data of an event in a particle accelerator. I am aware of the basically statistical nature of the laws involved here and of the fact that scientists and philosophers of science, including Hempel himself, have created models of explanation involving statistical laws that are different from the DN scheme. However, that goes beyond the scope of this article. For the sake of this article, the picture is only intended to symbolize physics on the lowest level of reality currently accessible to us.