9.8 The Gestalt of Determinism

Determinism is commonly defined as the proposition that each event is the necessary and unique consequence of prior events. This implies that events transpire in a temporally ordered sequence, and that a wave of implication somehow flows along this sequence, fixing or deciding each successive event based on the preceding events, in accord with some definite rule (which may or may not be known to us). This description closely parallels the beginning of Laplace's famous remarks on the subject:

We ought then to regard the present state of the universe as the effect of the anterior state and as the cause of the one that is to follow...

However, at this point Laplace introduces a gestalt shift (like the sudden realignment of meaning that Donne often placed at the end of his "metaphysical" poems). After describing the temporally ordered flow of events, he notes a profound shift in the perception of "a sufficiently vast intelligence":

...nothing would be uncertain, and the future, as the past, would be present to its eyes.

This shows how we initially conceive of determinism as a temporally ordered chain of implication, but when carried to its logical conclusion we are led inevitably to the view of an atemporal "block universe" that simply exists. At some point we experience a gestalt shift from a universe that is occurring to a universe that simply is. The concepts of time and causality in such a universe can be (at most) psychological interpretations, lacking any active physical significance. In order for time and causality to be genuinely active, a degree of freedom is necessary, because without freedom we immediately regress to an atemporal block universe, in which there can be no absolute direction of implication.

Of course, it may well be that certain directions in a deterministic block universe are preferred based on the simplicity with which they can be described and conceptually grasped. For example, it may be possible to completely specify the universe based on the contents of a particular cross-sectional slice, together with a simple set of fixed rules for recursively inferring the contents of neighboring slices in a particular sequence, whereas other sequences may require a vastly more complicated "rule". However, in a deterministic universe this chain of implication is merely a descriptive convenience, and cannot be regarded as the effective mechanism by which the events "come into being".
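
As a concrete toy model of this kind of recursive specification (my own illustrative sketch, not anything drawn from Laplace), consider an elementary cellular automaton: one "slice" is a row of cells, and a simple fixed rule determines every later row from it, while no comparably simple rule runs the implication in the reverse direction. The choice of Rule 30 and the fixed zero boundary cells are arbitrary assumptions of this example.

```python
# One "slice" (a row of cells) plus a simple fixed rule determines every
# subsequent slice. Rule 30 is chosen purely for illustration.

def step(cells, rule=30):
    """Apply an elementary cellular-automaton rule to one slice,
    returning the next slice (boundary cells are treated as 0)."""
    n = len(cells)
    nxt = [0] * n
    for i in range(n):
        left = cells[i - 1] if i > 0 else 0
        right = cells[i + 1] if i < n - 1 else 0
        neighborhood = (left << 2) | (cells[i] << 1) | right
        nxt[i] = (rule >> neighborhood) & 1
    return nxt

# One initial slice, recursively implying all later slices.
slice_ = [0] * 31
slice_[15] = 1
for _ in range(15):
    print("".join(".#"[c] for c in slice_))
    slice_ = step(slice_)
```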

The static view is fully consistent not only with the Newtonian universe that Laplace imagined, but also with the theory of relativity, in which the worldlines of objects (through spacetime) can be considered to be already existent in their entirety. (Indeed this is a necessary interpretation if we are to incorporate worldlines actually crossing event horizons.) In this sense relativity is a purely classical theory. On the other hand, quantum mechanics is widely regarded as decidedly non-deterministic. Indeed, we saw in Section 9.6 the famous theorem of von Neumann purporting to rule out determinism (in the form of hidden variables) in the realm of quantum mechanics. However, as Einstein observed:

Whether objective facts are subject to causality is a question whose answer necessarily depends on the theory from which we start. Therefore, it will never be possible to decide whether the world is causal or not.

Note that the word "causal" is being used here as a synonym for deterministic, since Einstein had in mind strict causality, with no free choices, as summarized in his famous remark that "God does not play dice with the universe". We've seen that von Neumann's proof was based on a premise which is effectively equivalent to what he was trying to prove, nicely illustrating Einstein's point that the answer depends on the theory from which we start. In other words, an assertion about what is recursively possible can be meaningful only if we place some constraint on the complexity of the allowable recursive "algorithm".

For example, the nth state vector of a system may be given by digits kn+1 through k(n+1) of π. This would be a perfectly deterministic system, but the relations between successive states would be extremely obscure. In fact, assuming the digits of the two transcendental numbers π and e are normal (as is widely believed, though not proven), any finite string of decimal digits occurs infinitely often in their decimal expansions, and each string occurs with the same frequency in both expansions. (It's been noted that, assuming normality, the digits of π would make an inexhaustible source of high-quality "random" number sequences, higher quality than anything we can get out of conventional pseudo-random number generators.) Therefore, given any finite number of digits (observations), we could never even decide whether the operative "algorithm" was π or e, nor whether we had correctly identified the relevant occurrence in the expansion. Thus we can easily imagine a perfectly deterministic universe that is also utterly unpredictable. (Interestingly, the recent innovation that enables computation of the nth hexadecimal digit of π, with much less work than is required to compute the first n digits, implies that we could present someone with a sequence of digits and challenge them to determine where it first occurs in the hexadecimal expansion of π, and it may be practically impossible for them to find the answer.)
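
The digit-extraction method alluded to above is the Bailey-Borwein-Plouffe (BBP) formula. The sketch below is my own illustration, not part of the original text; the function names and the use of double-precision floats (which limits the reliable output to roughly eight hexadecimal digits per call) are assumptions of this example.

```python
# Sketch of BBP digit extraction:
#   pi = sum_{k>=0} 16^(-k) * (4/(8k+1) - 2/(8k+4) - 1/(8k+5) - 1/(8k+6)).
# Multiplying by 16^n and keeping only fractional parts yields the hex
# digits of pi starting at position n+1 without computing the earlier ones.

def _bbp_sum(j, n):
    """Fractional part of sum_{k>=0} 16^(n-k) / (8k + j)."""
    s = 0.0
    for k in range(n + 1):
        s = (s + pow(16, n - k, 8 * k + j) / (8 * k + j)) % 1.0
    k = n + 1
    while True:
        term = 16.0 ** (n - k) / (8 * k + j)
        if term < 1e-17:
            break
        s = (s + term) % 1.0
        k += 1
    return s

def pi_hex_digits(n, count=8):
    """Hex digits of pi starting n places after the hexadecimal point.
    Double precision limits the reliable output to about 8 digits."""
    x = (4 * _bbp_sum(1, n) - 2 * _bbp_sum(4, n)
         - _bbp_sum(5, n) - _bbp_sum(6, n)) % 1.0
    out = []
    for _ in range(count):
        x *= 16
        d = int(x)
        out.append("0123456789abcdef"[d])
        x -= d
    return "".join(out)

print(pi_hex_digits(0))      # 243f6a88... (pi = 3.243f6a88... in hex)
print(pi_hex_digits(1000))   # digits deep in the expansion, computed directly
```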

Even worse, there need be no simple rule of any kind relating the events of a deterministic universe. This highlights the important distinction between determinism and the concepts of predictability and complexity. There is no requirement for a deterministic universe to be predictable, or for its complexity to be limited in any way. Thus, we can never prove that any finite set of observations could only have been produced non-deterministically. In a sense, this is trivially true, because a finite Turing machine can always be written to generate any given finite string, although the algorithm necessary to generate a very irregular string may be nearly as long as the string itself. Since determinism is inherently undecidable, we may try to define a more tractable notion, such as predictability, in terms of the complexity manifest in our observations. This could be quantified as the length of the shortest Turing machine required to reproduce our observations, and we might imagine that in a completely random universe the size of the required algorithm would grow in proportion to the number of observations (as we are forced to include ad hoc modifications to the algorithm to account for each new observation). On this basis it might seem that we could eventually assert with certainty that the universe is inherently unpredictable (on some level of experience), i.e., that the length of the shortest Turing machine required to duplicate the results grows in proportion to the number of observations. In a sense, this is what the "no hidden variables" theorems try to do.
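
The length of the shortest Turing machine that reproduces a string (its Kolmogorov complexity) is not computable, but a general-purpose compressor gives a crude stand-in for the intuition above: for highly regular observations the description stays short as data accumulate, while for random observations it grows roughly in proportion to the number of observations. The use of zlib and os.urandom below is purely an illustrative assumption, not a measurement of complexity in the strict sense.

```python
# Crude proxy for "length of the shortest Turing machine": compressed size.
import os
import zlib

def compressed_size(data: bytes) -> int:
    return len(zlib.compress(data, 9))

for n in (1_000, 10_000, 100_000):
    regular = bytes(i % 7 for i in range(n))   # highly regular "observations"
    random_ = os.urandom(n)                    # incompressible "observations"
    print(f"n={n:>7}: regular -> {compressed_size(regular):>6} bytes, "
          f"random -> {compressed_size(random_):>6} bytes")
```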

However, we can never reach such a conclusion, as shown by Chaitin's proof that there exists an integer k such that it's impossible to prove that the complexity of any specific string of binary bits exceeds k (where "complexity" is defined as the length of the smallest Turing program that generates the string). This is true in spite of the fact that "almost all" strings have complexity greater than k. Therefore, even if we (sensibly) restrict our meaningful class of Turing machines to those of complexity less than a fixed number k (rather than allowing the complexity of our model to increase in proportion to the number of observations), it's still impossible for any finite set of observations (even if we continue gathering data forever) to be provably inconsistent with a Turing machine of complexity less than k. (Naturally we must be careful not to confuse the question of whether "there exist" sequences of complexity greater than k with the question of whether we can prove that any particular sequence has complexity greater than k.)

