Conclusion

Despite the spectacular success of Einstein's theory of relativity, it is sometimes said that tests of Bell's inequalities and similar quantum phenomena have demonstrated that nature is, on a fundamental level, incompatible with the local realism on which relativity is based. However, as we saw in Section 9.8, Bell's inequalities apply only to strictly non-deterministic theories, so, as Bell himself noted, they do not preclude "local realism" for a fully deterministic theory. Needless to say, the entire framework of classical relativity, with its unified spacetime and partial ordering of events, is coherent only from a strictly deterministic point of view, so Bell's inequalities do not strictly apply. Admittedly the phenomena of quantum mechanics are incompatible with at least some aspect of our classical (metrical) idea of locality, but this should not be surprising, because, as I've tried to show in the preceding sections, our metrical idea of locality is already inconsistent with the quasi-metrical structure of spacetime itself, which forms the basis of modern relativity.
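To make this incompatibility concrete, consider a minimal worked example (the particular events are my own illustrative choice, in units with c = 1). The invariant interval of special relativity is

\[
  ds^2 \;=\; dt^2 - dx^2 ,
\]

and for the events $A=(t,x)=(0,0)$, $B=(1,1)$, and $C=(2,0)$ we have

\[
  s_{AB}^2 = 1 - 1 = 0, \qquad
  s_{BC}^2 = 1 - 1 = 0, \qquad
  s_{AC}^2 = 4 - 0 = 4 .
\]

A is at null ("zero") interval from B, and B is at null interval from C, yet A and C are separated by a finite timelike interval. In a genuine metric space, zero distance is a transitive equivalence relation; in Minkowski spacetime it is not, which is the sense in which the structure is only quasi-metrical.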

It's tempting to conclude that while modern relativity initiated a revolution in our thinking about the (quasi) metrical structure of spacetime, with its singular null rays and non-transitive equivalences, the concomitant revolution in our thinking about the topology of spacetime has lagged behind. Although we long ago decided that the physically measurable intervals between the events of spacetime cannot be accurately represented as the distances between the points of a Euclidean metric space, we continue to assume that the topology of the set of spacetime events is (locally) Euclidean. This incongruous state of affairs may be due in part to the historical circumstance that Einstein's special relativity was originally viewed as simply an elegant interpretation of the existing Lorentz Ether Theory. According to Lorentz, spacetime really was a Euclidean manifold with the metric and topology of E4, on which was superimposed a set of functions representing the operational temporal and spatial components of intervals; these operational components were, however, regarded as mere appearances, overlaying the "actual" components that exhibited the E4 topology.

It was possible to conceive of spacetime in this way because the singularities along null directions in the mapping between the "real" and "operational" components, implied by the Minkowski line element, were not necessarily regarded as physical: the validity of Lorentz invariance was being established "one order at a time", and it was not clear that it would hold to all orders. The situation was somewhat akin to the view held by some people today that, although the field equations of general relativity predict a genuine singularity at the center of a black hole, the laws may somehow break down, or some other unknown effect take over, before the singularity is reached. Around 1905 one could think similar things about the singularity implied by the exact (all-orders) Lorentz-Fitzgerald mapping between Lorentz's "real spacetime" and his operational electromagnetic spacetime, i.e., one could imagine that Lorentz invariance might break down at some point short of the singularities. On that basis it made sense to go on using the topology of E4, and this is why the original Euclidean topology of Lorentz's absolute spacetime is still lurking just beneath the surface of modern relativity.
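To see why the singularity could remain invisible "one order at a time", it may help to write out the factor γ that appears in the Lorentz-Fitzgerald mapping together with its power series (a standard expansion, included here only as an illustration):

\[
  \gamma(v) \;=\; \left(1 - \frac{v^2}{c^2}\right)^{-1/2}
            \;=\; 1 + \frac{1}{2}\frac{v^2}{c^2} + \frac{3}{8}\frac{v^4}{c^4}
                    + \frac{5}{16}\frac{v^6}{c^6} + \cdots
\]

Every finite truncation of this series is perfectly regular at v = c; only the full, all-orders function diverges there, i.e., on the null directions. So as long as Lorentz invariance had been confirmed only to some finite order in v/c, one could consistently imagine that the exact mapping was regular after all, and that the E4 topology was therefore unthreatened.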

However, if we judge that Lorentz invariance holds strictly to all orders (as Einstein boldly asserted in 1905), that the light-like singularities of the Lorentz-Fitzgerald mapping are genuine physical singularities, albeit in some unfamiliar non-transitive sense, and if we thoroughly disavow Lorentz's underlying "real spacetime" (which plays no role in the theory) and treat the "operational spacetime" itself as the primary ontological entity, then there seems reason to question whether the assumption of E4 topology is still suitable. This is particularly true if a topology more in accord with Lorentz invariance would also help to clarify some of the puzzling phenomena of quantum mechanics.

Of course, it's entirely possible that the theory of relativity is simply wrong on some fundamental level where quantum mechanics "takes over". In fact, this is probably the majority view among physicists today, who hope that eventually a theory of "quantum gravity" will be found which will explain precisely how and in what circumstances the theory of relativity fails to accurately represent the operations of nature, while at the same time explaining why it seems to work as well as it does. However, it may be worthwhile to remember previous periods in the history of physics when the principle of relativity was judged to be fundamentally inadequate to account for the observed phenomena. Recall Ptolemy's arguments against a moving Earth, or the 19th century belief that electromagnetism necessitated a luminiferous ether, or the early-20th century view that Einstein's special relativity could never be reconciled with gravity. In each case a truly satisfactory resolution of the difficulties was eventually achieved not by discarding relativity, but by re-interpreting and extending it, thereby gaining a fuller understanding of its logical content and consequences.
