More on Exhibited Conservation in QM

In another note I argued that conservation laws single out the 
predictions of quantum mechanics in the sense that QM behavior 
minimizes the cumulative divergence in the net measured values of 
"conserved" quantities.  Admittedly that article dealt explicitly 
only with a very simple special case, namely an idealized two-particle
EPRB experiment.  The results are consistent with - but certainly 
don't prove - the more sweeping hypothesis that quantum mechanics *in
general* predicts joint probabilities for the results of measurements
such that the net overall divergence of our classical "conserved 
quantities" is minimized when evaluated over all possible measurements.
On the other hand, I think this hypothesis has some plausibility on
general grounds that extend beyond the simple EPRB context.
In a way it can be regarded as an extension (and quantification) of 
the traditional "correspondence principle", although it shifts the 
focus from the density of the wave function to the distribution of 
the actual results of measurements and interactions.  (This shift has
some inherent advantages, considering the dubious ontological status 
of the wave function.)

It's true that the density matrix in QM is not conserved, but my
proposal doesn't identify the relevant "quantity" with a density matrix.
It begins with a classical "conserved" quantity, like angular momentum,
which on the level of individual quantum interactions is not strictly 
conserved (in the sense that different parts of an "entangled" system 
may interact with "non-parallel" measurements, so the net measured 
quantities don't fully cancel).  QM predicts a certain distribution 
for the results of measurements (over identically prepared systems) 
for whatever measurements we might choose to make.  Thus, QM gives 
predictions over the entire "space" of possible interactions that 
the system could encounter.
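
In the idealized EPRB case this space of predictions is easy to
exhibit explicitly.  The following minimal Python sketch (the function
name and the parameterization by detector angles are just mine for
illustration) tabulates the standard QM joint probabilities for a
spin-1/2 singlet pair.  Only the relative detector angle matters, and
the two measured components cancel exactly only when the detectors
are aligned.

    import numpy as np

    def singlet_joint_probs(alpha, beta):
        """QM joint outcome probabilities for spin measurements on a
        spin-1/2 singlet pair, with detector 1 set at angle alpha and
        detector 2 at angle beta (radians, in a common plane).  The
        outcomes A, B are +1 or -1 in units of hbar/2."""
        theta = alpha - beta                    # only the relative angle matters
        p_same = 0.5 * np.sin(theta / 2.0)**2   # P(+,+) = P(-,-)
        p_diff = 0.5 * np.cos(theta / 2.0)**2   # P(+,-) = P(-,+)
        return {(+1, +1): p_same, (-1, -1): p_same,
                (+1, -1): p_diff, (-1, +1): p_diff}

    # Aligned detectors: the two measured spins always cancel exactly.
    print(singlet_joint_probs(0.0, 0.0))
    # Detectors 60 degrees apart: cancellation is no longer guaranteed.
    probs = singlet_joint_probs(np.pi / 3.0, 0.0)
    print(sum(a * b * p for (a, b), p in probs.items()))   # -cos(60 deg) = -0.5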

The hypothesis (confirmed in the case of simple EPRB situations) is
that if we evaluate the expected net "residual" un-balanced sum of
the predicted results of measurements of the particular quantity 
(e.g., momentum) uniformly over the space of possible interactions, 
the QM predictions (among all possible joint distributions) yield 
the minimum possible net un-balance.
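
To illustrate the kind of quantity involved, here is a minimal Python
sketch.  I should stress that the particular figure of merit, the
straight-line "classical" comparison, and the function names are my
own choices for this illustration, not a reproduction of the
computation in the earlier note.  The sketch treats each measured
component as a vector of magnitude hbar/2 along its detector axis and
averages the expected squared magnitude of the vector sum uniformly
over the relative angle; on that measure the QM correlation -cos(theta)
leaves a smaller residual than the straight-line alternative.

    import numpy as np

    def mean_sq_net_spin(correlation, n=200001):
        """Average, over relative detector angle theta in [0, pi], of the
        expected squared magnitude of the vector sum of the two measured
        spin components, in units of (hbar/2)^2.  For outcomes A, B = +/-1
        this expectation is 2 + 2*C(theta)*cos(theta), with C = E[A*B]."""
        theta = np.linspace(0.0, np.pi, n)
        return np.mean(2.0 + 2.0 * correlation(theta) * np.cos(theta))

    qm_corr     = lambda t: -np.cos(t)               # QM singlet correlation
    linear_corr = lambda t: -1.0 + 2.0 * t / np.pi   # straight-line "classical" correlation

    print(mean_sq_net_spin(qm_corr))       # ~1.00
    print(mean_sq_net_spin(linear_corr))   # ~1.19, a larger residual imbalance

Of course, beating one particular alternative doesn't establish the
variational claim; it merely exhibits the sort of angle-averaged
imbalance that the hypothesis asserts the QM distribution minimizes.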

Admittedly there are a number of ambiguities in this statement.
For example, it's easy to say what "uniformly" means for the space
of possible measurements in a simple EPRB experiment (where all
measurements are simple angles about a common axis), but in more
complicated situations it may be less clear how to define
"equi-probable measurements".  Also, one can imagine counter-examples
to the claim of minimization, based on degenerate laws (e.g., every
measurement of every kind always gives a null result), so in order
to claim the QM relations are variational minima we would need to
stipulate some non-degenerate structure.

On a trivial level, one could challenge even the simple EPRB result 
by saying that if nature really wanted to minimize the net un-balanced 
spin it would simply have created all particles without spin, so 
they'd glide through a Stern-Gerlach apparatus undisturbed.  On the 
other hand, this is sort of in the category of "who ordered that?", 
i.e., QM itself doesn't predict all the structure on which it 
operates, including the masses of particles, etc.  True, there are 
differing opinions on whether (relativistic) QM "predicts" spin-1/2
particles, but it certainly doesn't predict the entire Standard Model.
It goes without saying that our theories rely on quite a bit of 
seemingly arbitrary structure.

It might be argued that this approach can give, at best, only a
partial account of the state of an entangled system, but this isn't
obvious to me.  The constraint of minimizing the divergence of 
conserved quantities *over the entire space of possible measurements/
interactions* is actually quite strong.  Remember this is the space 
of all possible *joint* measurements.  Admittedly it might be 
exceedingly difficult to evaluate in complicated situations, but it
could well be a sufficient constraint to fully define the system.

It is sometimes said that the existence of entanglement in quantum
mechanics is "explained" by the fact that a function f(x,y) of two 
variables need not be the product g(x)h(y) of two functions of the 
individual variables separately.  One could debate the extent to which
this really constitutes an explanation for why entanglement exists, 
or merely a description or characterization of entanglement.  I see 
it more as the latter, but then I'm not a good judge of explanations 
for "why X exists" for any X.  Even if it could be shown that the laws
of QM can be derived in the way I've suggested (with some suitable 
definition of uniformity over the space of all possible measurements,
etc), I'd still be reluctant to claim that it explains *why* 
entanglement exists.  On the other hand, since it implies that 
entanglement serves to minimize certain functions related to classical
conserved quantities, it could be seen as having some kind of 
explanatory value.
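
Incidentally, the factorization point above is easy to check in the
smallest case.  A function f(x,y) of two two-valued variables factors
as g(x)h(y) precisely when its 2x2 matrix of values has rank 1, and
the singlet amplitudes do not (a minimal sketch in Python):

    import numpy as np

    # Amplitudes f(x,y) of the singlet state in the z-basis, with x and y
    # each ranging over {up, down}:  f = (|ud> - |du>) / sqrt(2).
    f = np.array([[0.0,               1.0 / np.sqrt(2)],
                  [-1.0 / np.sqrt(2), 0.0             ]])

    # A factorization f(x,y) = g(x)h(y) would make this matrix an outer
    # product, i.e. rank 1.  It has rank 2, so no such factorization exists.
    print(np.linalg.matrix_rank(f))   # 2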

Return to MathPages Main Menu