    Why Einstein Could Not Solve The EPR Paradox Though He Could Have
    By Sascha Vongehr | August 11th 2011 12:27 AM

    Usually, the Einstein-Podolsky-Rosen paradox is presented as if it potentially conflicts with the theory of relativity. This is because the correlation between Alice’s and Bob’s measurements seems to travel at superluminal speed in the one real world. The solution [1] of the EPR paradox shows that this view is upside-down, which is the reason it took so long to solve the paradox satisfactorily.
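
    To fix what “correlation between Alice’s and Bob’s measurements” means here, it helps to write down the textbook spin-singlet pair; this is just the standard EPR–Bohm setup, not anything specific to the model in [1]:

    |\psi\rangle = \tfrac{1}{\sqrt{2}}\left( |\uparrow\rangle_A |\downarrow\rangle_B - |\downarrow\rangle_A |\uparrow\rangle_B \right)

    If Alice finds “up” along some axis, Bob is certain to find “down” along the same axis, however far apart and however nearly simultaneous the two measurement events are. If one insists that exactly one of the outcomes becomes actual in the one real world, this perfect anticorrelation looks as if it had to be enforced faster than light.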


    Scientists and philosophers were trying to understand the quantum mechanics involved without bothering with relativity, precisely because the problem seems to conflict with relativity. Approached that way, however, the actual solution looks suspect:


    A non-relativistic universe would need to quantum split everywhere into many different ones all the time (every moment, infinitely many universes, infinitely often). Why the hell should it do so? What makes it split? How could this ever later be turned into a relativistic, let alone general relativistic, model where the splits would have to have a certain shape through curved space-time? For all these reasons and more, the many-worlds concept is often ridiculed.



    What has been missed consistently by the community interested in the philosophy of physics, partially due to its obsession with "hyperspace foliations" (~ slices of the real world out there), is that special relativity already deconstructs the world into a collection of different observers’ past light cones (~histories, memory contents, minds) in a sort of ‘temporal modal realism’.


    Everett relativity in the EPR setup does not conflict with special relativity; on the contrary, it is suspect without special relativity and natural with it. Branching only needs to occur at the observation events. The funny thing is: this has basically been obvious from special relativity since 1905. If you understand special relativity, there are only two options:


    1) You either believe in one determined “block universe” where everything is predetermined, which is ridiculous unless you think that the very foundation of nature and the whole universe is there to let you, on some Tuesday afternoon, throw a coin and get heads instead of tails,

    2) or you know that totality must in some way ultimately be describable by all the possible past light cones, i.e. once you are on a truly fundamental level, the you that throws tails is exactly as real as the one that gets heads, and the difference between those two propagates at most with the speed of light through the model that describes spatial relations (the universe).


    If Einstein had been less of a direct realist, he could have been twice the genius he already was and come up with Everett relativity long before Everett. I know, this is misleading; Einstein’s, Feynman’s, and many other physicists’ successes are due to their having originally been direct realists*, because that makes you focus on “real stuff” like elevators falling in gravity fields.


    Nevertheless, my point is that taking special relativity seriously renders the many-worlds concept natural. For example, the question of where the splits of the world branchings are supposed to be located becomes trivial: along the light cones, of course. If you know special relativity, you then also know that the “split” is thereby of zero eigen-length and happens in a sense (I said: in a sense!) instantaneously.
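
    A quick way to see the “zero eigen-length” remark in formulas; this is just standard special relativity, nothing specific to the model in [1]:

    ds^2 = c^2\,dt^2 - dx^2 - dy^2 - dz^2 = 0 \quad \text{along a light cone,}

    so a branching front that spreads along the light cone of the observation event accumulates zero proper length and zero proper time. It still needs coordinate time t = r/c to reach a detector at distance r, which is exactly why nothing superluminal is involved.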


    Quantum decoherence “dislocates” [2] via interactions and therefore at most at the speed of light. The model that resolves the EPR paradox therefore works without superluminal velocities. The last step that turns the model quantum physical is a local branching that destroys the very grounds on which absolute actualization makes sense. Einstein locality stays; realism is modified.


    Similar conclusions have been drawn before. The Heisenberg representation of the many-worlds interpretation is local [3], but there are no models. The simplicity of the new model, and the fact that a single, local modification turns it into quantum physics while destroying its realism, shows that not every many-worlds model is a quantum world and that quantum physics is not synonymous with multiverses or modal realism.

    This further corroborates that the many-worlds aspect of the universe, which is of course philosophically self-evident, should be understood as a relativistic rather than a quantum physical phenomenon!

    The gist is that relativity indicates modal realism and that quantum = necessarily modal! The main point is not that the world is local. The model is local! I do not care much about locality, because only if space is viewed as being ‘out there’ does it even make sense to defend its locality. Once the world is not out there anyway (it is more in our heads, in a sense, with much poetic license), it does not matter whether its consistent description involves locality or non-locality, or whether the world is best consistently described as made out of green cheese!

    ---------------------------------------

    * This is an important aspect that naturally also hinders physicists from progressing. Without direct realism, you are a lousy physicist right from the start, but if you cannot shake off direct realism along the way, you are going to be a lousy physicist in the end.


    [1] S. Vongehr: "Many Worlds Model resolving the Einstein Podolsky Rosen paradox via a Direct Realism to Modal Realism Transition that preserves Einstein Locality." arXiv:1108.1674v1 [quant-ph] (2011). UPDATE: This reference is the first paper on the possibility of such models, but the models have now actually been constructed and are much better explained in S. Vongehr: “Against Absolute Actualization: Three "Non-Localities" and Failure of Model-External Randomness made easy with Many-Worlds Models including Stronger Bell-Violation and Correct QM Probability” http://arxiv.org/abs/1311.5419 (2013)


    [2] H. Dieter Zeh: “Quantum discreteness is an illusion.” arXiv:0809.2904 [quant-ph] (2008)


    [3] David Deutsch, Patrick Hayden: “Information Flow in Entangled Quantum Systems.” Proc. R. Soc. London A 456, 1759-1774 (2000)

    Comments

    Steve Davis
    "Without direct realism, you are a lousy physicist right from the start, but if you cannot shake off direct realism along the way, you are going to be a lousy physicist in the end."
    Zen physics?!
    I like it!

    Roger
    Brian Greene's new book says that the many worlds interpretation does not work. There is a long history of other physicists rejecting it as well. Can you explain where they go wrong?

    vongehr
    The many worlds interpretation (MWI) with branch counting has several potentially fatal problems, one being the normalization of infinities (the so-called cosmological measure problem). The final aim consistent with what is philosophically indicated is a many-minds description and a rejection of objective (rather than subjective) weights (like world counts). The multiverse structure being real (modal realism) is still realist. I am sticking here temporarily with MWI because it is sufficient for the purpose (here, conserving locality while having modal realism be responsible for the quantum nature) and because people are just not ready for many minds (many worlds seems to be already too weird for most, though I do not understand why).

    Think of my use of MWI as a many-minds diet version. On a blog with a largely lay audience, counting stuff is OK, but probabilities as rational expectations ... no way one could reach anybody.

    Roger, you said "Brian Greene's new book says that the many worlds interpretation does not work." I assume you are referring to 'The Hidden Reality.' I was wondering where in the book you see Greene as rejecting MWI. I read it as well and recall him seeming supportive of the interpretation.

    Thanks
    elo

    My 64 million $$ question is the following: does this Finally put to rest the question whether or not decoherence solves the collapse of the wavefunction paradox? 10 yrs ago, Stephen Adler argued, in opposition to the community, that decoherence does Not. At first sight I like Sascha's idea, just wondering if it will resolve this paradox to the satisfaction of everyone, or Only within the context of many worlds?

    vongehr
    I think that decoherence is by now so experimentally and theoretically established that even if quantum mechanics should get non-linear corrections (due to, say, gravity or consciousness in some sense), those corrections would come in at another scale and not replace decoherence but at most add to it.
    Now where do we get those 64 million in case we put it to rest, I wonder.

    Given your references to Deutsch and Hayden's paper, I was wondering if you had seen Deutsch's new paper in this regard:

    http://arxiv.org/abs/1109.6223

    And what your initial thoughts might be. Thanks.

    vongehr
    Now I have, thanks. Initial thoughts? Perhaps I am missing something profound, but this back-and-forth paper writing about whether something is non-local (Wallace) or rather global (Deutsch), based on operator sophistry in standard QM, puts me to sleep. Likely the Oxford crowd is correct in defending linear QM and it will survive unification with gravity, but if not, their arguments will all need to be looked at again; in other words, it isn't clear whether their assumptions are correct as long as we do not know the nature of gravity. I try to stay with arguments that hold up regardless.