Bohm and Beability

I write this being of sound mind and judgement … etc., etc., …

At this stage of life a dude like me can enter a debate about the foundations of quantum mechanics with little trepidation. There is a chance someone will put forward proposals that are just too technically difficult to understand, but there is a better chance of getting something useful out of the debate, or at least some amusement and hilarity. The trick is to be a little detached and open-minded while retaining a decent dose of scepticism.

Inescapable Non-locality

Recently I was watching a lecture by Sheldon Goldstein (a venerable statesman of physics) who was speaking about John Stewart Bell’s contributions to the foundations of quantum mechanics. Bell was, like Einstein, sceptical of the conventional interpretations, which either gave too big a role to “observers” and the “measurement process” or swept such issues aside by appealing to Many Worlds or some other fanciful untestable hypothesis.

What Bell ended up providing was a theory for a class of experiments that could prove the physics of our universe is fundamentally non-local. Bell was actually after experimental verification that we cannot have local hidden variable theories, hidden variables being quantities in physics that we cannot observe. Bell hated the idea of unobservable physics, and Einstein would have agreed (me too, but that’s irrelevant). The famous “Bell’s Inequalities” are a set of relations referring to experimental results that will give clearly different numbers depending on whether our universe’s physics is inherently non-local or classical-with-hidden-variables. The hidden variables are used to model the weirdness of quantum mechanics.
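For reference, the most commonly tested member of the family is the CHSH form of the inequality. Writing E(a,b) for the measured correlation between detector settings a and b, any local hidden variables theory must obey the first bound, while quantum mechanics can reach the second (this is standard textbook material, not anything specific to Goldstein’s lecture):

```latex
S = E(a,b) + E(a,b') + E(a',b) - E(a',b'),
\qquad
|S| \le 2 \;\;\text{(local hidden variables)},
\qquad
|S| \le 2\sqrt{2} \;\;\text{(quantum mechanics)}.
```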

Hidden variable theories attempt to use classical physics, and possibly strict locality (no signals going faster than light, and even no propagation of information faster than light), to explain fundamental physical processes. David Bohm came up with the most complete ideas for hidden variables theories, but his, and all subsequent attempts, had some very strange features that seemed always to be needed in order to explain the results of the particular types of experiments that John Bell had devised. Bohm’s theory uses a feature called a Pilot Wave, an information-carrying wave that physicists can only indirectly observe via its influence on experimental outcomes. We only get to see the statistics and probabilities induced by Bohm’s pilot waves. They spread out everywhere, and thus link space-like separated regions of the universe that no signal travelling at or below light speed could ever connect. This has the character of non-locality but without requiring relativity-violating information signalling faster than light, so the hope was that one could use pilot waves to get a local hidden variables theory that would agree with experiments.
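To make “pilot wave” a little more concrete, here is the standard non-relativistic statement of Bohmian mechanics (simplified to spinless particles): the wave function evolves by the usual Schrödinger equation, and the actual particle positions Q_k are guided by it:

```latex
i\hbar\,\frac{\partial \psi}{\partial t}
  = -\sum_k \frac{\hbar^2}{2 m_k} \nabla_k^2 \psi + V \psi,
\qquad
\frac{d Q_k}{d t}
  = \frac{\hbar}{m_k}\,\operatorname{Im}\!\left(\frac{\nabla_k \psi}{\psi}\right)\Bigg|_{(Q_1,\dots,Q_N)}.
```

Note that the velocity of each particle depends on the instantaneous positions of all the others, which is exactly where the non-locality sits.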

Goldstein tells us that Bell set out to show it was impossible to have a local hidden variables theory, but he ended up showing you cannot have any local theory at all: every adequate theory has to have some non-locality. Or rather, what the Bell Inequalities ended up proving (via numerous repeated experiments measuring conformance to the inequalities) was that the physics of our universe can never be local; whatever theory one devises to model reality, it has to be non-local. So it has to have some way for information to get from one region to another faster than light.

That is what quantum mechanics assumes, but without giving us any mechanism to explain it. A lot of physicists would just say, “It’s just the way our world is”, or they might use some exotic fanciful physics, like Many Worlds, to try to explain non-locality.

History records that Bell’s theorems were tested in numerous types of experiments, some with photons, some with electrons, some with entire atoms, and all such experiments have confirmed quantum mechanics and non-locality and have disproven hidden variables and locality. For the record, one may still believe in hidden variables, but the point is that if even your hidden variables theory has to be non-local then you lose all the motivation for believing in hidden variables. Hidden variables were designed to try to avoid non-locality; that was almost the only reason for postulating them. Why would you want to build into the foundations of a theory something unobservable? Hidden variables were a desperation in this sense, a crazy idea designed to do mainly just one thing: remove non-locality. So Bell and the experiments showed this project has failed.
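As a toy illustration of what those experiments measure (a sketch only; the detector angles and the simple hidden-variables model are my own illustrative choices, not any particular lab’s setup), here is a short Python comparison of a local hidden-variables model against the singlet-state correlations quantum mechanics predicts:

```python
import numpy as np

rng = np.random.default_rng(0)

# Detector angles (radians) that maximise the quantum CHSH value.
a, a2 = 0.0, np.pi / 2
b, b2 = np.pi / 4, -np.pi / 4

def E_quantum(x, y):
    """Singlet-state correlation predicted by quantum mechanics."""
    return -np.cos(x - y)

def E_local(x, y, n=500_000):
    """A simple local hidden-variables model: each particle pair carries a
    shared random axis lam, and each detector outputs +/-1 deterministically
    from its own angle and lam alone (no non-local communication)."""
    lam = rng.uniform(0.0, 2.0 * np.pi, n)
    A = np.sign(np.cos(x - lam))
    B = -np.sign(np.cos(y - lam))
    return float(np.mean(A * B))

def chsh(E):
    return abs(E(a, b) + E(a, b2) + E(a2, b) - E(a2, b2))

print("local hidden variables: |S| =", round(chsh(E_local), 3))    # <= 2 (up to sampling noise)
print("quantum prediction:     |S| =", round(chsh(E_quantum), 3))  # 2*sqrt(2), about 2.828
```

However you tweak the local response functions, |S| stays pinned at or below 2; the measured violations are what rule out the entire local class.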


I like this photo of Bell at CERN in 1982 because it shows him at a blackboard with a Bell Inequality calculation for an EPR-type set-up. (Courtesy of Christine Sutton, CERN: https://home.cern/about/updates/2014/11/fifty-years-bells-theorem)

Now would you agree so far? I hope not. Hidden variables are not much more crazy than any of the “standard interpretations” of quantum mechanics, of which there are a few dozen varieties, all fairly epistemologically bizarre. Most other interpretations have postulates that are considerably more radical than hidden variables postulates. Indeed, one of the favourable things about a non-local hidden variables theory is that it would give the same predictions as quantum mechanics but without a terribly bizarre epistemology. Nevertheless, HV theories have fallen out of favour because people do not like nature to have hidden things that cannot be observed. This is perhaps an historical prejudice we have inherited from the school of logical positivism, and maybe for that reason we should be more willing to give it up! But the prejudice is quite persistent.

Quantum Theory without Observers

Goldstein raises some really interesting points when he starts to talk about the role of measurement and the role of observers. He points out that physicists are mistaken when they appeal to observers and some mysterious “measurement process” in their attempts to rectify the interpretations of quantum mechanics. It’s a great point that I have not heard mentioned very often before. According to Goldstein, a good theory of physics should not mention macroscopic entities like observers or measurement apparatus, because such things should be entirely dependent upon—and explained by—fundamental elementary processes.

This demand seems highly agreeable to me. It is a nice general Copernican principle to remove ourselves from the physics needed to explain our universe. And it is only a slightly stronger step to also remove the very vague and imprecise notion of “measurement”.

The trouble is that in basic quantum mechanics one deals with wave functions, or quantum fields more generally, which fundamentally cannot account for the appearance of our world of experience. The reason is that these tools only give us probabilities for all the various ways things can happen over time; we get probabilities and nothing else from quantum theory. What actually happens in time is not accounted for by just giving the probabilities. This is often called the “Measurement Problem” of quantum mechanics. It is not truly a problem; it is a fundamental incompleteness. Standard quantum theory has absolutely no mechanism for explaining the appearance of the classical reality that we observe.
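To put that concretely, the entire empirical output of the bare formalism is the Born rule: for a state ψ and a measurement with possible outcomes a, the theory hands us

```latex
P(a) = |\langle a \mid \psi \rangle|^2
```

and stops there; nothing in the unitary dynamics selects which outcome a actually occurs.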

So this helps explain why a lot of quantum interpretation philosophy injects the notions of “observer” and “measurement” into the foundations of physics. It seems to be necessary for providing an account of the real semi-classical appearance of our world. We are not all held in ghostly superpositions because we all observe and “measure” each other, constantly. Or maybe our body cells are enough, “observing each other” for us? Or maybe a large molecule has “observational power” and is sufficient? Goldstein, correctly IMHO, argues this is all bad philosophy. Our scientific effort should be spent on trying to complete quantum theory, or on finding a better, more complete theory or framework for fundamental physics.

Here’s Goldstein encapsulating this:

It’s not that you don’t want observers in physics. Observers are in the real world and physics better account for the fact that there are observers. But observers, and measurement, and vague notions like that, and, not just vague, even macroscopic notions, they just seem not to belong in the very formulation of what could be regarded as a fundamental physical theory.

There should be no axioms about “measurement”. Here is one passage that John Bell wrote about this:

The concept of measurement becomes so fuzzy on reflection that it is quite surprising to have it appearing in physical theory at the most fundamental level. … Does not any analysis of measurement require concepts more fundamental than measurement? And should not the fundamental theory be about these more fundamental concepts?

Rise of the Wormholes

I need to explain one more set of ideas before getting to the main note of this post.

There is so much to write about ER=EPR, and I’ve written a few posts about it so far, but not enough. The gist of it, recall, is that the fuss in recent decades over the “Black Hole Information Paradox” and the “Black Hole Firewall” has been incredibly useful in leading a group of theoreticians towards a basic, dim, inchoate understanding that the non-locality in quantum mechanics is somehow related to wormhole bridges in spacetime. Juan Maldacena and Leonard Susskind have pioneered this approach to understanding quantum information.

A lot of the weirdness of quantum mechanics turns out to be just geometry and topology of spacetime.

The “EPR” stands for the Einstein-Podolsky-Rosen-Bohm thought experiments: precisely the genesis of the ideas for which John Bell devised his Bell Inequalities for testing quantum theory, and which prove that physics involves fundamentally non-local interactions.

The “ER” stands for Einstein-Rosen wormhole bridges. Wormholes are a science fiction device for time travel or fast interstellar travel. The idea is that you might imagine creating a spacetime wormhole by pinching off a thread of spacetime like the beginnings of a black hole, but then reconnecting the pinched end somewhere else in space, maybe a long time or distance away, and keeping the pinched end open at this reconnection region. So you can make this wormhole bridge a short-cut between two perhaps vastly separated regions of spacetime.

It seems that if you have an extremal version of a wormhole that is essentially shrunk down to zero radius, so it cannot be traversed by any mass, then this minimalistic wormhole still acts as a conduit of information. These provide the non-local connections between spacelike separated points in spacetime. Basically the ends of the ER=EPR wormholes are like particles, and they are connected by a wormhole that cannot be traversed by any actual particle.
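The poster-child in the Maldacena–Susskind work is the thermofield double state: two copies of a quantum system, entangled mode by mode, which on the gravity side is conjectured to be precisely a two-sided, non-traversable Einstein-Rosen bridge. In standard notation,

```latex
|\mathrm{TFD}\rangle
  = \frac{1}{\sqrt{Z}} \sum_n e^{-\beta E_n / 2}\, |n\rangle_L \otimes |n\rangle_R ,
```

where the left and right copies live in causally disconnected regions; the entanglement itself is the bridge.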

Entanglement and You

So now we come to the little note I wanted to make.

I agree with Goldstein that we ought not artificially inject the concept of an observer or a “measurement process” into the heart of quantum mechanics. We should avoid such desperations, and instead seek to expand our theory to encompass better explanations of the classical appearances of our world.

The interesting thing is that when we imagine how ER=EPR wormholes could influence our universe, by connecting past and future, we might end up with something much more profound than “observers” and “measurements”. We might end up with an understanding of how human consciousness and our psychological sense of the flow of time emerge from fundamental physics. All without needing to inject such transcendent notions into the physics. Leave the physics alone, let it be pristine, but get it correct, and then maybe amazing things can emerge.

I do not have such a theory worked out. But I can give you the main idea. After all, I would like someone to be working on this, and I do not have the time or technical ability yet, so I do not want the world of science to wait for me to get my act together.

First: it would not surprise me if, in future, a heck of a lot of quantum theory “weirdness” were explained by ER=EPR-like principles. If you abstract a little and step back from any particular instance of “quantum weirdness” (like wave-particle duality, superposition, or entanglement in any particular experiment), then what we really see is that most of the weirdness is due to non-locality. This might take various guises, but if there is one mechanism for non-locality then it is a good bet that something like this mechanism is at work behind most instances of non-locality that arise in quantum mechanics.

Secondly: the main way in which ER=EPR wormholes account for non-local effects is via pure information connecting regions of spacetime through the extremal wormholes. What is interesting about this is that it makes a primitive form of time travel possible. Only information can “time travel” via these wormholes, but that might be enough to explain a lot of quantum mechanics.

Thirdly: although it is unlikely time travel effects can ever propagate up to macroscopic physics, because we just cannot engineer large enough wormholes, the statistical effects of the minimalistic ER=EPR wormholes might be enough to account for enough correlation between past and future that we might eventually be able to prove, in principle, that information gets to us from our future, at least at the level of fundamental quantum processes.

Now here’s the more speculative part: I think what might emerge from such considerations is a renewed description of the old Block Universe concept from Einstein’s general relativity (GR). Recall that in GR time is more or less placed on an equal theoretical footing with space. This means past and future are all connected and exist whether we know it or not. Our future is “out there in time” and we just have not yet travelled into it. And we cannot travel back to our past because the bridges are not possible; the only wormhole bridges connecting past to future over macroscopic times are those minimal extremal ER=EPR wormholes that provide the universe with quantum entanglement phenomena and non-locality.

So I do not know what the consequences of such developments will be. But I can imagine some possibilities. One is that although we cannot access our future, or travel back to our past, the information from such regions of the Block Universe is tenuously connected to us nonetheless. Such connections are virtually impossible for us to exploit usefully, because we could never confirm what we are dealing with until the macroscopic future “arrives”, so to speak. So although we know it is not complete, we will still have to end up using quantum mechanics probability amplitude mathematics to make predictions about physics. In other words, quantum mechanics models our situation with respect to the world, not the actual state of the world from an atemporal Block Universe perspective. It’s the same problem with the time travel experiment conducted in 1994 in the laboratory of Günter Nimtz, whose lab sent analogue signals encoding Mozart’s 40th Symphony into the future (by a few nanoseconds at most).

For that experiment there are standard explanations using Maxwell’s theory of electromagnetism which show no particles travel faster than light into the future. Nevertheless, Nimtz’s laboratory got a macroscopic recording of bits of Mozart’s 40th Symphony out of the back-end of a tunnelling apparatus before the corresponding signal peak had entered the front-end. The interesting thing to me is not about violation of special relativity or causality. (You might think the physicists could violate causality because one of them could wait at the back-end and, on hearing Mozart come out, tell their colleague to send Beethoven instead, thus creating a paradox. But they could not do this, because they could not send the warning fast enough in real time for the colleague to send Beethoven’s Fifth instead of Mozart.) Sadly that aspect of the experiment was the most controversial, but it was not the most interesting thing. Many commentators argued about the claimed violations of SR, and there are some good arguments about photon “group velocity” being able to transmit a signal faster than light without any particular individual photon needing to go faster than light.

(Nimtz’s experiments mostly used microwave photons tunnelling through evanescent-mode barriers, but the general principles carry over to electron tunnelling too.)
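For readers who want the textbook version of the counter-argument (standard wave mechanics, not anything specific to Nimtz’s own papers): inside an evanescent barrier the wavenumber becomes imaginary, the amplitude decays rather than propagates, and the stationary-phase tunnelling delay becomes nearly independent of the barrier length (the Hartman effect), which can mimic a faster-than-light group delay without any propagating mode exceeding c:

```latex
v_p = \frac{\omega}{k}, \qquad v_g = \frac{d\omega}{dk}, \qquad
k \to i\kappa \;\Rightarrow\; \psi \propto e^{-\kappa L}.
```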

All the “wave packet” and “group velocity” explanations of Nimtz’s time travel experiments are, if you ask me, merely attempts to reconcile the observations with special relativity. They all, however, use collective phenomena: either waves or group packets. But we all know photons are not waves; they are particles (many still debate this, but bear with my argument). The wave behaviour of fundamental particles is in fact a manifestation of quantum mechanics. Maxwell’s theory is thus only phenomenological. It describes electromagnetic waves, and photons get interpreted (unfortunately) as modes of such waves. But this is mistaken. Photons collectively can behave as Maxwell’s waves, but Maxwell’s theory is describing a fictional reality. It only approximates what photons actually do. Photons do not, in Maxwell’s theory, impinge on detectors like discrete quanta. And yet we all know this is what light actually does! It violates Maxwell’s theory every day!

So what I think is truly interesting about Nimtz’s experiments is that they were sensitive enough to give us a window into wormhole traversal. Quantum tunnelling is nothing more than information traversal through ER=EPR type wormholes. At least, that’s my hypothesis. It is a non-classical effect, and Maxwell’s theory only accounts for it via the fiction that photons are waves. A wrong explanation can often fully explain the facts, of course!

Letting Things Be

What Goldstein, and Bohm, and later John Stewart Bell wanted to do is explain the world. They knew quantum field theory does not explain the world. It does not tell us why things come to be what they are: why a measurement pointer ends up pointing in a particular direction rather than in any one of the other superposed states of pointer orientation the quantum theory tells us it ought to be in. Such definite features of the world are what John Bell referred to as local “beables”. Goldstein explains more in his seminar “John Bell and the Foundations of Quantum Mechanics”, Sesto, Italy, 2014 (https://www.youtube.com/watch?v=RGbpvKahbSY).

My favourite idea, one I have been entertaining for over twenty years, in fact ever since 1995 when I read Kip Thorne’s book about classical general relativity and wormholes, is that wormholes (or technically “closed timelike curves”) are where all the ingredients are for explaining quantum mechanics from a classical point of view. Standard twentieth century quantum theory does not admit wormholes. But if you ignore quantum theory and start again from classical dynamics, but allow ER=EPR wormholes to exist, then I think most of quantum mechanics can be recovered without the need for unexplained axiomatic superpositions and wave-function collapse (the conventional explanation for “measurements” and classical appearances). In other words, quantum theory, like Maxwell’s EM theory, is only a convenient fictional model of our physics. You see, when you naturally have information going backwards and forwards in time you cannot avoid superpositions of state. But when a stable time-slice emerges or “crystallizes” out of this mess of acausal dynamics, then it should look like a measurement has occurred. But no such miracle happens; it simply emerges or crystallizes naturally from the atemporal dynamics. (I use the term “crystallize” advisedly here: it is not a literal crystallization, but something abstractly similar, and George Ellis uses it in a slightly different take on the Block Universe concept, so I figure it is a fair term to use.)

Also, is it possible that atemporal dynamics will tend to statistically “crystallize” something like Bohm’s pilot wave guide potential? If you know a little about Bohmian mechanics, you know the pilot wave is postulated as a real potential, something that just exists in our universe’s physics. Yet it has no other model like it: it is not a quantum field, it is not a classical field, it is what it is. But what if there is no need for such a postulate? How could it be avoided? My idea is that maybe the combined statistical effects of influences propagating forward and backward in time give rise to an effective potential much like the Bohm pilot wave or Schrödinger wave function. Either way, both constructs, in conventional or Bohmian quantum mechanics, might be just necessary fictions we need in order to describe, in one way or another, the proper complete Block Universe atemporal spacetime dynamics induced by the existence of spacetime wormholes. I could throw around other ideas, but the main one is that wormholes endow spacetime with a really gnarly, stringy sort of topology that has, so far, not been explored enough by physicists.

Classically you get non-locality when you allow wormholes. That’s the quickest summary I can give you. So I will end here.

*      *       *


CC BY-NC-SA 4.0 (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode)
