A Plain Simple Lecture — Non-ergodic, but … satisfying

There is another talk from the Philosophy of Cosmology Conference in Tenerife 2014 that is in a similar league to Joel Primack’s awesome display of the Bolshoi Simulations of dark matter structure. The one I will write about tonight, though, is pretty much words and equations only. No pretty pictures. But don’t let that dissuade you from enjoying the talk by Bob Wald on Gravity and Thermodynamics.

Most physics students might only know Robert Wald from his famous textbook on General Relativity.

Aside: While searching for a nice picture to illuminate this post I came across a nice freehand SVG sketch of Shaun Maguire’s. He’s a postdoc at Caltech and writes nicely in a blog there: Quantum Frontiers. If you are more a physics/math geek than a philosophy/physics geek then you will enjoy his blog. I found it very readable, not stunning poetic prose, but easy-going and sufficiently high on technical content to hold my interest.

[Image: blackhole_thermodynamics_RindlerQuest2]

Says Maguire, “I’ve been trying to understand why the picture on the left is correct, even though my intuition said the middle picture should be (intuition should never be trusted when thinking about quantum gravity.)” Source: http://quantumfrontiers.com/2014/06/20/ten-reasons-why-black-holes-exist/

That has to do with black hole firewalls, which digresses away from Wald’s talk.

It is not true to say Wald’s talk is plain and simple, since the topic is advanced; only a second course on general relativity would cover the details, and you need to get through a lot of mathematical physics in a first course of general relativity. But what I mean is that Wald is such a knowledgeable and clear thinker that he explains everything crisply and understandably, like a classic old-school teacher would. It is not flashy, but damn! It is tremendously satisfying and enjoyable to listen to. I could hit the pause button and read his slides, then rewind and listen to his explanation, and it just goes together so sweetly. He neither repeats his slides verbatim, nor deviates from them confusingly. However, I think if I were in the audience I would be begging for a few pauses of silence to read the slides. So the advantage is definitely with the at-home Internet viewer.

Now if you are still reading this post you should be ashamed! Why did you not go and download the talk and watch it?

I loved Wald’s lucid discussion of the Generalised Second Law (which is basically a redefinition of entropy: generalised entropy is the sum of ordinary thermodynamic entropy plus black hole entropy, the latter being proportional to black hole horizon area).
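For reference, the generalised entropy he means is the standard Bekenstein-Hawking combination (textbook material, my addition, not quoted off Wald’s slides):

```latex
% Generalised entropy: ordinary matter entropy outside the horizon
% plus the Bekenstein-Hawking term proportional to horizon area A.
S_{\text{gen}} = S_{\text{matter}} + \frac{k_B\, c^3}{4\, G\, \hbar}\, A ,
\qquad
\delta S_{\text{gen}} \ge 0 \quad \text{(Generalised Second Law)}
```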

Then he gives a few clear arguments that provide strong reasons for regarding the black hole area formula as equivalent to an entropy, one of which is that in general relativity dynamic instability is equivalent to thermodynamic instability, hence the dynamic process of black hole area increase is directly connected to black hole entropy. (This is in classical general relativity.)

But then he puts the case that the origin of black hole entropy is not perfectly clear, because black hole entropy does not arise out of the usual ergodicity in statistical mechanics systems, whereby a system in an initial special state relaxes via statistical processes towards thermal equilibrium. Black holes are non-ergodic. They are fairly simple beasts that evolve deterministically. “The entropy for a black hole arises because it has a future horizon but no past horizon,” is how Wald explains it. In other words, black holes do not really “equilibrate” like classical statistical mechanics gases. Or at least, they do not equilibrate to a thermal temperature ergodically like a gas, they equilibrate dynamically and deterministically.

Wald’s take on this is that, maybe, in a quantum gravity theory, the detailed microscopic features of gravity (foamy spacetime?) will imply some kind of ergodic process underlying the dynamical evolution of black holes, which will then heal the analogy with statistical mechanics gas entropy.

This is a bit mysterious to me. I get the idea, but I do not see why it is a problem. Entropy arises in statistical mechanics, but you do not need statistically ergodic processes to define entropy. So I did not see why Wald is worried about the different equilibration processes viz. black holes versus classical gases. They are just different ways of defining an entropy and a Second Law, and it seems quite natural to me that they therefore might arise from qualitatively different processes.

But hold onto your hats. Wald next throws me a real curve ball.

Smaller than the Planck Scale … What?

Wald’s next concern about a breakdown of the analogy between statistical gas entropy and dynamic black hole entropy is a doozie. He worries about the fact that vacuum fluctuations in a conventional quantum field theory are basically ignored in statistical mechanics, yet they cannot (or should not?) be ignored in general relativity, since, for instance, the ultra-ultra-high energy vacuum fluctuations in the early universe get red-shifted by the expansion of the universe into observable features we can now measure.

Wald is talking here about fluctuations on a scale smaller than the Planck length!

To someone with my limited education you begin by thinking, “Oh, that’s ok, we all know (one says knowingly, not really knowing) that stuff beyond the Planck scale is not very clearly defined and has this sort of ‘all bets are off’ quality about it. So we do not need to worry about it yet, until there is a theory covering the Planck scale.”

But if I understand it correctly, what Wald is saying is that what we see in the cosmic background radiation, or maybe in some other observations (Wald is not clear on this), corresponds to such red-shifted modes, so we literally might be seeing fluctuations that originated on a scale smaller than the Planck length if we probe the cosmic background radiation at highly ultra-red-shifted wavelengths.
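To get a feel for the numbers, here is my own back-of-envelope sketch (the millimetre target wavelength is my assumption, nothing from Wald’s talk): stretching a Planck-length mode up to a macroscopic wavelength takes a number of e-folds comparable to what inflation is usually credited with.

```python
import math

# My own back-of-envelope, not from Wald's talk. How many e-folds of
# cosmic expansion stretch a Planck-length mode to a millimetre-scale
# wavelength? (The 1 mm target is an assumed, illustrative scale.)
planck_length_m = 1.616e-35
target_wavelength_m = 1e-3

stretch = target_wavelength_m / planck_length_m
efolds = math.log(stretch)
print(f"stretch ~ {stretch:.1e}, i.e. ~{efolds:.0f} e-folds")
# ~6e31, about 73 e-folds -- comparable to the ~60 e-folds usually
# quoted for inflation, which is why trans-Planckian modes come up.
```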

That was a bit of an eye-opener for me. I was previously not aware of any physics that potentially probed beyond the Planck scale. I wonder if anyone else thought this was surprising? Maybe if I updated my physics education I’d find out that it is not so surprising.

In any case, Wald does not discuss this, since his point is about the black hole case where at the black hole horizon a similar shifting of modes occurs with ultra-high energy vacuum fluctuations near the horizon getting red shifted far from the black hole into “real” observable degrees of freedom.

Wald talks about this as a kind of “creation of new degrees of freedom”. And of course this does not occur in statistical gas mechanics where there are a fixed number of degrees of freedom, so again the analogy he wants between black hole thermodynamics and classical statistical mechanics seems to break down.

There is some cool questioning going on here though. The main problem with the vacuum fluctuations Wald points out is that one does not know how to count the states in the vacuum. So the implicit idea there, which Wald does not mention, is that maybe there is a way to count states of the vacuum, which might then heal the thermodynamics analogy Wald is pursuing. My own (highly philosophical, and therefore probably madly wrong) speculation would be that quantum field theory is only an effective theory, and that a more fundamental theory of physics with spacetime as the only real field and particle physics states counted in a background-free theory kind of way, might, might yield some way of calculating vacuum states.

Certainly, I would imagine that if field theory is not the ultimate theory, then the whole idea of vacuum field fluctuations gets called into suspicion. The whole notion of a zero-point background field vacuum energy becomes pretty dubious altogether if you no longer have a field theory as the fundamental framework for physics. But of course I am just barking into the wind hoping to see a beautiful background-free framework for physics.

Like the previous conundrum of ergodicity and equilibration, I do not see why this degree of freedom issue is a big problem. It is a qualitative difference which breaks the strong analogy, but so what? Why is that a pressing problem? Black holes are black holes, gases are gases, they ought to be qualitatively distinct in their respective thermodynamics. The fact there is the strong analogy revealed by Bekenstein, Hawking, Carter, and others is beautiful and does reveal general universality properties, but I do not see it as an area of physics where a complete unification is either necessary or desired.

What I do think would be awesome, and super-interesting, would be to understand the universality better. This would be to ask further (firstly) why there is a strong analogy, and (secondly) why and how it breaks down.

*      *       *

This post was interrupted by an apartment moving operation, so I ran out of steam on my consciousness stream, so will wrap it up here.

*      *       *


CC BY-NC-SA 4.0 (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode)


“Nothing to Hide” Arguments and Generalisations

A good friend of mine re-posted a link on Google+ the other day: “I’ve Got Nothing to Hide” and Other Misunderstandings of Privacy, by Daniel J. Solove. It’s not a bad read, so go check it out.

[Image: free_AaronSwartz_journals]

A great cartoon of Aaron giving everyone access to JSTOR. Whoever drew this needs crediting, but I can only make out their last name “Pinn”. I grabbed this from Google image search. So thanks Mr or Ms Pinn.

So that this is not a large departure from my recent trend in blog topics, I wanted to share a few thoughts about similar “easy arguments” in quite different fields.

The “Nothing to Show” Argument Against Publishing

This is an argument I’ve used all my life to avoid publishing. I hate people criticising my work. So I normally tell supervisors or colleagues that I have nothing of interest to publish. This is an extraordinarily self-destructive thing to do in academia; it basically kills one’s career. But there are a few reasons I do not worry.

Firstly, I truly do not like publishing for the sake of academic advancement. Secondly, I have a kind of inner repulsion against publishing anything I think is stupid or trivial or boring. Thirdly, I am quite lazy, and if I am going to fight to get something published it should be worth the fight, or should be such good quality work that it will not be difficult to publish somewhere. Fourth, I dislike being criticised so much I will sometimes avoid publishing just to avoid having to deal with reviewer critiques. That’s a pretty immature and childish sensitivity, and death for an academic career, but with a resigned sigh I have to admit that’s who I am, at least for now, a fairly childish immature old dude.

There might be a few other reasons. A fifth I can think of is that I wholeheartedly agree with Aaron Swartz’s Guerilla Open Access Manifesto, which proclaims the credo of free and open access to publicly funded research for all peoples of all nations. That’s not a trivial manifesto. You could argue that the public of the USA funds research that should then be free and open, but only to the public of the USA, and likewise for other countries. But Swartz was saying that the taxpayers of the respective countries have already paid for the research, the researchers have been fully compensated, and scientists do not get any royalties from journal articles anyway, and therefore their research results should be free for all people of all nations to use. Why this is important is the democratising of knowledge, and perhaps more importantly the unleashing of human potential and creativity. If someone in Nigeria is denied access to journals in the USA then that person is denied the chance to potentially use that research and contribute to the sum total of human knowledge. We should not restrict anyone such rights.

OK, that was a bit of a diversion. The point is, I would prefer to publish my work in open-access journals. I forget why that’s related to my lack of publishing … I did have some reason in mind before I went on that rant.

I’ve read a lot of total rubbish in journals, and I swear to never inflict such excrement on other people’s eyes. So anything I publish would be either forced by a supervisor, or will be something I honestly think is worth publishing, something that will help to advance science. It is not out of pure altruism that I hesitate to publish my work, although that is part of it. The impulse against publishing is closer to a sense of aesthetics. Not wanting to release anything in my own name that is un-artful. I’m not an artist, but I have been born or raised with an artistic temperament, much to my detriment I believe. Artless people have a way of getting on much better in life. But there it is, somewhere in my genes and in my nurturing.

So I should resolve to never use the “Nothing to Show” argument. I have to get my research out in the open, let it be criticised, maybe some good will come of it.

The “Nothing to Fear” Argument Against Doing Stupid Stuff

Luckily I am not prone to this argument. If you truly have nothing to fear, then by all means … but often this sort of argument means you personally do not mind suffering whatever it is that’s in store, and that use of the argument can be fatal. So if you ever hear your inner or outer voice proclaiming “I have nothing to fear …” then take a breath and pause, make sure there truly is nothing to fear (but then, why would you be saying this out loud?). There is not much more to write about it. But feel free to add comments.

The “Nothing to Lose” Argument in Favour of Being Bold

This is normally a very good argument and perhaps the best use of the “Nothing to …” genre. If you truly have nothing to lose then you are not confounding this with the “Nothing to Fear” stupidity. So what more needs to be said?


CC BY-NC-SA 4.0 (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode)

Primack’s Premium Simulations

After spending a week debating with myself about various Many Worlds philosophy issues  and other quantum cosmology questions, today I saw Joel Primack’s presentation at the Philosophy of Cosmology International Conference, on the topic of Cosmological Structure Formation. And so for a change I was speechless.

Thus I doubt I can write much that illumines Primack’s talk better than if I tell you just to go and watch it.

He, and colleagues, have run supercomputer simulations of gravitating dark matter in our universe. From their public website Bolshoi Cosmological Simulations they note: “The simulations took 6 million cpu hours to run on the Pleiades supercomputer — recently ranked as seventh fastest of the world’s top 500 supercomputers — at NASA Ames Research Center.”

To get straight to all the videos from the Bolshoi simulation go here (hipacc.ucsc.edu/Bolshoi/Movies.html).

[Image: cosmos_BolshoiSim_MD_cluster01_gas_sn320]

MD4 Gas density distribution of the most massive galaxy cluster (cluster 001) in a high resolution resimulation, x-y-projection. (Kristin Riebe, from the Bolshoi Cosmological Simulations.)

The filamentous structure formation is awesome to behold. At times they look like living cellular structures in the movies that Primack has produced, though the time steps in his simulations are probably about 1 million years each. For example, one simulation is called the Bolshoi-Planck Cosmological Simulation — Merger Tree of a Large Halo. If I am reading this page correctly these simulations resolve dark matter halos of 10^10 solar masses. Astronomers use a symbol M☉ (often written Msun) to represent a unit of one solar mass (equal to our Sun’s mass), so a “10^10 Msun halo” is a halo with the mass of about ten billion Suns, roughly dwarf-galaxy scale; that is the finest structure resolvable in their movie still images. This is dark matter they are visualizing, so the stars and planets we can see just get completely obscured in these simulations (since the star-like matter is less than a few percent of the mass).
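To put that unit in perspective, a quick conversion (the Milky Way figure below is the commonly quoted ballpark, my addition, not from the Bolshoi pages):

```python
M_SUN_KG = 1.989e30            # one solar mass in kilograms
halo_mass_kg = 1e10 * M_SUN_KG
print(f"a 10^10 Msun halo ~ {halo_mass_kg:.1e} kg")   # ~2e40 kg
# For scale: the Milky Way's dark matter halo is commonly quoted at
# roughly 10^12 Msun, so these resolved halos are about 1% of a
# Milky-Way-sized halo -- dwarf-galaxy territory, not single stars.
```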

True to my word, that’s all I will write for now about this piece of beauty. I need to get my speech back.

*      *       *

Oh, but I do just want to hasten to say the image above I pasted in there is NOTHING compared to the movies of the simulations. You gotta watch the Bolshoi Cosmology movies to see the beauty!

*       *       *


CC BY-NC-SA 4.0 (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode)

“I’d Like Some Decoherence Sauce with that Please”

OK, last post I was a bit hasty saying Simon Saunders undermined Max Tegmark. Saunders eventually finds his way to recover a theory of probability from his favoured Many Worlds Interpretation. But I do think he over-analyses the theory of probability. Maybe he is under-analysing it too in some ways.

What the head-scratchers seem to want is a Unified Theory of Probability. Something that gives what we intuitively know is a probability but cannot mathematically formalise in a way that deals with all reality. Well, I think this is a bit of a chimera. Sure, I’d like a unified theory too. But sometimes you have to admit reality, even abstract mathematical Platonic reality, does not always present us with a unified framework for everything we can intuit.

What’s more, I think probability theorists have come pretty close to a unified framework for probability. It might seem patchwork, it might merge frequentist ideas with Bayesian ideas, but if you require consistency across domains and apply the patchwork so that the pieces agree on their overlaps, then I suspect (I cannot be sure) that probability theory as experts understand it today is fairly comprehensive. Arguing that frequentism should always work is a bit like arguing that Archimedean calculus should always work. Pointing out deficiencies in Bayesian probability does not mean there is no overarching framework for probability, since where Bayesianism does not work, frequentism or some other combinatorics probably will.

Suppose you even have to deal with a space of transfinite cardinality and there is ignorance about where you are, then I think in the future someone will come up with measures on infinite spaces of various cardinality. They might end up with something that is a bit trivial (all probabilities become 0 or 1 for transfinite measures, perhaps?), but I think someone will do it. All I’m saying is that it is way too early in the history of mathematics to say we need to throw up our hands and appeal to physics and Many Worlds.

*      *       *

That was a long intro. I really meant to kick off this post with a few remarks about Max Tegmark’s second lecture at the Oxford conference series on Cosmology and Quantum Foundations. He claims to be a physicist, but puts on a philosopher’s hat when he claims, “I am only my atoms”. Meaning he believes consciousness arises or emerges merely from some “super-complex processes” in brains.

I like Max Tegmark, he seems like a genuinely nice guy, and is super smart. But here he is plain stupid. (I’m hyperbolising naturally, but I still think it’s dopey what he believes.)

It is one thing to say your totality is your atoms, but quite another to take consciousness as a phenomenon seriously and claim it is just physics. Especially, I think, if your interpretation of quantum reality is the MWI. Why is that? Because MWI has no subjectivity. But, if you are honest, or if you have thought seriously about consciousness at all, and what the human mind is capable of, then without being arrogant or anthropocentric, you have to admit that whatever consciousness is (and let me just say I do not know what it is), it is an intrinsically subjective phenomenon.

You can find philosophers who deny this, but most of them are just denying the subjectiveness of consciousness in order to support their pet theory of consciousness (which is often grounded in physics). So those folks have very little credibility. I am not saying consciousness cannot be explained by physics. All I am saying is that if consciousness is explained by physics then our notion of physics needs to expand to include subjective phenomena. No known theories of physics have such ingredients.

It is not like you need a Secret Sauce to explain consciousness. But whatever it is that explains consciousness, it will have subjective sauce in it.

OK, I know I can come up with a MWI rebuff. In a MWI ontology all consistent realities exist due to Everettian branching. So I get behaviour that is arbitrarily complex in some universes. In those universes am I not bound to feel conscious? In other branches of the Everett multiverse I (not me actually, but my doppelgänger really, one who branched from a former “me”) do too many dumb things to be considered consciously sentient in the end, even though up to a point they seemed pretty intelligent.

The problem with this sort of “anything goes” reasoning, whereby in some universe consciousness will arise, is that it is naïve or ignorant. It commits the category error of assuming behaviour equates to inner subjective states. Well, that’s wrong. Maybe in some universes behaviour maps perfectly onto subjective states, and so there is no way to prove the independent reality of subjective phenomena. But even that is no argument against the irreducibility of consciousness. Because any conscious agent who knows of (at least) their own subjective reality will know their universe’s branch is either not all explained by physics, or physics must admit some sort of subjective phenomenon into its ontology.

Future philosophers might describe it as merely a matter of taste, one of definitions. But for me, I like to keep my physics objective. Ergo, for me, consciousness (at least the sort I know I have, I cannot speak for you or Max Tegmark) is subjective, at least in some aspects. It sure manifests in objective physics thanks to my brain and senses, but there is something irreducibly subjective about my sort of consciousness. And that is something objectively real physics cannot fully explain.

What irks me most though, are folks like Tegmark who claim folks like me are arrogant in thinking we have some kind of secret sauce (by this presumably he means a “soul” or “spirit” that guides conscious thought).  I think quite the converse. It is arrogant to think you can get consciousness explained by conventional physics and objective processes in brains. Height of physicalist arrogance really.

For sure, there are people who take the view human beings are special in some way, and a lot of such sentiments arise from religiosity.

But people like me come to the view that consciousness is not special, but it is irreducibly subjective.  We come to this believing in science.   But we also come without prejudices.  So, in my humble view, if consciousness involves only physics you can say it must be some kind of special physics. That’s not human arrogance. Rather, it is an honest assessment of our personal knowledge about consciousness and more importantly about what consciousness allows us to do.

To be even more stark. When folks like Tegmark wave their hands and claim consciousness is probably just some “super complex brain process”, then I think it is fair to say that they are the ones using implicit secret sauce. Their secret sauce is of the garden variety, atoms and molecules, of course. You can say, “well, we are ignorant and so we cannot know how consciousness can be explained using just physics”. And that’s true. But (a) it does not avoid the problem of subjectivity, and (b) you can be just as ignorant about whether physics is all there is to reality. Over the years I have developed a sense that it is far more arrogant to think physical reality is the only reality. I’ve tried to figure out how sentient subjective consciousness, and mathematical insight, and ideal Platonic forms in my mind can be explained by pure physics. I am still ignorant. But I do strongly postulate that there has to be some element of subjective reality involved in at least my form of consciousness. I say that in all sincerity and humility. And I claim it is a lot more humble than the position of philosophers who echo Tegmark’s view on human arrogance.

Thing is, you can argue no one understands consciousness, so no one can be certain what it is, but we can be fairly certain about what it isn’t. What it is not is a purely objectively specifiable process.

A philosophical materialist can then argue that consciousness is an illusion, it is a story the brain replays to itself. I’ve heard such ideas a lot, and they seem to be very popular at present even though Daniel Dennett and others wrote about them more than 20 years ago. And the roots of the meme “consciousness is an illusion” are probably even centuries older than that, which you can confirm if you scour the literature.

The problem is you can then clearly discern a difference in definitions. The “consciousness is an illusion” folks use quite a different definition of consciousness compared to more ontologically open-minded philosophers.

*      *       *

On to other topics …

*      *       *

Is Decoherence Faster than Light? (… yep, probably)

There is a great sequence in Max Tegmark’s talk where he explains why decoherence of superpositions and entanglement is just about, “the fastest process in nature!” He presents an illustration with a sugar cube dissolving in a cup of coffee. The characteristic times for relevant physical processes go as follows,

  1. Fluctuations — changes in correlations between clusters of molecules.
  2. Dissipation — time for about half the energy added by the sugar to be turned into heat. Scales by roughly the number of molecules in the sugar, so it takes on the order of N collisions on average.
  3. Dynamics — changes in energy.
  4. Information — changes in entropy.
  5. Decoherence — takes only one collision. So about 10^25 times faster than dissipation.

(I’m just repeating this with no independent checks, but this seems about right.)
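My own sanity check of that 10^25 (all the numbers below are my assumptions; note it comes out right if the relevant count is molecules in the whole cup rather than just in the sugar cube):

```python
# My own sanity check, not from Tegmark's slides: if dissipation takes
# ~N collisions and decoherence only ~1, the speed ratio is ~N.
AVOGADRO = 6.022e23

sugar_g, sucrose_molar_g = 4.0, 342.3     # assumed sugar cube
coffee_g, water_molar_g = 250.0, 18.0     # assumed cup of coffee

n_sugar = sugar_g / sucrose_molar_g * AVOGADRO
n_coffee = coffee_g / water_molar_g * AVOGADRO
print(f"sugar molecules:  {n_sugar:.1e}")   # ~7e21
print(f"coffee molecules: {n_coffee:.1e}")  # ~8e24, i.e. ~10^25
```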

This also gives a nice characterisation of classical versus quantum regimes:

  1. Mostly Classical — when τ_deco ≪ τ_dyn ≤ τ_diss.
  2. Mostly Quantum — when τ_dyn ≪ τ_deco, τ_diss.

See if you can figure out why this is a good characterisation of regimes?

Here’s a screenshot of Tegmark’s characterisations:

[Image: quanta_decoherencetimes_vs_dissipationtime]

The explanation is that in a quantum regime you have entanglement and superposition, uncertainty is high, and the dynamics evolves without any change in information, hence also with essentially no dissipation. Classically you get a disturbance in the quantum and all coherence is lost almost instantaneously. And yes, it goes faster than light, because with decoherence nothing physical is “going” anywhere; it is not a process, rather decoherence refers to a state of possible knowledge, and that can change instantaneously without any signal transfer, at least according to some theories like MWI or Copenhagen.

I should say that in some models decoherence is a physically mediated process, and in such theories it would take a finite time, but it is still fast. Such environmental decoherence is a feature of gravitational collapse theories for example. Also, the ER=EPR mechanism of entanglement would have decoherence mediated by wormhole destruction, which is probably something that can appear to happen instantaneously from the point of view of certain observers. But the actual snapping of a wormhole bridge is not a faster than light process.

I also liked Tegmark’s remark that,

“We realise the reason that big things tend to look classical isn’t because they are big, it’s just because big things tend to be harder to isolate.”

*      *       *

And in case you got the wrong impression earlier, I really do like Tegmark. In his sugar cube in coffee example his faint Swedish accent gives way for a second to a Feynmanesque “cawffee”. It’s funny. Until you hear it you don’t realise that very few physicists actually have a Feynman accent. It’s cool Tegmark has a little bit of it, and maybe not surprising as he often cites Feynman as one of his heroes (ah, yeah, what physicist wouldn’t? Well, actually I do know a couple who think Feynman was a terrible influence on physics teaching, believe it or not! They mean well, but are misguided of course! ☻).

*      *       *

The Mind’s Role Play

Next up: Tegmark’s take on explaining the low entropy of our early universe. This is good stuff.

Background: Penrose and Carroll have critiqued Inflationary Big Bang cosmology for not providing an account for why there is an arrow of time, i.e., why did the universe start in an extremely low entropy state.

(I have not seen Carroll’s talk, but I think it is on my playlist. So maybe I’ll write about it later.) But I am familiar with Penrose’s ideas. Penrose takes a fairly conservative position. He takes the Second Law of Thermodynamics seriously. He cannot see how even the Weyl Curvature Hypothesis explains the low entropy Big Bang. (I think WCH is just a description, not an explanation.)

Penrose does have a few ideas about how to explain things with his Conformal Cyclic Cosmology ideas. I find them hugely appealing. But I will not discuss them here. Just go read his book.

What I want to write about here is Tegmark and his Subject-Object-Environment troika. In particular, why does he need to bring the mind and observation into the picture? I think he could give his talk and get across all the essentials without mentioning the mind.

But here is my problem. I just do not quite understand how Tegmark goes from the correct position on entropy, which is that it is a coarse-graining concept, to his observer-measurement dependence. I must be missing something in his chain of reasoning.

So first: entropy is classically a measure of the multiplicity of a system, i.e., how many microstates in an ensemble are compatible with a given macroscopic state. And there is a suitable generalisation to quantum physics given by von Neumann.
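For concreteness, the two standard definitions I have in mind are these (textbook formulas, my addition for reference):

```latex
% Classical (Boltzmann): Omega is the number of microstates
% compatible with the given macrostate.
S = k_B \ln \Omega
% Quantum (von Neumann): rho is the density matrix of the system.
S = -k_B \, \mathrm{Tr}(\rho \ln \rho)
```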

If you fine grain enough then most possible states of the universe are unique and so entropy measured on such scales is extremely low. Basically, you only pick up contributions from degenerate states. Classically this entropy never really changes, because classically an observer is irrelevant. Now, substitute for “Observer” the more general “any process that results in decoherence”. Then you get a reason why quantum mechanically entropy can decrease. To wit: in a superposition there are many states compatible with prior history. When a measurement is made (for “measurement” read, “any process resulting in decoherence”) then entropy naturally will decrease on average (except for perhaps some unusual highly atypical cases).
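A toy example of the sort of entropy decrease I mean (my own illustration, nothing from Tegmark’s talk): a maximally mixed qubit has von Neumann entropy ln 2, and after a selective measurement whose outcome we read off, the state is pure and the entropy is zero.

```python
import numpy as np

def von_neumann_entropy(rho):
    """S = -Tr(rho ln rho), in nats, computed from eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]        # drop numerical zeros
    return float(-np.sum(evals * np.log(evals)))

rho_mixed = np.eye(2) / 2               # maximally mixed qubit
rho_pure = np.array([[1.0, 0.0],        # |0><0| after reading off
                     [0.0, 0.0]])       # a measurement outcome

print(von_neumann_entropy(rho_mixed))   # ln 2 ~ 0.693
print(von_neumann_entropy(rho_pure))    # 0.0
```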

Here’s what I am missing. All that I just said previously is local. Whereas, for the universe as a whole, globally, what is decoherence? It is not defined, and so what is global entropy then? There is no “observer” (read: “measurement process”) that collapses or decoheres our whole universe. At least none we know of. So it all seems nonsense to talk about entropy on a cosmological scale.

To me, perhaps terribly naïvely, there is a meaning for entropy within a universe in localised sub-systems where observations can in principle be made on the system. “Counting states” to put it crudely. But for the universe (or Multiverse if you prefer) taken as a whole, what meaning is there to the concept of entropy? I would submit there is no meaning to entropy globally. The Second Law triumphs, right? I mean, for a closed isolated system you cannot collapse states and get decoherence, at least not from without, so it just evolves unitarily with constant entropy as far as external observers can tell; or if you coarse grain into ensembles then the Second Law emerges, on average, even for unitary time evolution.

Perhaps what Tegmark was on about was that if you have external observer disruptions then entropy reduces (you get information about the state). But does this not globally just increase entropy, since globally now the observer’s system is entangled with the previously closed and isolated system? But who ever bothers to compute this global entropy? My guess is it would obey the Second Law. I have no proof, just my guess.

Of course, with such thoughts in my head it was hard to focus on what Tegmark was really saying, but in the end his lecture seems fairly simple. Inflation introduces decoherence and hence lowers quantum mechanical entropy. So if you do not worry about classical entropy, just focus on the quantum states, then apparently inflationary cosmology can “explain” the low entropy Big Bang.

Only, if you ask me, this is no explanation. It is just “yet another” push-back. Because inflationary cosmology is incomplete, it does not deal with the pre-inflationary universe. In other words, the pre-inflationary universe has to also have some entropy if you are going to be consistent in taking Tegmark’s side. So however much inflation reduces entropy, you still have the initial pre-inflationary entropy to account for, which now becomes the new “ultimate source” of our arrow of time. Maybe it has helped to push the unexplained entropy a lot higher? But then you get into the realm of, “what is ‘low’ entropy in cosmological terms?” What does it mean to say the unexplained pre-inflationary entropy is high enough to not worry about? I dunno’. Maybe Tegmark is right? Maybe pre-inflation entropy (disorder) is so high by some sort of objectively observer-independent measure (is that possible?) that you literally no longer have to fret about the origin of the arrow of time? Maybe inflation just wipes out all disorder and gives us a proverbial blank slate?

But then I do fret about it. Doesn’t Penrose come in at this point and give baby Tegmark a lesson in what inflation can and cannot do to entropy? Good gosh! It’s just about enough confusion to drive one towards the cosmological anthropic principle out of desperation for closure.

So despite Tegmark’s entertaining and informative lecture, I still don’t think anyone other than Penrose has ever given a no-push-back argument for the arrow of time. I guess I’ll have to watch Tegmark’s talk again, or read a paper on it for greater clarity and brevity.


CC BY-NC-SA 4.0 (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode)

Oil Pilots and Many Worlds Probability

Continuing my ad hoc review of Cosmology and Quantum Foundations, I come to Max Tegmark and Simon Saunders, who were the two main champions of Many Worlds Interpretations present at this conference. But before discussing ideas arising from their talks, I want to mention an addendum to the Hidden Variables and de Broglie-Bohm pilot wave theory that I totally coincidentally came across the night after writing the previous post (“Gadammit! Where’d You Put My Variables!?”).

Fluid Dynamics and Oil Droplets Model de Broglie-Bohm Pilot Waves

This is some seriously recent and immature research, but it is fascinating. And really simple to explain so it’s cool. Here’s the link: Fluid Tests Hint at Concrete Quantum Reality.

[Image: quanta_OilDropletPilotwaves]

Oil droplets surfing ripples on a fluid surface exhibit two-slit interference. Actually not! They follow chaotic trajectories that reproduce interference patterns only statistically, but there is no superposition at all for the oil droplet, only for the wave ripples. Remarkably similar qualitatively to de Broglie-Bohm pilot wave theory.

You delicately place oil droplets on an immiscible fluid surface (water I suppose) and the droplets bounce around creating waves in the fluid surface. Then, lo and behold! Send an oil droplet through a double slit barrier and it goes through one slit right! Shocking! But then hold on to your skull … after traversing the slit the oil droplet then chaotically meanders around surfing on the wave ripples spreading out from the double slit that the oil droplet was actually responsible for generating before it got to the slits.

Do this for many oil droplets and you will see the famous statistical build-up of an interference pattern on the far side of the slits, but here with classical oil droplets that can be observed to smithereens without destroying superposition of the fluid waves, so you get purely classical double slit interference. Just like the de Broglie-Bohm pilot wave theory predicts for the Bohmian mechanics view of quantum mechanics. I say, “just like” because clearly this is macroscopic in scale and the mechanism of pilot waves is totally different to the quantum regime. Nonetheless, it is a clear condensed matter physics model for pilot wave Bohmian quantum mechanics.
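Here is a toy numerical illustration of that statistical build-up (my own sketch; it just samples the textbook two-slit intensity, it is not a simulation of the droplet hydrodynamics, and all the parameters are assumed):

```python
import numpy as np

rng = np.random.default_rng(0)

# Idealised two-slit fringe intensity on a far screen (diffraction
# envelope ignored): I(x) ~ cos^2(pi * d * x / (lam * L)).
d, lam, L = 1e-4, 650e-9, 1.0        # slit spacing, wavelength, distance (m)
x = np.linspace(-0.02, 0.02, 2001)   # positions on the screen (m)
prob = np.cos(np.pi * d * x / (lam * L)) ** 2
prob /= prob.sum()

# Each "droplet" lands at one definite spot; the fringes emerge only
# in the accumulated statistics, never in any single trajectory.
landings = rng.choice(x, size=5000, p=prob)
counts, _ = np.histogram(landings, bins=50)
print(counts)                        # fringe pattern shows up here
```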

(There is a recent-decades trend in condensed matter physics where phenomena qualitatively similar to quantum mechanics or black hole phenomenology, or even string theory, can be modelled in solid state or condensed matter systems. It’s a fascinating thing. No one really has an explanation for such quasi-universality in physics. I guess, when different systems of underlying equations give similar asymptotic behaviour then you have a chance of observing such universality in disparate and seemingly unrelated physical systems. One example Susskind mentions in his Theoretical Minimum lectures is the condensed matter systems that model Majorana fermions. It’s just brilliantly fascinating stuff. I was going to write a separate article about this. Maybe later. I’ll just mention that although such condensed matter models have to be taken with a grain of salt, to whatever extent they can recapitulate the physics of quantum systems you have this tantalising possibility of being able to construct low energy desktop experiments that might, might, be able to explore extreme physics such as superstring regimes and black hole phenomenology, only with safe and relatively affordable experiments. I’m no futurist, but as protein biology promises to be the biology of the 21st century, maybe condensed matter physics is poised to take over from particle accelerators as the main physics laboratory for the 21st century? It’d be kinda’ cool wouldn’t it?)

The oil droplet experiments are not a perfect model for Bohmian mechanics since these pilot waves do not carry other quantum degrees of freedom like spin or charge.

Normally I would scoff at this and say, “nice, but so what?” Physics, and science in general, is rife with examples of disparate systems that display similarity or universality. It does not mean the fundamental physics is the same. And in the oil droplet pilot wave experiments we clearly have a hell of a lot of quantum mechanics phenomenology absent.

But I did not scoff at this one.

The awesome thing about this oil droplet interference experiment is that there is a clear mechanism that can recapitulate a lot of the same phenomenology at the Planck scale, and hence offers an intriguing and tantalising alternative explanation for quantum mechanics as an effective theory that emerges from a more fundamental theory of Planck scale spacetime dynamics (geometrodynamics, to borrow the terminology of Wheeler and Misner). Hell, I will not even mention “quantum gravity”, since that’d take me too far afield, but dropping that phrase in here is entirely appropriate.

The clear Planck scale phenomenology I am speaking of is the model of spacetime as a superfluid. It will support non-dissipative pilot waves, which are therefore nothing less than subatomic gravitational waves of a sort. Given the weakness of gravity you can imagine how fragile are the superpositions of these spacetime or gravitational pilot waves. Not hard to destroy coherent states.

Then, of course, we already have the emerging theory of ER=EPR which explains entanglement using a type of geometrodynamics. If you start to package together everything that you can get out of geometrodynamics then you begin to see a jigsaw puzzle filling in that hints maybe the whole gamut of quantum physics phenomenology at the Planck scale can be largely adequately explained using spacetime geometry and topology.

One big gap in geometrodynamics is the phenomenology of particle physics. Gauge symmetries, charges, and the rest. It will take a brave and fortified physicist to tackle all these problems. If you read my blog you will realise I am a total fan of such approaches. Even if they are wrong, I think they are huge fun to contemplate and play with, even if only as mathematical diversions. So I encourage any young mathematically talented physicists to dare to go into active research on geometrodynamics.

The Many Worlds Guys

So what about Tegmark and Saunders? Well, by this point I kind of exhausted myself today and forgot what I was going to write about. Saunders mentioned something about frequentist probability having serious issues and that Frequentism could not be a philosophical basis for probability theory. I think that’s a bit unfair. Frequentism works in many practical cases. I don’t think it has to be an over-arching theory of probability. It works when it works.

Same in lots of science. Fourier transforms work on periodic signals, and FT’s can compress non-periodic signals too, but not perfectly. Newtonian physics works bloody well in many circumstances, but is not an all-encompassing theory of mechanics. Natural selection works to explain variation and speciation in living systems, but it is not the whole story, it cannot happen without some supporting mechanism like DNA replication and protein synthesis. You cannot explain speciation using Natural selection alone, it’s just not possible, Natural selection is too general and weak to be a full explanatory theory.

It’s funny too. Saunders seems to undermine a lot of what Tegmark was trying to argue in the previous talk at the conference. Tegmark was explicitly using frequentist counting in his arguments that Copenhagen is no better or worse than Many Worlds from a probabilistic perspective. I admit I do not really know what Saunders was on about. If you can engineer a proper measure then you can do probability. I think maybe Tegmark can justify some sort of MWI space measures. Again, I do not really know much about measure theory for MWI space. Maybe it is an open problem and Tegmark is stretching credibility a bit?

*      *       *


CC BY-NC-SA 4.0 (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode)

var MyStupidStr = “Gadammit! Where’d You Put My Variables!?”;

This WordPress blog keeps morphing from Superheros and SciFi back to philosophy of physics and other topics. So sorry to readers expecting some sort of consistency. This week I’m back with the Oxford University series, Cosmology and Quantum Foundations lectures. Anthony Valentini gives a talk about Hidden Variables in Cosmology.

The basic idea Valentini proposes is that we could be living in a deterministic cosmos, but we are somehow trapped in a region of phase space where quantum indeterminism reigns. In our present epoch region there are hidden variables, but they cannot be observed, not even indirectly, so they have no observable consequences, and so Bell’s Theorem and Kochen-Specker and the rest of the “no-go” theorems associated with quantum logic hold true. Fine, you say, then really you’re saying there effectively are no Hidden Variables (HV) theories that describe our reality? No, says Valentini. The Hidden Variables would be observable if the universe was in a different state, the other phase. How might this happen? And what are the consequences? And is this even remotely plausible?

Last question first: Valentini thinks it is testable using the microwave cosmic background radiation. Which I am highly sceptical about. But more on this later.

[Image: cosmol_Valentin_all.dof.have.relaxed]

The idea of non-equilibrium Hidden Variable theory in cosmology. The early universe violates the Born Rule and hidden variables are not hidden. But the violent history of the universe has erased all pilot wave details and so now we only see non-local hidden variables which is no different from conventional QM. (Apologies for low res image, it was a screenshot.)

How Does it Work?

How it might have happened is that the universe as a whole might have two (at least, maybe more) sorts of regimes, one of which is highly non-equilibrium, extremely low entropy. In this region or phase the Hidden Variables would be apparent and Bell’s theorems would be violated. In the other type of phase the universe is in equilibrium, high entropy, and Hidden Variables cannot be detected, and Bell’s theorems remain true (for QM). Valentini claims early during the Big Bang the universe may have been in the non-equilibrium phase, and so some remnants of this HV physics should exist in the primordial CMB radiation. But you cannot just say this and get hidden variables to be unhidden. There has to be some plausible mechanism behind the phase transition or the “relaxation” process as Valentini describes it.

The idea being that the truly fundamental physics of our universe is not fully observable because the universe has relaxed from non-equilibrium to equilibrium. The statistics in the equilibrium phase get all messed up and HV’s cannot be seen. (You understand that in the hypothetical non-equilibrium phase the HV’s are no longer hidden, they’d be manifest ordinary variables.)
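In symbols, and as far as I understand Valentini’s papers (this is my paraphrase, so treat it as a sketch):

```latex
% Quantum equilibrium means the ensemble of configurations tracks
% the Born rule:
\rho(q, t) = |\psi(q, t)|^2
% Valentini's hypothesis: the early universe had rho != |psi|^2,
% and a coarse-grained H-function,
\bar{H}(t) = \int \bar{\rho}\, \ln\!\left( \bar{\rho} \,/\, \overline{|\psi|^2} \right) dq ,
% decreases toward zero; "relaxation" is this approach to the
% equilibrium in which the hidden variables become unobservable.
```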

Further Details from de Broglie-Bohm Pilot Wave Theory

Perhaps the most respectable HV theory is the (more or less original) de Broglie-Bohm pilot wave theory. It treats Schrödinger’s wave function as a real potential in a configuration space which somehow guides particles along deterministic trajectories. Sometimes people postulate Schrödinger time evolution plus an additional pilot wave potential. (I’m a bit vague about it since it’s a long time since I read any pilot wave theory.) But to explain all manner of EPR experiments you have to go to extremes and imagine this putative pilot wave as really an all-pervading information storage device. It has to guide not only trajectories but also orientations of spin and units of electric charge and so forth, basically any quantity that can get entangled between relativistically isolated systems.
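For the record, the standard single-particle pilot wave equations look like this (textbook de Broglie-Bohm, added here for reference):

```latex
% psi obeys the ordinary Schrodinger equation:
i\hbar\, \partial_t \psi = -\frac{\hbar^2}{2m} \nabla^2 \psi + V \psi
% and the particle position Q(t) is guided by the phase of psi:
\frac{dQ}{dt} = \frac{\hbar}{m}\, \mathrm{Im}\!\left( \frac{\nabla \psi}{\psi} \right) \Bigg|_{q = Q(t)}
```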

This seems like unnecessary ontology to me. Be that as it may, the Valentini proposal is cute and something worth playing around with I think.

So anyway, Valentini shows that if there is indeed an equilibrium ensemble of states for the universe then details of particle trajectories cannot be observed and so the pilot wave is essentially unobservable, and hence a non-local HV theory applies which is compatible with QM and the Bell inequalities.

It’s a neat idea.

My bet would be that more conventional spacetime physics which uses non-trivial topology can do a better job of explaining non-locality than the pilot wave. In particular, I suspect requiring a pilot wave to carry all relevant information about all observables is just too much ontological baggage. Like a lot of speculative physics thought up to try to solve foundational problems, I think the pilot wave is a nice explanatory construct, but it is still a construct, and I think something still more fundamental and elementary can be found to yield the same physics without so many ad hoc assumptions.

To relate this with very different ideas, what the de Broglie-Bohm pilot wave reminds me of is the inflaton field postulated in inflationary Big Bang models. I think the inflaton is a fictional construct. Yet its predictive power has been very successful. My understanding is that instead of an inflaton field you can use fairly conventional and uncontroversial physics to explain inflationary cosmology, for example the Penrose CCC (Conformal Cyclic Cosmology) idea. This is not popular. But it is conservative physics and requires no new assumptions. As far as I can tell CCC “only” requires a long but finite lifetime for electrons, which should eventually decay by very weak processes. (If I recall correctly, in the Standard Model the electron does not decay.) The Borexino experiment in Italy has measured the lower limit on the electron lifetime as longer than 66,000 yottayears (about 6.6×10^28 years), but currently there is no upper limit.

And for the de Broglie-Bohm pilot wave I think the idea can be replaced by spacetime with non-trivial topology, which again is not very trendy or politically correct physics, but it is conservative and conventional and requires no drastic new assumptions.

What Are the Consequences?

I’m not sure what the consequences of cosmic HV’s are for current physics. The main consequence seems to be an altered understanding of the early universe, but nothing dramatic for our current and future condition. In other words, I do not think there is much use for cosmic HV theory.

Philosophically I think there is some importance, since the truth of cosmic HV’s could fill in a lot of gaps in our civilisation’s understanding of quantum mechanics. It might not be practically useful, but it would be intellectually very satisfying.

Is There Any Evidence for these Cosmic HV’s?

According to Valentini, supposing at some time in the early Big Bang there was non-equilibrium, hence more or less classical physics, then there should be classical perturbations frozen into the cosmic microwave background radiation from this period. This is due to a well-known result in astrophysics whereby perturbations on so-called “super-Hubble” length scales tend to be frozen — i.e., they will still exist in the CMB.
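The “super-Hubble” condition, for what it’s worth (standard cosmology, my gloss, ignoring factors of 2π):

```latex
% A comoving mode k is "super-Hubble" when its physical wavelength
% exceeds the Hubble radius:
\frac{a(t)}{k} > \frac{c}{H(t)}
\quad\Longleftrightarrow\quad
k < \frac{a H}{c}
% Such modes stop oscillating and their amplitudes freeze until they
% re-enter the horizon much later, which is how a primordial imprint
% can survive into the CMB.
```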

Technically what Valentini et al. predict is a low-power anomaly at large angles in the spectrum of the CMB. That’s fine and good, but (contrary to what Valentini might hope) it is not evidence of non-equilibrium quantum mechanics with pilot waves. Why not? Simply because a hell of a lot of other things can account for observed low-power anomalies. Still, it’s not all bad — any such evidence would count as Bayesian inference support for pilot wave theory. Such weak evidence abounds in science, and would not count as a major breakthrough, unfortunately (because who doesn’t enjoy a good breakthrough?). I’m sure researchers like Valentini, in any science, who are in the position of lacking solid evidence for a theory, will admit behind closed doors the desultory status of such evidence, but they do not often advertise it as such.

It seems to me so many things can be “explained” by statistical features in the CMB data. I think a lot of theorists might be conveniently ignoring the uncertainties in the CMB data. You cannot just take this data raw and look for patterns and correlations and then claim they support your pet theory. At a minimum you need to use the uncertainties in the CMB data and allow for the fact that your theory is not truly supported by the CMB when alternatives to your pet theory are also compatible with the CMB.

I cannot prove it, but I suspect a lot of researchers are using the CMB data in this way. That is, they can get the correlations they need to support their favourite theory, but if they include uncertainties then the same data would support no correlations. So you get a null inconclusive result overall. I do not believe in HV theories, but I do sincerely wish Valentini well in his search for hard evidence. Getting good support for non-mainstream theories in physics is damn exciting.

*      *       *

Epilogue — Why HV? Why not MWI? Why not …

At the same conference Max Tegmark polls the audience on their favoured interpretations of QM. The very fact people can conduct such polls among smart people is evidence of a real science of scientific anthropology. It’s interesting, right?! The most popular was Undecided=24. Many Worlds=15. Copenhagen=2. Modified dynamics (GRW)=0. Consistent Histories=0. Bohm (HV)=5. Relational=2. Modal=0.

This made me pretty happy. To me, undecidability is the only respectable position one can take at this present juncture in the history of physics. I do understand of course that many physicists are just voting for their favourites. Hardly any would stake their life on the fact that their view is correct. Still, it was heart-warming to see so many taking the sane option seriously.

I will sign off for now by noting a similarity between HV and MWI. There’s not really all that much they have in common. But they both ask us to accept some realities well beyond what conservative standard interpretation-free quantum mechanics begs. What I mean by interpretation-free is just minimalism, which in turn is simply whatever modelling you need to actually do quantum mechanics predictions for experiments; that is, the minimal stuff you need to explain or account for in any metaphysical interpretation sitting on top of QM. There is, of course, no such interpretation, which is why I can call it interpretation-free. You just go around supposing (or actually not “supposing” but merely “admitting the possibility”) that the universe IS this Hilbert space and that our reality IS a cloud of vectors in this space that periodically expands and contracts in consistency with observed measurement data and unitary evolution, so that it all hangs together consistently and a consistent story can be told about the evolution of vectors in this state space that we take as representing our (possibly shared) reality (no need for solipsism).

I will say one nice thing about MWI: it is a clean theory! It requires a hell of a lot more ontology, but in some sense nothing new is added either. The writer who most convinces me I could believe in MWI is David Deutsch. Perhaps logically his ideas are the most coherent. But what holds me back and forces me to be continually agnostic for now (and yes, interpretations of QM debates are a bit quasi-religious, in the bad meaning of religious, not the good) is that I still think people simply have not explored enough normal physics to be able to unequivocally rule out a very ordinary explanation for quantum logic in our universe.

I guess there is something about being human that desires an interpretation more than this minimalism. I am certainly prey to this desire. But I cannot force myself to swallow either HV(Bohm) or MWI. They ask me to accept more ontology than I am prepared to admit into my mind space for now. I do prefer to seek a minimalist leaning theory, but not wholly interpretation-free. Not for the sake of minimalism, but because I think there is some beauty in minimalism akin to the mathematical idea of a Proof from the Book.


CC BY-NC-SA 4.0 (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode)

Collapsitude of the Physicists

The series of Oxford lectures I am enjoying in my lunch hours is prompting a few blog ideas. The latest is the business of the collapse of the wavefunction. So much has been written about the measurement problem in quantum mechanics that I surely do not need to write a boring introduction to it all. So I will just assume you can jump in cold and use Wikipedia or whatever to warm up when needed.

[Image: quanta_decoherance_awyoumademecollapse]

There were no simple cartoons capturing the essence of the “collapse of the wavefunction”, so I made up this one.

By the way, the idea behind my little cartoon there is that making a measurement need not catastrophically collapse a system into a definite eigenstate as most textbooks routinely describe. This (non-total collapse) is depicted as the residual pale pink region which entertains states in phase space that still have finite probability amplitudes. We never really notice them subsequently because the amplitudes for these regions are too darn small to detect in any feasible future measurements. Every measurement has finite precision, so you cannot use actual real messy brains-and-Weet-Bix-and-jelly experiments to completely collapse a system into a pure state. Textbooks on QM are like this; they take so many liberties with the reality of an experimental situation that the theoreticians tend to lose touch with reality, especially when indulging in philosophy while calling it physics.

The issue is rife in many lectures I am watching, one is Simon Saunders’ talk on “The Case for Many Worlds“. He poses a sequence of questions for his audience:

  • Why does the collapse of the state happen?
  • When does it happen?
  • To what state does the state collapse?

He presages this by polling his audience on whether they believe the proverbial Schrödinger’s Cat is exclusively either alive or dead before the observer looks inside the diabolical box with the vial of radioactively triggered nerve gas. Some half of the audience believed the Cat was either alive or dead (i.e., not in a superposition). He then asks what about if the box was not an isolated box but a broom cupboard? Not many people change their mind! But the point was that the cupboard is, surely, in no way or form now isolated from the external universe, and surely has enough perturbations to destroy any delicate entangled superposed states. But I guess some lovers of cats are hard to budge.

Then he asks, “well, what about if the experiment is being done up on a branch of a tree in a forest with no observer anywhere around?” (The clear unspoken implication is to think about the proverbial falling tree.) He cites a quantum information theory conference where 80% of the audience believed the Cat would then still not be in an exclusive XOR state, i.e., would still be in a superposition. Which is quite remarkable. Maybe they had never heard of the phenomenon of environmental decoherence?

It’s at such times I wonder whether Murray Gell-Mann has had an unhealthy influence on how physicists take their philosophy. In much of his popular-style writing Gell-Mann has argued for environmental decoherence. The idea is that there is no collapse, not ever: the universe remains one giant superposed cosmic wavefunctional state. Gell-Mann is not the sole culprit of course, but he’s the head honcho by fame if nothing else. And boy! You don’t want to go head-to-head arguing against Gell-Mann! You’ll get your ears pulverised by pressure waves of unrelenting egg-headedness.

To be fair and balanced here’s a book cover that looks like it would be a juicy read if you really want to tangle with environmental decoherence as the explanation for classical physics appearances.

quanta_Joos_bookcover_EnvironmentalDecoherance

Looks like a good read. The lead author is Erich Joos.

I just want to warn you, if you ever feel like you are in a superposition of states then there are some medications that can recover classical physics if you find it too nauseous.

You do not have to take wavefunctions literally. They are just computational devices. The mathematical tool used to model quantum mechanics is not the thing itself that we are trying to describe and model. The point is that whatever the universe is, it must be described by a wavefunction or some equivalent modelling that enjoys a superposition of classical states. That’s what makes the world quantum mechanical: there are classical-like states that get all tensored up in a superposition of some form, and whether you choose to describe this mathematically by a wavefunction, by matrices, or by Dirac bra and ket vectors in a Hilbert space is largely immaterial.
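
A tiny numpy sketch of that immateriality (my own illustration): represent one and the same qubit state as a Dirac-style vector and as a density matrix, and any observable’s expectation value comes out identical either way.

```python
import numpy as np

# One physical state, two bookkeeping devices: a state vector versus
# its density matrix. Both yield identical expectation values, so the
# choice of formalism is immaterial to the predictions.
ket = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2.0)  # superposition
rho = np.outer(ket, ket.conj())                           # matrix form

sz = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)   # Pauli-Z

exp_vector = np.real(ket.conj() @ sz @ ket)   # <psi| Z |psi>
exp_matrix = np.real(np.trace(rho @ sz))      # Tr(rho Z)
print(exp_vector, exp_matrix)                 # both 0.0: same physics
```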

Many Worlds theorists have a fairly similar outlook to the decoherence folks. Although at some root level their interpretations differ, or are perhaps even empirically incompatible in principle (I’m not sure about that), I think both views share the germ of the idea that there really is no collapse of the state. In environmental decoherence a measurement merely entangles the system with more stuff, so gazillions of new things are now entangled, and the whole lot only appears to behave more classically as a result. But there is still superposition; it is just that so many of the coefficients in the linear superposition have shrunk to near zero that the overall effect is a classical-like collapse. Then of course Schrödinger evolution picks up after the measurement is done, and isolation can gradually be re-established around some experiment, and so on and so forth.
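
To see how those coefficients shrink, here is a toy numpy caricature (my illustration, with the assumed simplification that each environment qubit’s two conditional states overlap by a factor of 0.9): the interference terms of the cat qubit’s reduced density matrix fall off exponentially with the number of entangled environment qubits, while the global superposition never actually vanishes.

```python
import numpy as np

# Toy decoherence: a "cat" qubit entangles, one by one, with n
# environment qubits. If each environment qubit's two conditional
# states overlap by r, the off-diagonal (interference) terms of the
# cat's reduced density matrix scale as r**n -- never exactly zero,
# just unobservably small.
def reduced_cat_matrix(n_env: int, overlap: float = 0.9) -> np.ndarray:
    coherence = overlap ** n_env
    return 0.5 * np.array([[1.0, coherence],
                           [coherence, 1.0]])

for n in (0, 10, 100):
    print(n, "environment qubits -> off-diagonal:",
          reduced_cat_matrix(n)[0, 1])
```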

Here’s my penny take on this. I’ve become a firm proponent of ER=EPR. So I figure entanglement is as near to wormhole formation as you wanna get. You can take this literally or as merely computationally convenient. For the time being I’m a literalist on it, which means I’ll change my mind if evidence mounts to the contrary, but I think it is fruitful to take ER=EPR at more or less face value and see where it leads us.

ERrrr … who just collapsed me? You fiend!

I am also favouring something like gravitational collapse ideas. These seem to have a lot of promise, including (and this is a big selling point for me) the possibility of a link with ER=EPR. For one: if entanglement is established via ER bridges, then probably collapse of superposition can be effected by break-up of the wormholes. It seems a no-brainer to me. Technical issues aside. There might be some bugger of mathematical devilishness that renders this all nonsense. But I’m in like with the ideas and the connections.

Ergo I do not subscribe to environmental decoherence and the eternal superposition of the cosmos. Ergo again I do not subscribe to Many Worlds interpretations. Not that physics foundations is about popularity contests. But I think, when/if experimental approaches to these questions become possible, I would want to put research money into rigorously testing gravitational collapse and (if you deign to be a bit simplistic) also ER=Superposition, and therefore “NoER=Collapse”.

Well, that’s a smidgen of my thoughts on record for now. I think there are so many vast unexplored riches in fundamental theories and ideas of spacetime and particle physics that we do not yet need to reach out to bizarre, outlandish interpretations of quantum mechanics. Bohr was the original sinner here. But pretty much every physicist who has dabbled in metaphysics and sought a valid interpretation of quantum mechanics has collapsed to the siren of the absurd ever since. This includes all those who followed Feynman’s dictum to forget about interpretation. I think such non-interpretations are just as silly as the others.

Actually I’m not sure why I’ve characterised this as Feynman’s dictum. To be fair, he did not say anything so extreme. He just marvelled at nature and warned physicists not to get into the mode of trying to tell nature what to do:

“We are not to tell nature what she’s gotta be. … She’s always got better imagination than we have.”

— R.P. Feynman, in the Sir Douglas Robb Lectures, University of Auckland (1979).

Man, I LOVED those lectures. My high school physics teacher John Hannah exposed our class to Feynman. Those were some of the best days of my life. The opening up of the beauty of the universe to my inner eyes. Here’s another favourite passage from those lectures:

“There’s a kind of saying that you don’t understand its meaning, ‘I don’t believe it. It’s too crazy. I’m not going to accept it.’ … You’ll have to accept it. It’s the way nature works. If you want to know how nature works, we looked at it, carefully. Looking at it, that’s the way it looks. You don’t like it? Go somewhere else, to another universe where the rules are simpler, philosophically more pleasing, more psychologically easy. I can’t help it, okay? If I’m going to tell you honestly what the world looks like to the human beings who have struggled as hard as they can to understand it, I can only tell you what it looks like.”
— R.P. Feynman (b.1918–d.1988).

Feynman actually said, “It’s the woy nature woiks.”

*      *       *


CCL_BY-NC-SA(https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode)

Rovelli’s Roll

In a highly watchable talk in the Oxford University lecture mini-series on Cosmology and Quantum Foundations, Carlo Rovelli gives a lot of persuasive arguments about why the Many Worlds Interpretation is suspect. But he goes fast and furious at times, sometimes constructing strawman arguments (I do not think anyone seriously believes that just literally interpreting the mathematics of a given model of physics necessarily leads to great ontological truths, apart perhaps from the likes of Tegmark), but I think generally even these points are well made and interesting to ponder. Rovelli describes his own current opinion as “Everettian” — by which he means not a traditional Many Worlds interpretation but rather a Relative State interpretation.

relativestateQM_screenshot_Rovelli_lecture_twoobservers

One observer observing another, screenshot from Carlo Rovelli’s lecture.

There are many key slides in his presentation that I thought worthy of mentioning and which inspired this current post of mine.

In another slide Rovelli puts up a couple of threads; one is:

    • “Why don’t we see superpositions?” — what a silly question! Because in textbook QM we do not see the state, we see eigenvalues. We see where the position of the electron is, or its momentum, never its wavefunction.
    • These (facts) are described by positions in phase space in classical physics, and by points in the spectra of elements of the observable algebra in quantum physics.

Which is cool, but then he riles the zen masters by writing:

  • They can be taken as primary elements, and the quantum formalism built up from them.

First, I should point out this is not erroneous. You can build up a theory from elements that are such primitives as “points in the spectra of elements of the observable algebra”.
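
To make “points in the spectrum” concrete, here is a minimal numpy sketch (my toy illustration, not Rovelli’s): the only values a spin-x measurement can ever return are the eigenvalues of the corresponding Hermitian observable; the state merely fixes the probabilities of landing on each such point.

```python
import numpy as np

# The "points in the spectrum" of an observable are its eigenvalues:
# the only values a measurement can return.
sx = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)  # Pauli-X

eigenvalues, eigenvectors = np.linalg.eigh(sx)
print("possible measured values:", eigenvalues)         # [-1.  1.]

# The state never appears directly; it only fixes the probabilities
# of landing on each point of the spectrum.
ket = np.array([1.0, 0.0], dtype=complex)               # spin-up along z
probs = np.abs(eigenvectors.conj().T @ ket) ** 2
print("probabilities:", probs)                          # [0.5 0.5]
```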

But I think this is misleading for purists and philosophers of physics. Just because one approach to calculating expectation values works does not make its mathematical elements isomorphic, in some sense, to elements of physical reality. So I think Rovelli undoes some of his good arguments with such statements. (I’m not the expert Rovelli is, I’m just sayin’, ya know …)

You might counter: “Well, if you are not willing to take your theoretical elements of reality direct from the best mathematical model’s primitives, then where are you going to define your ontology (granting you are wishing to construct a realist interpretation)?”

I would concede, “ok, for now, you can have a favoured realist interpretation based on the primitives of your observables algebra.” But I think you are always going to have to admit this will be temporary, only an “effective interpretation” that is current to our present understandings.

My point is that while this makes for great contemporary physics it does not make for good philosophy (love of both knowledge and truth). The reason is blatant: if all you have is a model for computing amplitudes, then there is really only a small probability that it is a dead-accurate and “True” picture of the real ontology of our universe’s physics. You can certainly freely pin your hopes on this chance and see where it leads.

I, for one, think that such an abstraction as an “observable algebra” although nice and concrete and clean, is just too abstract to be wisely taken literally as the basis for a realist interpretation. Again, I’m “just sayin’…”.

There are many more good discussion points in Rovelli’s lecture.

The Wavefunction is a Computational Tool

This meme has always gelled with me. You can map a wavefunction over time; for example, you can visualize an atomic electron’s orbital. But at no single moment in time is the electron ever seen to be smeared out over its orbital. To me, as a realist, this means the electron is probably not a wave. But its temporal behaviour manifests aspects of wave-like properties. Or to be bold: over time, the (non-relativistic) constant-energy electron’s state is completely coded as a wave. I will admit that in future we might find hard evidence that electrons truly are waves of some weird spacetime-foamy medium, not waves in an abstract mathematical space, but I do not think we are there yet, and I suspect we will not find this to be so. My guess would be that electrons are extended topological geons, perhaps a little more gnarly than superstrings, but less “super”. I think more like solitons of spacetime than embedded strings.
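
Here is a minimal numpy sketch of that realist reading (my illustration, using the hydrogen 1s radial density P(r) = 4 r² e^(−2r) in atomic units): each individual detection finds the electron at a single point, and only the statistics accumulated over many instants trace out the orbital.

```python
import numpy as np

# Each "detection" yields a single point; the orbital shape only
# emerges from the accumulated statistics. Radii are drawn from the
# hydrogen 1s radial density P(r) = 4 r^2 exp(-2 r) (atomic units)
# by rejection sampling; P peaks at r = 1 with value 4/e^2 ~ 0.54.
rng = np.random.default_rng(42)

def sample_1s_radius() -> float:
    while True:
        r = rng.uniform(0.0, 10.0)
        if rng.uniform(0.0, 0.55) < 4.0 * r**2 * np.exp(-2.0 * r):
            return r

radii = np.array([sample_1s_radius() for _ in range(10_000)])
print("mean radius:", radii.mean())   # ~1.5 Bohr radii, as QM predicts
```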

The keyword there for philosophy is “coded”. The wave picture, or if you prefer the Heisenberg state matrix representation (either the Schrödinger or the Heisenberg mathematical tool will do), is a code for the time evolution of the electron. But in no realist sense can it be identified as the electron. Moreover, if you are willing to accept that the Schrödinger and Heisenberg pictures are equivalent, then you have a doubled-up ontology. To me that’s nonsense if you are also a realist.

Believe it or not though, I’ve read books where this is flatly denied and authors have claimed the electron is the wavefunction. I really cannot subscribe to this. It violates the principle of separation of ontology from theory (let me coin that principle if no one has before!). A model is not the thing being modelled, is another way to put it.

On a related aside: John Wheeler was being very cheeky, or highly provocative, in suggesting the “It from Bit” meme. It sounds like a great explanatory concept, but it seems (to me) to lack some unknown extra structure needed to motivate sound belief. Wheeler also talked about how “equations written on paper cannot bring themselves into existence” (or something to that effect). But I think “It from Bit” is not very far removed from equations writing themselves into a universe.

EPR is Entanglement with the Future?

That’s not quite an accurate way to encapsulate Rovelli’s take on EPR, but I think it captures the flavour. Rovelli is saying that in a Relational QM interpretation you do not worry about non-locality, because from each observer’s point of view (the proverbial Alice and Bob at each end of an EPR experiment, or non-human apparatus if you prefer to drop the anthropomorphisms) there is a simple measurement, nothing more. The realisation that entanglement was happening only occurs later, in the future, when the two observers get back together and compare data.
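
A toy simulation makes the point vivid (my sketch, with the simplifying assumption that Alice and Bob measure a spin singlet along the same axis): each local record is statistically a fair coin, and the entanglement is visible only in the correlations once the two records are brought together.

```python
import numpy as np

# Same-axis measurements on a spin singlet: outcomes are (+1, -1) or
# (-1, +1), each with probability 1/2. Locally each record looks like
# a fair coin; the entanglement shows up only on comparison.
rng = np.random.default_rng(0)
n = 100_000

alice = rng.choice([+1, -1], size=n)
bob = -alice                     # perfect anti-correlation (singlet)

print("Alice's marginal mean:", alice.mean())       # ~0, looks random
print("Bob's marginal mean:", bob.mean())           # ~0, looks random
print("correlation <A*B>:", (alice * bob).mean())   # exactly -1
```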

I’m not quite with Rovelli fully on this. And I guess this makes me a non-Everettian. There might be something I’m missing about all this, but I think there is something to explain about the two observers from a “God’s eye” view of the universe at the time each makes their measurements. (Whether God exists is irrelevant; this is pure gedankenexperiment.) If you are God then you witness the effects of entanglement in the measurement outcomes of Alice and Bob.

The recent research surrounding the ER=EPR meme seems to give a fairly sound geometric, or geometrodynamic, interpretation of EPR as a wormhole connection. So I think Rovelli does not need to invoke anything fancy to explain away EPR entanglement. ER=EPR has, I believe, put the matter of a realist interpretation mechanism for entanglement to rest.
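
For the record, the canonical concrete example behind the ER=EPR slogan (from Maldacena and Susskind’s original paper) is the eternal two-sided black hole: its geometry contains an Einstein–Rosen bridge, and the corresponding quantum state of the two asymptotic regions is the entangled thermofield double,

```latex
% Thermofield double: the entangled state of the two boundary copies
% dual to the eternal (two-sided) black hole with its ER bridge.
|\mathrm{TFD}\rangle \;=\; \frac{1}{\sqrt{Z(\beta)}}
  \sum_n e^{-\beta E_n / 2} \, |n\rangle_L \otimes |n\rangle_R
```

On this reading, no entanglement between the two sides means no bridge connecting them.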

No matter how many professors shout, “do not attempt to make mental mechanical models of QM, they will fail!”, I think ER=EPR defies them at least on its own ground. (Ironically, Susskind says just such things in his popular Theoretical Minimum lectures, and yet he was one of the original ER=EPR co-authors!)

What About Superposition: Is Superposition=ER?

I am now going beyond what Rovelli was entertaining.

If you can explain entanglement using wormholes, how about superposition?

ERequalsEPR_NingBao_etal_PenroseCompressed

ER=EPR depiction from a nice article, “Splitting Spacetime”, by Bao, Pollack, and Remmen (2015). http://inspirehep.net/record/1380145


I have not read any good papers about this yet. But I predict someone will put something on the arXiv soon (probably they already have, since I just haven’t gotten around to searching). In a hand-waving manner, superpositions are a bit like self-entanglement. A slightly harder interpretation might be that at the ends of a wormhole you could get particle duplication or mirror effects of a sort.

One might even get quite literal and play with the idea that when an electron slips down a minimal wormhole its properties get mirrored at each end. Although “mirror” is not the correct symmetry; I think perhaps just “copied at each end” is better. Cloned at each end? Whatever.

Maybe the electron continually oscillates back and forth between the mouths in some way? Who knows. It does require some kind of traversable ER bridge, or maybe just that when the bridge evaporates in a finite time the electron’s information snaps back to one end, but not both ends. Susskind and Hawking both concur now that there is no black hole information loss, right? So surely a little ol’ electron’s information is not going to get lost if it wanders into a minimal ER bridge.

Then measurement, or “wavefunction collapse”, is likely a process of collapse of the wormhole. But in snapping the ER bridge the particle’s properties can (somehow) only get restored at one end. Voilà! You solve Schrödinger’s Cat’s dilemma.

Oh man! Would I not love to write a detailed technical mathematical exposition of all this. Sigh! Someone will probably beat me to it. Meehhh … what do I care, I’m not doing physics for fame or fortune.

Someone will have to eventually worry about the stability of minimal ER bridges and the like. Then there are Lorentzian wormholes and closed time-like curves to consider. That Bao, Pollack, Remmen (2015) paper I cited above talks about “no-go” theorems arising from admitting ER bridges: no-go for causality violation and no-go for topology change. I think what theoretical physics needs is an injection of going past such no-go theorems. They have to be “goes”. Especially topology change. If topology change implies violation of causality then all the better. It only needs to have direct consequences at the Planck scale; then it is not so scary to admit into theory, whatever mess it might cause for modelling. The upshot is that at the macroscopic scale I think allowing the “go” for these theorems rather than the “no-go” will reveal a lot of explanatory power, maybe even most of the explanation for the core phenomena of quantum mechanics. All of which I think is brilliant. I can see this sort of deep spacetime structure explaining a lot of the current mystery of quantum mechanics, and in a realist interpretation. Awesome! And that I am not “just sayin’” — it truly would be justifiably awesome.

*      *       *

Hmmm … had a lot more to say about Rovelli’s talk. Maybe another day.

*      *       *


CCL_BY-NC-SA(https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode)