Greater Thoughts that Cannot Be Imagined

Most scientists do not enter their chosen fields because the work is easy. They do their science mainly because it is challenging, and rewarding when triumphant. Yet few scientists will ever taste the sweet dew drops of triumph — real world-changing success — in their lifetimes. So it is perhaps remarkable that the small delights in science are sustaining enough for the human soul to warrant persistence and hard endeavour in the face of mostly mediocre results and relatively few cutting-edge breakthroughs.

Still, I like to think that most scientists get a real kick out of re-discovering results that others before them have already uncovered. I do not think there is any diminution for a true scientist in having been late to a discovery and not having publication priority. In fact I believe this to be universally true for people who are drawn into science for aesthetic reasons, people who just want to get good at science for the fun of it and to better appreciate the beauty in this world. If you are of this kind you likely know exactly what I mean. You could tomorrow stumble upon some theorem proven hundreds of years ago by Gauss or Euler or Brahmagupta and still revel in the sweet taste of insight and understanding.

Going even further, I think such moments of true insight are essential in the flowering of scientific aesthetic sensibilities and the instilling of a love for science in young children, or young-at-heart adults. “So what?” if you make this discovery a few hundred years later than someone else? They had a birth head start on you! The victory is truly still yours. And “so what?” if you have a few extra giants’ shoulders to stand upon? You also saw through the haze and fog of much more information overload and Internet noise and thought-pollution, so you can savour the moment like the genius you are.

Such moments of private discovery go unrecorded and must surely occur many millions of times more frequently than genuinely new discoveries and breakthroughs. Nevertheless, every such transient, invisible moment in human history must also be a little boost to the general happiness and welfare of all of humanity. Although only that one person may feel vibrant from their private moment of insight, their radiance surely influences the microcosm of people around them.

I cannot count how many such moments I have had. They are more than I will probably admit, since I cannot easily admit to any! But I think they occur quite a lot, in very small ways. However, back in the mid-1990s I had what I thought was a truly significant glimpse into the infinite. Sadly it had absolutely nothing to do with my PhD research, so I could only hurriedly write rough notes on recycled printout paper during the small hours of the morning when sleep eluded my body. To this day I am still dreaming about the ideas I had back then, and still trying to piece something together to publish. But it is not easy. So I will be trying to leak out a bit of what is in my mind in some of these WordPress pages. Likely what gets written will be very sketchy and denuded of technical detail. But I figure if I put the thoughts out onto the Web then maybe, somehow, some bright young person will catch them via a sort of Internet osmosis, and take them to a higher level.

[Image: geons versus superstrings]

There are a lot of threads to knit together, and I hardly know where to start. I have already started writing perhaps half a dozen manuscripts, none finished, most very sketchy. And this current writing is yet another forum I have begun.

The latest bit of reading I was doing gave me a little shove to start this topic anew. It happens from time to time that I return to studying Clifford Geometric Algebra (“GA” for short). The round-about way this happened last week was this:

  • Weary from reading a Complex Analysis book that promised a lot but started to get tedious, I searched YouTube for a physics talk as a light break, and found talks on Twistors and Spinors by Sir Roger Penrose. (Twistor Theory is heavily based on complex analysis, so it was a natural search to do after finishing a few chapters of the mathematics book.)
  • I found out that the Twistor Diagram efforts of Andrew Hodges have influenced Nima Arkani-Hamed and even Ed Witten to obtain cool new results crossing twistor theory over with superstring theory and scattering amplitude calculations (the “Amplituhedron” methods).
  • That stuff is fine to dip into, but it does not really advance my pet project of exploring topological geon theory. So I looked for some more light reading and rediscovered papers from the Cambridge Geometric Algebra Research Group (Lasenby, Doran, Gull), and started re-reading Gull’s paper on electron paths and tunnelling in the Dirac theory inspired by David Hestenes’ work.
  • The Gull paper mentions criticisms of the Dirac theory that I had forgotten. In the geometric algebra it is clear that solving the Dirac equation gives not positively charged anti-electrons, but unphysical negative frequency solutions with negative charge and negative mass. So they are not positrons. It is provoking that the authors claim this problem is not fully resolved by second quantisation, but rather perhaps just gets glossed over. I’m not sure what to think of this. (If the negative frequencies get banished by second quantisation, why not just conclude that first quantisation is not nature’s real process?)
  • Still, whatever the flaws in Dirac theory, the electron paths paper has tantalising similarities with the Bohm pilot wave theory electron trajectories. And there is also a reference to the Statistical Interpretation of Quantum Mechanics (SIQM) due to Ballentine (said also to have been Einstein’s preferred interpretation of QM).
  • It gets me thinking again of how GA might be helpful in my problems with topological geons. But I shelve this thought for a bit.
  • Reading Ballentine’s paper is pretty darn interesting. It dates from 1970, but it is super clear and easy to read. I love that in a paper. The gist of it is that an absolute minimalist interpretation of quantum mechanics would drop the Copenhagen ideas and view the wave function as more like a description of what could happen in nature; that is, the wave functions are descriptions of statistical ensembles of identically prepared experiments or systems in nature. (Sure, no two systems are ever prepared in the exact same initial state, but that hardly matters when you are only doing statistics rather than precise deterministic modelling.)
  • So Ballentine was suggesting the wave functions are:
    1. not a complete description of an individual particle, but rather
    2. better thought of as a description of an ensemble of identically prepared states (a one-line way of stating this follows below).
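To keep the ensemble reading concrete, here is a minimal way to state it (my own paraphrase, not Ballentine’s notation): the Born rule probabilities are read as limiting relative frequencies over an ensemble of N identically prepared systems,

\[ \Pr(a) \;=\; |\langle a|\psi\rangle|^2 \;\approx\; \frac{N_a}{N} \qquad (N \to \infty), \]

where N_a counts the runs that yield outcome a. Nothing in this says what any single system is doing — which is exactly Ballentine’s point.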

This is where I ended up, opening my editor to draft a OneOverEpsilon post.

So here’s the thing I like about the ensemble interpretation, and how the geometric algebra reworking of Dirac theory adds a glimmer of clarity about what might be happening in the deep physics of our universe. For a start, the ensemble interpretation is transparently not a complete theoretical framework: since it is a statistical theory, it does not pretend to be a theory of reality. Whatever is responsible for the statistical behaviour of quantum systems is still an open question in SIQM. The Bohm-like trajectories that the geometric algebra solutions to the Dirac theory are able to compute as streamline plots are illuminating in this respect, since they seem to show clearly that what the Dirac wave equation is modelling is almost certainly not the behaviour of a single particle. (One could guess this from Schrödinger theory as well, but I suppose physicists were already lured into believing the literal wave–particle duality meme well before Bohm was able to influence anyone’s thinking.)

Also, it is possible (I do not really know for sure) that the negative frequency solutions in Dirac theory can be viewed as merely an artifact of the statistical ensemble framework. No single particle acts truly in accordance with the Dirac wave equation. So there is no real reason to get one’s pants in a twist about the awful appearance of negative frequencies.

(For those in-the-know: the Dirac theory negative frequency solutions turn out to have particle currents in the reverse spatial direction to their momenta, so that is not a backwards-in-time propagating anti-particle, it is a forwards-in-time propagating negative-mass particle. That is a particle that would fall upwards in a gravitational field if the principle of equivalence holds universally. As an aside: it is a bit funky that this cannot be tested experimentally, since no one can yet clump enough anti-matter together to test which way it accelerates in a gravitational field. But I presume the sign of particle inertial mass can be checked in the lab, and, so far, all massive particles known to science are found to have positive inertial mass.)
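For a rough sketch of why the reversed current happens (standard textbook dispersion relations, not anything specific to the GA papers): a free Dirac wavepacket built from negative frequency modes has energy and group velocity

\[ E_-(\mathbf{p}) = -\sqrt{\mathbf{p}^2 + m^2}, \qquad \mathbf{v}_g = \frac{\partial E_-}{\partial \mathbf{p}} = -\frac{\mathbf{p}}{\sqrt{\mathbf{p}^2 + m^2}}, \]

so the packet drifts opposite to its momentum while still propagating forwards in time — exactly the negative-mass behaviour described above.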

And as a model of reality the Dirac equation therefore has certain limitations and flaws. It can get the statistics correct for particular experiments, but a statistical model always has limits of applicability. This is neither a defence nor a critique of Dirac theory. My view is that it would be a bit naïve to regard Dirac theory as the theory of electrons, and naïve to think it should have no flaws. At best such wave-function models are merely a window frame for a particular narrow view out into our universe. Maybe I am guilty of a bit of sophistry or rhetoric here, but that’s ok for a WordPress blog I think … just puttin’ some ideas “out there”.

Then another interesting confluence is that one of Penrose’s big projects in twistor theory was to do away with the negative frequency solutions in 2-spinor theory. And I think, as I recall, he succeeded in this some time ago with the extension of twistor space to include the two off-null halves. Now I do not know how this translates into real-valued geometric algebra, but in the papers of Doran, Lasenby and Gull you can find direct translations of twistor objects into geometric algebra over the real numbers. So there has to be in there somewhere a translation of Penrose’s trick for eliminating the negative frequencies.

So do you feel a new research paper on Dirac theory in the wind just there? Absolutely you should! Please go and write it for me, will you? I have my students’ and daughters’ educations to deal with and do not have the free time to research off-topic too much. So I hope someone picks up on this stuff. Anyway, this is where maybe the GA reworking of Dirac theory can borrow from twistor theory to add a little bit more insight.

There’s another possible confluence with the main unsolved problem in twistor theory. The twistor theory programme has been held back (stalled?) for some 40 years by what Penrose whimsically calls the “googly problem”. The issue is one of finding self-dual solutions of Einstein’s vacuum equations (as far as I can tell — I find it hard to fathom twistor theory, so I am not completely sure what the issue is). In essence it is the problem of “finding right-handed interacting massless fields (positive helicity) using the same twistor conventions that give rise to left-handed fields (negative helicity)”. Penrose may now have a solution, dubbed Palatial Twistor Theory, which you might be able to read about here: “On the geometry of palatial twistor theory” by Roger Penrose, and also in lighter reading here: “Michael Atiyah’s Imaginative State of Mind” by Siobhan Roberts in Quanta Magazine.

If you do not want to read those articles then the synopsis, I think, is that twistor theory has some problematic issues in gravitation theory when it comes to chirality (handedness). That is indeed a problem, since obtaining a closer connection between relativity and quantum theory was a prime motive behind the development of twistor theory. So if twistor theory cannot fully handle both left- and right-handed solutions to Einstein’s equations it might be said to have failed to fulfil one of its main animating purposes.

So ok, to my mind there might be something the geometric algebra translation of twistor theory can bring to bear on this problem, because general relativity is solved in fairly standard fashion with geometric algebra (that is because GA is a mathematical framework for doing real-space geometry, and handles Lorentzian metrics as simply as Euclidean ones; no artificially imposed complex analytic structure is required). So if the issues with twistor theory are reworked in geometric algebra then some bright spark should be able to do the job twistor theory was designed to do.

By the way, the great beauty and advantage Penrose sees in twistor theory is the grounding of twistor theory in complex numbers. The Geometric Algebra Research Group have pointed out that this is largely a delusion. It turns out that complex analysis and holomorphic functions are just a sector of the full spacetime algebra. Spacetime algebra, and in fact higher dimensional GA, have a concept of monogenic functions which entirely subsumes the holomorphic (analytic) functions of 2D complex analysis. Complex numbers are also completely recast, for the better, as encodings of even subalgebras of the full Clifford–Geometric Algebra of real space. In other words, by switching languages to geometric algebra the difficulties that arise in twistor theory should (I think) be overcome, or at least clarified.
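In case that recasting sounds mysterious, here is the one-line version (standard Hestenes-style GA, nothing exotic): in the geometric algebra of the plane with orthonormal vectors e₁, e₂ satisfying e₁² = e₂² = 1, the unit bivector obeys

\[ (e_1 e_2)^2 = e_1 e_2 e_1 e_2 = -\,e_1 e_1 e_2 e_2 = -1, \]

so the even-grade elements a + b\,e_1 e_2 add and multiply exactly like complex numbers a + bi. The “imaginary unit” is just an oriented plane segment; nothing imaginary is required.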

If you look at the Geometric Algebra Research Group papers you will see how doing quantum mechanics or twistor theory with complex numbers is really a very obscure way to do physics. Using complex analysis and matrix algebra tends to make everything harder to interpret and more obscure. This is because matrix algebra is a type of encoding of geometric algebra, but it is not a favourable encoding: it hides the clear geometric meanings in the expressions of the theory.

*      *       *

So far all I have described is a breezy re-awakening of some old ideas floating around in my head. I rarely get time these days to sit down and hack these ideas into a reasonable shape. But there are more ideas I will try to write down later that are part of a patchwork that I think is worth exploring. It is perhaps sad that over the years I lost the nerve to work on topological geon theory. Using spacetime topology to account for most of the strange features of quantum mechanics is, however, still my number one long-term goal in life. Whether it will meet with success is hard to discern, and perhaps that is telling: if I had more confidence I would simply abandon my current job and dive recklessly head-first into geon theory.

Before I finish up this post I want thus to outline, very breezily and incompletely, the basic idea I had for topological geon theory. It is fairly simplistic in many ways. There is however new impetus from the past couple of years’ developments in the Black Hole firewall paradox debates: the key idea from this literature has been the “ER=EPR” correspondence hypothesis, which is that quantum entanglement (EPR) might be almost entirely explained in terms of spacetime wormholes (ER: Einstein–Rosen bridges). This ignited my interest because back in 1995/96 I had the idea that Planck-scale wormholes in spacetime can allow all sorts of strange and gnarly advanced-causation effects on quantum (Planckian) space and time scales. It seemed clear to me that such “acausal” dynamics could account for a lot of the weird correlations and superpositions seen in quantum physics, and fairly simply so, using pure geometry and topology. It was also clear that if advanced causation (backwards time travel, or closed timelike curves) is admitted into physics, even if only at the Planck scale, then you cannot have a complete theory of predictive physics. Yet physics would be deterministic and basically like general relativity in the 4D block universe picture, but with particle physics phenomenology accounted for by topological properties of localised regions of spacetime (topological 4-geons). The idea, roughly speaking, is that fundamental particles are non-trivial topological regions of spacetime. Geons are not 3D slices of space; they are (hypothetically) fully 4-dimensional creatures of raw spacetime topology. Particles are not apart from spacetime. Particles are not “fields that live in spacetime”, no! Particles are part of spacetime. At least that was the initial idea of Geon Theory.

Wave mechanics, and even quantum field theory, are often perceived to be mysterious because they either have to be interpreted as non-deterministic (when one deals with “wave function collapse”) or as semi-deterministic but incomplete and statistical descriptions of fundamental processes. When physicists trace back where the source of all this mystery lies, they are often led to some version of non-locality. And if you take non-locality at face value it does seem rather mysterious, given that all the models of fundamental physical processes involve discrete localised particle exchanges (Feynman diagrams or their stringy counterparts). One is forced to use tricks like sums over histories to obtain numerical calculations that agree with experiments. But no one understands why such calculational tricks are needed, and this leads to a plethora of strange interpretations, like Many Worlds Theory, Pilot Waves, and so on. A lot of these mysteries, I think, dissolve away when the ultimate source of non-locality is found to be deep non-trivial topology in spacetime which admits closed timelike curves (advanced causation, time travel). To most physicists such ideas appear nonsensical and outrageous. With good reason of course: it is very hard to make sense of a model of the world which allows time travel, as decades of scifi movies testify! But geon theory does not propose unconstrained advanced causation (information from the future influencing events in the past). On the contrary, geon theory is fundamentally limited in outrageousness by the assumption that the closed timelike curves are restricted to something like the Planck scale. I should add that this is a wide open field of research. No one has worked out much at all on the limits and applicability of geon theory. For any brilliant young physicists or mathematicians this is a fantastic open playground to explore.

The only active researcher I know of in this field is Mark Hadley. It seemed amazing to me that after he published his thesis (also around 1994/95, independently of my own musings) no one seemed to take up his ideas and run with them. Not even Chris Isham, who refereed Hadley’s thesis. The write-up of Hadley’s thesis in New Scientist seemed to barely cause a micro-ripple in the theoretical physics literature. I am sure sociologists of science could explain why, but to me, having already stumbled upon the same ideas, it was perplexing at the time.

To date no one has explicitly spelt out how all of quantum mechanics can be derived from geon theory. Although Hadley, I surmise, completed 90% of this project! The final 10% is incredibly difficult though — it would necessitate deriving something like the Standard Model of particle physics from pure 4D spacetime topology — no easy feat when you consider that high dimensional string theory has not really managed the same job despite hundreds of geniuses working on it for over 35 years. My thinking has been that string theory involves a whole lot of ad hockery and “code bloat”, to borrow a term from computer science! If string theory were recast in terms of topological geons living as part of spacetime, rather than as separate to spacetime, then I suspect great advances could be made. I really hope someone will see these hints and connections and do something momentous with them. Maybe some maverick like that surfer dude Garrett Lisi might be able to weigh in and provide some fire power?

In the meantime, geometric algebra has not yet been applied to geon theory, but GA blends in with these ideas since it seems, to me, to be the natural language for geometric physics. If particle phenomenology boils down to spacetime topology, then the spacetime algebra techniques should find exciting applications. The obstacle is that so far spacetime algebra has only been developed for physics in spaces with trivial topology.

Another connection is with “combinatorial spacetime” models — the collection of ideas for “building up spacetime” from discrete combinatorial structures (spin foams, causal networks, causal triangulations, and all that stuff). My thinking is that all these methods are unnecessary, but they hint at interesting directions where geometry meets particle physics, because (I suspect) such combinatorial approaches to quantum gravity are really only gross approximations to the spacetime picture of topological geon theory. It is from the algebra which arises from non-trivial spacetime topology and its associated homology that (I suspect) combinatorial spacetime pictures derive their use.

Naturally I think the combinatorial structure approaches are not fundamental. I think topology of spacetime is what is fundamental.

*      *       *

That probably covers enough of what I wanted to get off my chest for now. There is a lot more to write, but I need time to investigate these things so that I do not get too speculative and vague and vacuously philosophical.

What haunts me most nights, when I try to dream up some new ideas to explore for geon theory (and desperately try to find some puzzles I can actually tackle), is not that someone will arrive at the right ideas before me, but simply that I will never get to understand them before I die. I do not want to be first. I just want to get there myself without knowing how anyone else has got to the new revolutionary insights into spacetime physics. I had the thrill of discovering geon theory by myself, independently of Mark Hadley, but now there has been this long hiatus and I am worried no one will forge the bridges from geon theory to particle physics while I am still alive.

I have a plan for what I will do when/if I do hear such news. It is the same method my brother Greg is using with Game of Thrones. He is on a GoT television and social media blackout until the books come out. He is a G.R.R. Martin purist, you see. But he still wants to watch the TV adaptation later on for amusement (the books are waaayyy better! So he says). It is surprisingly easy to enforce such a blackout. Sports fans will know how. Any follower of All Black rugby who misses an AB test match knows the skill of doing a media blackout until they get to watch their recording or replay. It is impossible to watch an AB game if you know the result ahead of time. Rugby is darned exciting, but a 15-a-side game has too many stops and starts to warrant sitting through it all when you already know the result. But when you do not know the result the build-up and tension are terrific. I think US Americans have something similar in their version of football; since American Football has even more stop/start, it would be excruciatingly boring to sit through it all if you knew the result. But strangely intense when you do not know!

So knowing the result of a sports contest ahead of time is more catastrophic than a movie or book plot spoiler. It would be like that if there were a revolution in fundamental physics involving geon theory ideas. But I know I can do a physics news blackout fairly easily now that I am not lecturing in a physics department. And I am easily enough of an extreme introvert to be able to isolate my mind from the main ideas; all I need is a sniff, and I will then be able to work it all out for myself. It is not like any ordinary friend of mine is going to be able to explain it to me!

If geon theory turns out to have any basis in reality I think the ideas that crack it all open to the light of truth will be among the few great ideas of my generation (the post Superstring generation) that could be imagined. If there are greater ideas I would be happy to know them in time, but with the bonus of not needing a physics news blackout! If it’s a result I could never have imagined then it’d be worth just savouring the triumph of others.


CCL_BY-NC-SA(https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode)

Bohm and Beability

I write this being of sound mind and judgement … etc., etc., …

At this stage of life a dude like me can enter a debate about the foundations of quantum mechanics with little trepidation. There is a chance someone will put forward proposals that are just too technically difficult to understand, but there is a higher chance of getting either something useful out of the debate or obtaining some amusement and hilarity. The trick is to be a little detached and open-minded while retaining a decent dose of scepticism.

Inescapable Non-locality

Recently I was watching a lecture by Sheldon Goldstein (a venerable statesman of physics) who was speaking about John Stewart Bell’s contributions to the foundations of quantum mechanics. Bell was, like Einstein, sceptical of the conventional interpretations that gave either too big a role for “observers” and the “measurement process” or swept such issues aside by appealing to Many Worlds or some other fanciful untestable hypotheses.

What Bell ended up showing was a theory for a class of experiments that could prove the physics of our universe is fundamentally non-local. Bell was actually after experimental verification that we cannot have local hidden variable theories — hidden variables being things in the physics that we cannot observe. Bell hated the idea of unobservable physics (and Einstein would have agreed; me too, but that’s irrelevant). The famous “Bell’s Inequalities” are a set of relations referring to experimental results that will give clearly different numbers for the outcomes of experiments depending on whether our universe’s physics is inherently non-local or classical-with-hidden-variables. The hidden variables are used to model the weirdness of quantum mechanics.
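For concreteness, the most commonly tested version is the CHSH form of Bell’s inequality (textbook material, not anything specific to Goldstein’s talk). With two detector settings per side, a and a′ for one observer and b and b′ for the other, and E(a,b) the correlation of the ±1-valued outcomes, any local hidden variable model obeys

\[ S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad |S| \le 2, \]

while quantum mechanics predicts values up to |S| = 2\sqrt{2} for suitably chosen settings on an entangled pair — and experiments see the quantum value.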

Hidden variable theories attempt to use classical physics, and possibly strict locality (no signals going faster than light, and even no propagation of information faster than light), to explain fundamental physical processes. David Bohm came up with the most complete hidden variables theory, but his, and all subsequent attempts, had some very strange features that seemed always to be needed in order to explain the results of the particular types of experiments that John Bell had devised. In Bohm’s theory there is a feature called the Pilot Wave, an information-carrying wave that physicists can only indirectly observe via its influence on experimental outcomes. We only get to see the statistics and probabilities induced by Bohm’s pilot waves. They spread out everywhere, and they thus link space-like separated regions of the universe between which no signal could ever travel faster than light. This has the character of non-locality but without requiring relativity-violating information signalling, so the hope was that one could use pilot waves to get a local hidden variables theory that would agree with experiments.
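For those who like to see the machinery, here is the usual way Bohm’s guidance idea is written (a standard sketch, not anything peculiar to this discussion). Writing the wave function in polar form ψ = R e^{iS/ħ}, a particle of mass m is assigned the definite velocity

\[ \mathbf{v} = \frac{\nabla S}{m}, \]

and the associated quantum potential Q = -\hbar^2 \nabla^2 R / (2 m R) is where the non-locality hides: for entangled particles Q depends instantaneously on the configuration of the whole system, however far apart the particles are.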

Goldstein tells us that Bell set out to show it was impossible to have a local hidden variables theory, but he ended up showing you could not have any local theory — at all! All theories have to have some non-locality. Or rather, what the Bell Inequalities ended up proving (via numerous repeated experiments measuring conformance to the inequalities) was that the physics in our universe can never be local: whatever theory one devises to model reality, it has to be non-local. So it has to have some way for information to get from one region to another faster than light.

That is what quantum mechanics assumes, but without giving us any mechanism to explain it. A lot of physicists would just say, “It’s just the way our world is”, or they might use some exotic fanciful physics, like Many Worlds, to try to explain non-locality.

History records that Bell’s theorems were tested in numerous types of experiments, some with photons, some with electrons, some with entire atoms, and all such experiments have confirmed quantum mechanics and non-locality, and have disproven hidden variables plus locality. For the record, one may still believe in hidden variables, but the point is that if even your hidden variables theory has to be non-local then you lose most of the motivation for believing in hidden variables. Hidden variables were designed to try to avoid non-locality. That was almost the only reason for postulating them. Why would you want to build into the foundations of a theory something unobservable? Hidden variables were a desperation in this sense, a crazy idea designed to do mainly just one thing — remove non-locality. So Bell and the experiments showed this project has failed.

[Photo: John Stewart Bell at CERN, 1982]

I like this photo of Bell at CERN in 1982 because it shows him at a blackboard with a Bell Inequality calculation for an EPR-type set-up. (Courtesy of Christine Sutton, CERN: https://home.cern/about/updates/2014/11/fifty-years-bells-theorem)

Now would you agree so far? I hope not. Hidden variables are not much more crazy than any of the “standard interpretations” of quantum mechanics, of which there are a few dozen varieties, all fairly epistemologically bizarre. Most other interpretations have postulates that are considerably more radical than hidden variables postulates. Indeed, one of the favourable things about a non-local hidden variables theory is that it would give the same predictions as quantum mechanics but without a terribly bizarre epistemology. Nevertheless, HV theories have fallen out of favour because people do not like nature to have hidden things that cannot be observed. This is perhaps an historical prejudice we have inherited from the school of logical positivism, and maybe for that reason we should be more willing to give it up! But the prejudice is quite persistent.

Quantum Theory without Observers

Goldstein raises some really interesting points when he starts to talk about the role of measurement and the role of observers. He points out that physicists are mistaken when they appeal to observers and some mysterious “measurement process” in their attempts to rectify the interpretations of quantum mechanics. It’s a great point that I have not heard mentioned very often before. According to Goldstein, a good theory of physics should not mention macroscopic entities like observers or measurement apparatus, because such things should be entirely dependent upon—and explained by—fundamental elementary processes.

This demand seems highly agreeable to me. It is a nice general Copernican principle to remove ourselves from the physics needed to explain our universe. And it is only a slightly stronger step to also remove the very vague and imprecise notion of “measurement”.

The trouble is that in basic quantum mechanics one deals with wave functions, or quantum fields more generally, which fundamentally cannot account for the appearance of our world of experience. The reason is that these tools only give us probabilities for all the various ways things can happen over time; we get probabilities and nothing else from quantum theory. What actually happens in time is not accounted for by just giving the probabilities. This is often called the “Measurement Problem” of quantum mechanics. It is not truly a problem. It is a fundamental incompleteness. The problem is that standard quantum theory has absolutely no mechanism for explaining the appearance of the classical reality that we observe.

So this helps explain why a lot of quantum interpretation philosophy injects the notions of “observer” and “measurement” into the foundations of physics. It seems to be necessary for providing an account of the real semi-classical appearance of our world. We are not all held in ghostly superpositions because we all observe and “measure” each other, constantly. Or maybe our body cells are enough — they are “observing each other” for us? Or maybe a large molecule has “observational power” and is sufficient? Goldstein, correctly IMHO, argues this is all bad philosophy. Our scientific effort should be spent on trying to complete quantum theory, or to find a better, more complete theory or framework for fundamental physics.

Here’s Goldstein encapsulating this:

It’s not that you don’t want observers in physics. Observers are in the real world and physics better account for the fact that there are observers. But observers, and measurement, and vague notions like that, and, not just vague, even macroscopic notions, they just seem not to belong in the very formulation of what could be regarded as a fundamental physical theory.

There should be no axioms about “measurement”. Here is one passage that John Bell wrote about this:

The concept of measurement becomes so fuzzy on reflection that it is quite surprising to have it appearing in physical theory at the most fundamental level. … Does not any analysis of measurement require concepts more fundamental than measurement? And should not the fundamental theory be about these more fundamental concepts?

Rise of the Wormholes

I need to explain one more set of ideas before making the note for this post.

There is so much to write about ER=EPR. I have written a few posts about it so far, but not enough. The gist of it, recall, is that the fuss in recent decades over the “Black Hole Information Paradox” and the “Black Hole Firewall” has been incredibly useful in leading a group of theoreticians towards a basic, dim, inchoate understanding that the non-locality in quantum mechanics is somehow related to wormhole bridges in spacetime. Juan Maldacena and Leonard Susskind have pioneered this approach to understanding quantum information.

A lot of the weirdness of quantum mechanics turns out to be just geometry and topology of spacetime.

The “EPR” stands for the “Einstein–Podolsky–Rosen(–Bohm) thought experiments” — precisely the genesis of the ideas for which John Bell devised his Bell Inequalities for testing quantum theory, and which prove that physics involves fundamentally non-local interactions.

The “ER” stands for “Einstein–Rosen wormhole bridges”. Wormholes are a science fiction device for time travel or fast interstellar travel. The idea is that you might imagine creating a spacetime wormhole by pinching off a thread of spacetime, like the beginnings of a black hole, but then reconnecting the pinched end somewhere else in space, maybe a long time or distance away, and keeping the pinched end open at this reconnection region. So you can make this wormhole bridge a short-cut in space or time between two perhaps vastly separated regions of spacetime.

It seems that if you have an extremal version of a wormhole that is essentially shrunk down to zero radius, so it cannot be traversed by any mass, then this minimalistic wormhole still acts as a conduit of information. These provide the non-local connections between spacelike separated points in spacetime. Basically the ends of the ER=EPR wormholes are like particles, and they are connected by a wormhole that cannot be traversed by any actual particle.

Entanglement and You

So now we come to the little note I wanted to make.

I agree with Goldstein that we ought not artificially inject the concept of an observer or a “measurement process” into the heart of quantum mechanics. We should avoid such desperations, and instead seek to expand our theory to encompass better explanations of classical appearances in our world.

The interesting thing is that when we imagine how ER=EPR wormholes could influence our universe, by connecting past and future, we might end up with something much more profound than “observers” and “measurements”. We might end up with an understanding of how human consciousness and our psychological sense of the flow of time emerges from fundamental physics. All without needing to inject such transcendent notions into the physics. Leave the physics alone, let it be pristine, but get it correct and then maybe amazing things can emerge.

I do not have such a theory worked out. But I can give you the main idea. After all, I would like someone to be working on this, and I do not have the time or technical ability yet, so I do not want the world of science to wait for me to get my act together.

First: it would not surprise me if, in future, a heck of a lot of quantum theory “weirdness” was explained by ER=EPR like principles. If you abstract a little and step back from any particular instance of “quantum weirdness”, (like wave-particle duality or superposition or entanglement in any particular experiment) then what we really see is that most of the weirdness is due to non-locality. Now, this might take various guises, but if there is one mechanism for non-locality then it is a good bet something like this mechanism is at work behind most instances of non-locality that arise in quantum mechanics.

Secondly: the main way in which ER=EPR wormholes account for non-local effects is via pure information connecting regions of spacetime through the extremal wormholes. And what is interesting about this is that it makes a primitive form of time travel possible. Only information can “time travel” via these wormholes, but that might be enough to explain a lot of quantum mechanics.

Thirdly: although it is unlikely that time travel effects can ever propagate up to macroscopic physics, because we just cannot engineer large enough wormholes, the statistical effects of the minimalistic ER=EPR wormholes might be enough to account for enough correlation between past and future that we might eventually be able to prove, in principle, that information gets to us from our future, at least at the level of fundamental quantum processes.

Now here’s the more speculative part: I think what might emerge from such considerations is a renewed description of the old Block Universe concept from Einstein’s general relativity (GR). Recall, in GR, time is more or less placed on an equal theoretical footing with space. This means past and future are all connected and exist whether we know it or not. Our future is “out there in time” and we just have not yet travelled into it. And we cannot travel back to our past because the bridges are not possible; the only wormhole bridges connecting past to future over macroscopic times are those minimal extremal ER=EPR wormholes that provide the universe with quantum entanglement phenomena and non-locality.

So I do not know what the consequences of such developments will be. But I can imagine some possibilities. One is that although we cannot access our future, or travel back to our past, information from such regions of the Block Universe is tenuously connected to us nonetheless. Such connections are virtually impossible for us to exploit usefully, because we could never confirm what we are dealing with until the macroscopic future “arrives”, so to speak. So although we know it is not complete, we will still have to end up using the probability amplitude mathematics of quantum mechanics to make predictions about physics. In other words, quantum mechanics models our situation with respect to the world, not the actual state of the world from an atemporal Block Universe perspective. It is the same problem with the time travel experiment conducted in 1994 in the laboratory under the supervision of Günter Nimtz, whose lab sent analogue signals encoding Mozart’s 40th Symphony into the future (by a tiny fraction of a second).

For that experiment there are standard explanations using Maxwell’s theory of electromagnetism that show no particles travel faster than light into the future. Nevertheless, Nimtz’s laboratory got a macroscopic recording of bits of Mozart’s 40th Symphony out of the back-end of a tunnelling apparatus before it was sent into the front-end of the apparatus. The interesting thing to me is not about violation of special relativity or causality. (You might think the physicists could violate causality because one of them could wait at the back-end, and when they hear Mozart come out they could tell their colleague to send Beethoven instead, thus creating a paradox. But they could not do this, because they could not send a communication fast enough in real time to warn their colleague to send Beethoven’s Fifth instead of Mozart.) Sadly, that aspect of the experiment was the most controversial, but it was not the most interesting thing. Many commentators argued about the claimed violations of SR, and there are some good arguments about photon “group velocity” being able to transmit a signal faster than light without any particular individual photon needing to go faster than light.
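For reference, the standard textbook account of these barrier experiments goes via evanescent modes (this is the conventional story, which I am about to push back on). In a waveguide driven below its cutoff frequency, the wavenumber along the guide becomes imaginary,

\[ k_z = \sqrt{\omega^2/c^2 - k_c^2} = i\kappa \qquad (\omega < c\,k_c), \]

so the field inside the barrier decays like e^{-\kappa L} instead of oscillating. Because almost no phase accumulates across the barrier, the group delay becomes nearly independent of the barrier length L (the Hartman effect), which is what produces the apparently superluminal tunnelling times.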

(Actually many of Nimtz’s experiments used electron tunnelling, not photon tunnelling, but the general principles are the same.)

All the “wave packet” and “group velocity” explanations of Nimtz’s time travel experiments are, if you ask me, merely attempts to reconcile the observations with special relativity. They all, however, use collective phenomena — either waves or group packets. But we all know photons are not waves; they are particles (many still debate this, but just bear with my argument). The wave behaviour of fundamental particles is in fact a manifestation of quantum mechanics. Maxwell’s theory is thus only phenomenological. It describes electromagnetic waves, and photons get interpreted (unfortunately) as modes of such waves. But this is mistaken. Photons collectively can behave as Maxwell’s waves, but Maxwell’s theory is describing a fictional reality. Maxwell’s theory only approximates what photons actually do. Photons do not, in Maxwell’s theory, impinge on detectors like discrete quanta. And yet we all know this is what light actually does! It violates Maxwell’s theory every day!

So what I think is truly interesting about Nimtz’s experiments is that they were sensitive enough to give us a window into wormhole traversal. Quantum tunnelling is nothing more than information traversal through ER=EPR type wormholes. At least that is my hypothesis. It is a non-classical effect, and Maxwell’s theory only accounts for it via the fiction that photons are waves. A wrong explanation can often fully explain the facts, of course!

Letting Things Be

What Goldstein, and Bohm, and later John Stewart Bell wanted to do was explain the world. They knew quantum field theory does not explain the world. It does not tell us why things come to be what they are — why a measurement pointer ends up pointing in a particular direction rather than in any one of the other superposed states of pointer orientation the quantum theory tells us it ought to be in. Such definite outcomes are instances of what John Bell referred to as local “beables”. Goldstein explains more in his seminar “John Bell and the Foundations of Quantum Mechanics”, Sesto, Italy, 2014 (https://www.youtube.com/watch?v=RGbpvKahbSY).

My favourite idea, one I have been entertaining for over twenty years, in fact ever since 1995 when I read Kip Thorne’s book about classical general relativity and wormholes, is that wormholes (or technically “closed timelike curves”) are where all the ingredients are for explaining quantum mechanics from a classical point of view. Standard twentieth century quantum theory does not admit wormholes. But if you ignore quantum theory and start again from classical dynamics, but allow ER=EPR wormholes to exist, then I think most of quantum mechanics can be recovered without the need for unexplained axiomatic superpositions and wave-function collapse (the conventional explanation for “measurements” and classical appearances). In other words, quantum theory, like Maxwell’s EM theory, is only a convenient fictional model of our physics. You see, when you naturally have information going backwards and forwards in time you cannot avoid superpositions of state. But when a stable time-slice emerges or “crystallizes” out of this mess of acausal dynamics, then it should look like a measurement has occurred. But no such miracle happens; it simply emerges or crystallizes naturally from the atemporal dynamics. (I use the term “crystallize” advisedly here; it is not a literal crystallization, but something abstractly similar, and George Ellis uses it in a slightly different take on the Block Universe concept, so I figure it is a fair term to use.)

Also, it is possible that atemporal dynamics will tend to statistically “crystallize” something like Bohm’s pilot wave guide potential. If you know a little about Bohmian mechanics you know the pilot wave is postulated as a real potential, something that just exists in our universe’s physics. Yet it has no other model like it: it is not a quantum field, it is not a classical field, it is what it is. But what if there is no need for such a postulate? How could it be avoided? My idea is that maybe the combined statistical effects of influences propagating forwards and backwards in time give rise to an effective potential much like the Bohm pilot wave or Schrödinger wave function. Either way, both constructs, in conventional or Bohmian quantum mechanics, might be just the necessary fictions we need to describe, in one way or another, the proper complete Block Universe atemporal spacetime dynamics induced by the existence of spacetime wormholes. I could throw around other ideas, but the main one is that wormholes endow spacetime with a really gnarly, stringy sort of topology that has, so far, not been explored enough by physicists.

Classically you get non-locality when you allow wormholes. That’s the quickest summary I can give you. So I will end here.

*      *       *


CCL_BY-NC-SA(https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode)

Waking Up to Witten

Do you like driving? I hate it. Driving fast and dangerously in a computer game is ok, but a quick and ephemeral thrill. As for real driving, to and from work, I have a long commute, and no amount of podcasts or music relieves the tiresomeness. Driving around here I need to be on constant alert: there are so many cockroaches (motor scooters) to look out for, and here in Thailand over half the scooter drivers do not wear helmets. I cannot drive 50 metres without seeing a young child driven around on a scooter without a helmet. Neither parent nor child will have a helmet. Mothers even cradle infants while hanging on at the rear of a scooter. It might not be so bad if the speeds were slow, but they are not. That is partly why I find driving exhausting. It is stressful to be so worried about so many other people.

Last evening I got home and collapsed and slept for 6 hours. Then I woke up and could not get back to sleep; it was midnight. So naturally I got up, made a cup of tea, heated up some lasagna, and turned on a video of Edward Witten speaking at Strings 2015, “What Every Physicist Should Know About String Theory”.

Awesome!

True to the title, it was illuminating. Watching Witten’s popular lectures is always good value. Mostly everything he presents I have heard or read about elsewhere, but never in so much seemingly understandable depth and insight. It is really lovely to hear Witten talk about φ³ quantum field theory as a natural result of quantising gravity in one dimension. He describes this as one of nature’s rhymes: patterns at one scale or domain get repeated in others.
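As I understood the lecture, the “gravity in one dimension” is just the worldline theory of a point particle, with the einbein e(τ) playing the role of the metric:

\[ S[x, e] = \frac{1}{2}\int d\tau \left( e^{-1}\, \dot{x}^\mu \dot{x}_\mu - e\, m^2 \right), \]

and when you quantise this and sum over all graphs made of such worldlines joined at trivalent vertices, you reproduce the Feynman diagram expansion of φ³ theory. (That is my paraphrase of the rhyme, so treat the details as my gloss rather than Witten’s exact formulation.)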

Then he describes how the obstacle to getting a quantum theory of gravity in spacetime from a quantum field theory is the fact that in quantum mechanics states do not correspond to operators. He draws this as a Feynman diagram where a deformation of spacetime is indicated by a kink in a Feynman graph line. That is an operator. Whereas states in quantum mechanics do not have such deformations, since they are points.

[Figure: deformations as operators versus states]

An operator describing a perturbation, like a deformation in the spacetime metric, appears as an internal line in a Feynman diagram, not an external line.

So that’s really nice isn’t it?

I had never heard the flaw of point particle quantum field theory given in such a simple and eloquent way. (The ultraviolet divergences are mentioned later by Witten.)

Then Witten does a similar thing for my understanding of how 2D conformal field theory relates to string theory and quantised gravity. In 2-dimensions there is a correspondence between operators and states in the quantum theory, and it is illustrated schematically by the conformal mapping that takes a point in a 2-manifold to a tube sticking out of the manifold.

[Figure: the 2D conformal map taking a point to a tube]

The point being (excuse the pun) that the states are the slices through this conformal geometry, and so deformations of states are now equivalent to deformations of operators, and we have the correspondence needed for a quantum theory of gravity.
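The standard way this correspondence gets written down (textbook radial quantisation, my addition for concreteness) is via the conformal map

\[ z = e^{w} = e^{\tau + i\sigma}, \]

which takes the cylinder coordinates (worldsheet time τ, string coordinate σ) to the punctured plane. The infinite past τ → −∞ is squashed into the single point z = 0, so a state living on a circle of the cylinder becomes a local operator inserted at the origin — the state–operator correspondence in one line.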

This is all very nice, but 3/4 of the way through his talk it still leaves some mystery to me.

  • I still do not quite grok how this makes string theory background-free. The string worldsheet is quantisable, and from this you get either a conformal field theory or quantum gravity, but how is this background-independent quantum gravity?

I find I have to rewind and watch Witten’s talk a number of times to put all the threads together, and I am still missing something. Since I do not have any physicist buddies at my disposal to bug and chat to about this I either have to try physicsforums or stackexchange or something to get some more insight.

So I rewound a few times, and I am pretty certain Witten starts out using a Riemannian metric on a string, and then on a worldsheet, both already embedded in a spacetime. So he is not really describing quantum gravity in spacetime. He is describing a state–operator correspondence in a quantum gravity performed on string worldsheets. Maybe in the end this comes out in the wash as equivalent to quantising general relativity? I cannot tell. In any case, everyone knows string theory yields a graviton. So in some sense you can say, “case closed, up to phenomenology”, haha! Still, a lovely talk and a nice pre-bedtime diversion. But I persisted through to the end of the lecture — delayed sleep experiment.

My gut reaction was that Witten is using some sleight of hand. The conformal field theory maybe is background-free, since it is derived from quantum mechanics of the string worldsheets. But the stringy gravity theory still has the string worldsheet fluffing around in a background spacetime. Does it not? Witten is not clear on this, though I am sure in his mind he knows what he is talking about. Then, as if he read my mind, Witten does give a partial answer to this.

What Witten gets around to saying is that if you go back earlier in his presentation, where he starts with a quantum field theory on a 1D line and then on a 2D manifold, the spacetime he uses was, he claims, arbitrary. So this partially answers my objections. He is using a background spacetime to kick-start the string/CFT theory, which he admits. But then he does the sleight of hand and says

“what is more fundamental is the 2d conformal field theory that might be described in terms of a spacetime but not necessarily.”

So my take on this is that what Witten is saying is (currently) most fundamental in string theory is the kick-starter 2d conformal field theory, or the 2d manifold that starts out as the thing you quantise deformations on to get a phenomenological field theory including quantised gravity. But even this might not be the most fundamental structure. You start to get the idea that string/M-theory is going to morph into a completely abstract model. The strings and membranes will end up not being fundamental. Which is perhaps not too bad.

I am not sure what else you need to start with a conformal field theory. But surely some kind of proto-primordial topological space is needed. Maybe it will eventually connect back to spin foams or spin networks or twistors. Haha! Wouldn’t that be a kick in the guts for string theorists, to find their theory is really built on top of twistor theory! I think twistors give you quite a bit more than a 2d conformal field, but maybe a “bit more” is what is needed to cure a few of the other ills that plague string theory phenomenology.

*      *       *

For what it’s worth, I actually think there is a need in fundamental physics to explain even more fundamental constructs, such as: why do we need to start with a Lagrangian and then sum its action over all paths (or topologies, if you are doing a conformal field theory)? This entire formalism, in my mind, needs some kind of more primitive justification.
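To be clear about which formalism I mean (the standard sum-over-histories recipe, stated here just so the complaint is concrete): everything starts from a Lagrangian density ℒ and the rule

\[ Z = \int \mathcal{D}\phi \; e^{\,i S[\phi]/\hbar}, \qquad S[\phi] = \int d^4x \; \mathcal{L}(\phi, \partial_\mu \phi), \]

summing the phase over every field history. It works magnificently, but nothing in the formalism tells you why nature should be computed this way — and that is the primitive justification I feel is missing.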

Moreover, I think there is a big problem in field theory per se. My view is that spacetime is more fundamental than the fields. Field theory is what should “emerge” from a fundamental theory of spacetime physics, not the other way around. Yet “the other way round” — i.e., fields first, then spacetime — seems to be what a lot of particle or string theorists are suggesting. I realize this is thoroughly counter to the main stream of thought in modern physics, but I cannot help it; I am really a bit of a classicist at heart. I do not try to actively swim against the stream, it is just that in this case that is where I find my compass heading. Nevertheless, Witten’s ideas and the way he elaborates them are pretty insightful. Maybe I am unfair. I have heard Weinberg mention that the fields are perhaps not fundamental.

*      *       *

OK, that’s all for now. I have to go and try to tackle Juan Maldacena’s talk now. He is not as easy to listen to, but since this will be a talk for a general audience it might be comprehensible. Witten might be delightfully nerdy, but Maldacena is thoroughly cerebral and hard to comprehend. Hoping he takes it easy on his audience.

*      *       *


CCL_BY-NC-SA(https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode)

Eternal Rediscovery

I have a post prepared to upload in a bit that will announce a possible hiatus from this WordPress blog. The reason is just that I found a cool book I want to try to absorb, The Princeton Companion to Mathematics by Gowers, Barrow-Green and Leader. Doubtless I will not be able to absorb it all in one go, so I will likely return to blogging periodically. But there is also teaching and research to conduct, so this book will slow me down. The rest of this post is a lightweight brain-dump of some things that have been floating around in my head.

Recently, while watching a lecture on topology I was reminded that a huge percentage of the writings of Archimedes were lost in the siege of Alexandria. The Archimedean solids were rediscovered by Johannes Kepler, and we all know what he was capable of! Inspiring Isaac Newton is not a bad epitaph to have for one’s life.

The general point about rediscovery is a beautiful thing. Mathematics, more than other sciences, has this quality whereby a young student can take time to investigate previously established mathematics but then take breaks from it to rediscover theorems for themselves. How many children have rediscovered Pythagoras’ theorem, or the Golden Ratio, or Euler’s Formula, or any number of other simple theorems in mathematics?

Most textbooks rely on this quality. It is also why most “Exercises” in science books are largely theoretical. Even in biology and sociology. They are basically all mathematical, because you cannot expect a child to go out and purchase a laboratory set-up to rediscover experimental results. So much textbook teaching is mathematical for this reason.

I am going to digress momentarily, but will get back to the education theme later in this article.

The entire cosmos itself has sometimes been likened to an eternal rediscovery. The theory of Eternal Inflation postulates that our universe is just one bubble in a near endless ocean of baby and grandparent and all manner of other universes. Recently, though, Alexander Vilenkin and Audrey Mithani found that a wide class of inflationary cosmological models are unstable, meaning they could not have existed eternally into the past; there had to be some initial seed. This rather destroys the “eternal” in eternal inflation. Here is a Discover magazine account: “What Came Before the Big Bang? — Cosmologist Alexander Vilenkin believes the Big Bang wasn’t a one-off event”. Or you can click this link to hear Vilenkin explain his ideas himself: FQXi: Did the Universe Have a Beginning? Vilenkin seems to be having a rather golden period of originality over the past decade or so; I regularly come across his work.

If you like the idea of inflationary cosmology you do not have to worry too much though. You still get the result that infinitely many worlds could bubble out of an initial inflationary seed.

Below is my cartoon rendition of eternal inflation in the realm of human thought:
cosmol_primordial_thoughtcloud_field

Oh to be a bubble thoughtoverse of the Wittenesque variety.

Quantum Fluctuations — Nothing Cannot Fluctuate

One thing I really get a bee in my bonnet about is the endless recounting in the popular literature about the beginning of the universe of the naïve idea that no one needs to explain the origin of the Big Bang and inflatons because “vacuum quantum fluctuations can produce a universe out of nothing”. This sort of pseudo-scientific argument is so annoying. It is a cancerous argument that plagues modern cosmology. And even a smart person like Vilenkin suffers from this disease. Here I quote him, as quoted in another article on the PBS NOVA website:

Vilenkin has no problem with the universe having a beginning. “I think it’s possible for the universe to spontaneously appear from nothing in a natural way,” he said. The key there lies again in quantum physics—even nothingness fluctuates, a fact seen with so-called virtual particles that scientists have seen pop in and out of existence, and the birth of the universe may have occurred in a similar manner.
Source: http://www.pbs.org/wgbh/nova/blogs/physics/2012/06/in-the-beginning/

At least you have to credit Vilenkin with the brains to have said it is only “possible”. But even that caveat is fairly weaselly. My contention is that out of nothing you cannot get anything, not even a quantum fluctuation. People seem to forget quantum field theory is a background-dependent theory; it requires a pre-existing spacetime. There is no “natural way” to get a quantum fluctuation out of nothing. I just wish people would stop insisting on this sort of non-explanation for the Big Bang. If you start with not even spacetime then you really cannot get anything, especially not something as loaded with stuff as an inflaton field. So one day in the future I hope we will live in a universe where such stupid arguments are nonexistent nothingness, or maybe only vacuum fluctuations inside the mouths of idiots.

There are other types of fundamental theories, background-free theories, where spacetime is an emergent phenomenon. And proponents of those theories can get kind of proud about having a model inside their theories for a type of eternal inflation. Since their spacetimes are not necessarily pre-existing, they can say they can get quantum fluctuations in the pre-spacetime stuff, which can seed a Big Bang. That would fit with Vilenkin’s ideas, but without the silly illogical need to postulate a fluctuation out of nothingness. But this sort of pseudo-science is even more insidious. Just because they do not start with a presumption of a spacetime does not mean they can posit quantum fluctuations in the structure they start with. I mean they can posit this, but it is still not an explanation for the origins of the universe. They still are using some kind of structure to get things started.

Probably still worse are folks who go around flippantly saying that the laws of physics (the correct ones, when or if we discover them) “will be so compelling they will assert their own existence”. This is basically an argument saying, “This thing here is so beautiful it would be a crime if it did not exist, in fact it must exist since it is so beautiful, if no one had created it then it would have created itself.” There really is nothing different about those two statements. It is so unscientific it makes me sick when I hear such statements touted as scientific philosophy. These ideas go beyond thought mutation and into a realm of lunacy.

I think the cause of these thought cancers is the immature fight in society between science and religion. These are tensions in society that need not exist, yet we all understand why they exist. Because people are idiots. People are idiots where their own beliefs are concerned, by and large, even myself. But you can train yourself to be less of an idiot by studying both sciences and religions and appreciating what each mode of human thought can bring to the benefit of society. These are not competing belief systems. They are compatible. But so many believers in religion are falsely following corrupted teachings; they veer into the domain of science blindly, thinking their beliefs are the trump cards. That is such a wrong and foolish view, because everyone with a fair and balanced mind knows the essence of spirituality is a subjective viewpoint about the world; it deals with one’s inner consciousness. And so there is no room in such a belief system for imposing one’s own beliefs onto others, and especially not imposing them on an entire domain of objective investigation like science. And, on the other hand, many scientists are irrationally anti-religious and go out of their way to try and show a “God” idea is not needed in philosophy. But in doing so they are also stepping outside their domain of expertise. If there is some kind of omnipotent creator of all things, It certainly could not be comprehended by finite minds. It is also probably not going to be amenable to empirical measurement and analysis. I do not know why so many scientists are so virulently anti-religious. Sure, I can understand why they oppose current religious institutions; we all should, they are mostly thoroughly corrupt. But the pure abstract idea of religion and ethics and spirituality is totally 100% compatible with a scientific worldview. Anyone who thinks otherwise is wrong! (Joke!)

Also, I do not favour inflationary theory for other reasons. There is no good theoretical justification for the inflaton field other than inflation’s prediction of the homogeneity and isotropy of the CMB. You’d like a good theory to have more than one trick! You know. Like how gravity explains both the orbits of planets and the way an apple falls to the Earth from a tree. With inflatons you have this quantum field that is theorised to exist for one and only one reason, to explain homogeneity and isotropy in the Big Bang. And don’t forget, the theory of inflation does not explain the reason the Big Bang happened; it does not explain its own existence. If the inflaton had observable consequences in other areas of physics I would be a lot more predisposed to taking it seriously. And to be fair, maybe the inflaton will show up in future experiments. Most fundamental particles and theoretical constructs began life as a one-trick sort of necessity. Most develop to be a touch more universal and will eventually arise in many aspects of physics. So I hope, for the sake of the fans of cosmic inflation, that the inflaton field does have other testable consequences in physics.

In case you think that is an unreasonable criticism, there are precedents for fundamental theories having a kind of mathematically built-in explanation. String theorists, for instance, often appeal to the internal consistency of string theory as a rationale for its claim as a fundamental theory of physics. I do not know if this really flies with mathematicians, but the string physicists seem convinced. In any case, to my knowledge the inflaton does not have this sort of quality; it is not a necessary ingredient for explaining observed phenomena in our universe. It does have a massive head start on being a candidate sole explanation for the isotropy and homogeneity of the CMB, but so far that race has not yet been completely run. (Or if it has then I am writing out of ignorance, but … you know … you can forgive me for that.)

Anyway, back to mathematics and education.

You have to love the eternal rediscovery built into mathematics. It is what makes mathematics eternally interesting to each generation of students. But as a teacher you have to train the nerdy children to not bother reading everything. Apart from the fact there is too much to read, they should be given the opportunity to read a little then investigate a lot, and try to deduce old results for themselves as if they were fresh seeds and buds on a plant. Giving students a chance to catch old water as if it were fresh dewdrops of rain is a beautiful thing. The mind that sees a problem afresh is blessed, even if the problem was solved centuries ago. The new mind encountering the ancient problem is potentially rediscovering grains of truth in the cosmos, and is connecting spiritually to past and future intellectual civilisations. And for students of science, the theoretical studies offer exactly the same eternal rediscovery opportunities. Do not deny them a chance to rediscover theory in your science classes. Do not teach them theory. Teach them some theoretical underpinnings, but then let them explore before giving the game away.
With so much emphasis these days on educational accountability and standardised tests there is a danger of not giving children these opportunities to learn and discover things for themselves. I recently heard an Intelligence Squared debate on academic testing. One crazy woman from the UK government was arguing that testing, testing, and more testing (“relentless testing” were her words) was vital and necessary and provably increased student achievement.

Yes, practising tests will improve test scores, but it is not the only way to improve test scores. And relentless testing will prepare students for all manner of mindless jobs out there in society that are drill-like and amount to going through routine work, like tests. But there is less evidence that relentless testing improves imagination and creativity.

Let’s face it though. Some jobs and areas of life require mindlessly repetitive tasks. Even computer programming has modes where for hours the normally creative programmer will be doing repetitive but possibly intellectually demanding chores. So we should not agitate and jump up and down wildly proclaiming tests and exams are evil. (I have done that in the past.)

Yet I am far more inclined towards the educational philosophy of the likes of Sir Ken Robinson, Neil Postman, and Alfie Kohn.

My current attitude towards tests and exams is the following:

  1. Tests are incredibly useful for me with large class sizes (120+ students), because I get a good overview of how effective the course is for most students, as well as a good look at the tails. Here I am using the fact that test scores (for well-designed tests) correlate well with student academic aptitudes.
  2. My use of tests is mostly formative, not summative. Tests give me a valuable way of improving the course resources and learning styles.
  3. Tests and exams suck as tools for assessing students because they do not assess everything there is to know about a student’s learning. Tests and exams correlate well with academic aptitudes, but not well with other soft skills.
  4. Grading in general is a bad practice. Students know when they have done well or not. They do not need to be told. At schools, if parents want to know they should learn to ask their children how school is going, and students should be trained to be honest, since life tends to work out better that way.
  5. Relentless testing is deleterious to the less academically gifted students. There is a long tail in academic aptitude, and the students in this tail will often benefit from a kinder and more caring mode of learning. You do not have to be soft and woolly about this; it is a hard-core educational psychology result: if you want the best for all students you need to treat them all as individuals. For some, tests are great, terrific! For others, tests and exams are positively harmful. You want to try and figure out who is who, at least if you are lucky enough to have small class sizes.
  6. For large class sizes, like at a university, do still treat all students individually. You can easily do this by offering a buffet of learning resources and modes. Do not, whatever you do, provide a single-mode style of lecture+homework+exam course. That is ancient technology, medieval. You have the Internet, use it! Gather vast numbers of resources with all different manners of approach to the subject you are teaching, then do not teach it! Let your students find their own way through all the material. This will slow down a lot of students — the ones who have been indoctrinated and trained to do only what they are told — but if you persist and insist they navigate your course themselves then they should learn deeper as a result.

Solving the “do what I am told” problem is in fact the very first job of an educator, in my opinion. (For a long time I suffered from lack of a good teacher in this regard myself. I wanted to please, so I did what I was told; it seemed simple enough. But … oh crap … the day I found out this was holding me back, I was furious. I was about 18 at the time. Still hopelessly naïve and ill-informed about real learning.) If you achieve nothing else with a student, transitioning them from being an unquestioning sponge (or oily duck — take your pick) to being self-motivated and self-directed in their learning is the most valuable lesson you can ever give them. So give it to them.

So I use a lot of tests. But not for grading. For grading I rely more on student journal portfolios. All the weekly homework sets are quizzes though, so you could criticise the fact I still use these for grading. As a percentage, however, the journals are more heavily weighted (usually 40% of the course grade). There are some downsides to all this.

  • It is fairly well established in research that grading using journals or subjective criteria is prone to bias. So unless you anonymise student work, you have a bias you need to deal with somehow before handing out final grades (see the sketch just after this list).
  • Grading weekly journals, even anonymously, takes a lot of time, about 15 to 20 times the hours that grading summative exams takes. That is a huge time commitment, so you have to use it wisely by giving students very good quality early feedback on their journals.
  • I still have not found a way to test these methods easily. I would like to know quantitatively how much more effective journal portfolios are compared to exam-based assessments. I am not a specialist education researcher, and I research and write about a lot of other things, so it is taking me a while to get around to answering this.
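Here is a minimal sketch of the sort of anonymisation I have in mind, assuming journals arrive as files named by student ID; the filenames, salt, and folder layout are all hypothetical:

```python
import hashlib
import os
import shutil

SALT = "change-me-each-semester"  # stops a grader reversing IDs by hashing guesses

def anon_code(student_id: str) -> str:
    """Deterministic pseudonym for a student ID (salted SHA-256, truncated)."""
    return hashlib.sha256((SALT + student_id).encode()).hexdigest()[:8]

def anonymise(src_dir: str, dst_dir: str) -> dict:
    """Copy journals to pseudonymous filenames; return the secret mapping."""
    os.makedirs(dst_dir, exist_ok=True)
    mapping = {}
    for fname in os.listdir(src_dir):
        sid, ext = os.path.splitext(fname)   # e.g. "s1234567.pdf"
        code = anon_code(sid)
        mapping[code] = sid                  # keep this away from the graders
        shutil.copy(os.path.join(src_dir, fname),
                    os.path.join(dst_dir, code + ext))
    return mapping
```

Grade the pseudonymous copies, and only join grades back to student IDs with the mapping at the very end.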

I have not solved the grading problem; for now grades are required by the university, so legally I have to assign them. One subversive thing I am following up on is to refuse to submit singular grades. As a person with a physicist’s world-view I believe strongly in the role of sound measurement practice, and we all know a single letter grade is not a fair reflection of a student’s attainment. At a minimum a spread of grades should be given to each student, or better, a three-point summary: LQ, Median, UQ. Numerical scaled grades can then be converted into a fairer letter grade range. And GPA scores can also be given as a central measure plus a spread measure.
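Concretely, the three-point summary is trivial to compute from a student’s pile of assessment scores. A minimal sketch, where the letter-grade cut-offs are entirely hypothetical:

```python
from statistics import quantiles

def three_point_summary(scores):
    """Return (LQ, median, UQ) for one student's assessment scores."""
    lq, med, uq = quantiles(scores, n=4)  # the three quartile cut points
    return lq, med, uq

def letter(score, cutoffs=((80, "A"), (65, "B"), (50, "C"), (0, "D"))):
    """Map a numerical score to a letter grade; the cut-offs are made up."""
    return next(grade for cut, grade in cutoffs if score >= cut)

scores = [72, 55, 88, 64, 70, 91, 60, 77]   # one student's semester, say
lq, med, uq = three_point_summary(scores)
print(f"summary {lq:.0f}/{med:.0f}/{uq:.0f}:",
      f"grade range {letter(lq)}-{letter(uq)}, central {letter(med)}")
```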

I can imagine many students will have a large to moderate assessment spread, and so it is important to give them this measure; one in a few hundred students might statistically get very low grades by pure chance when their potential is a lot higher. I am currently looking into research on this.

OK, so in summary: even though institutions require a lot of tests, you can work around the tests and still give students a fair grade while not sacrificing the true learning opportunities that come from the principle of eternal rediscovery. Eternal rediscovery is such an important idea that I want to write an academic paper about it and present it at a few conferences to get people thinking about the idea. No one will disagree with it. Some may want to refine and adjust the ideas. Some may want concrete realizations and examples. The real question is, will they go away and truly inculcate it into their teaching practices?

CCL_BY-NC-SA(https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode)

*      *       *

Coupling to the Universe — or “You Are You Because You Are You”

Carlo Rovelli can sure talk up a blizzard (I’m reviewing his conference talk, The preferred time direction in the dynamics of the full universe). For an Italian native he can really weave a blinding spell in English.

He has me confused when he tries to explain the apparent low entropy Big Bang cosmology. He uses his own brand of relational quantum mechanics, I think, but it comes out sounding a bit circular or anthropomorphic. Yet earlier in his lectures he often takes pains to deny anthropomorphic views.

So it is quite perplexing when he tries to explain our perception of an arrow of time by claiming that “it is what makes us us.” Let me quote him, so you can see for yourself. He starts out by claiming the universe starts in a low entropy state only from our relative point of view. Entropy is an observer-dependent concept. It depends on how you coarse grain your physics. OK, I buy that. We couple to the physical external fields in a particular way, and this is what determines how we perceive or coarse grain our slices of the universe. So how we couple to the universe supposedly explains the entropy we see. If by some miracle we coupled more like antiparticles effectively travelling in the reverse time direction then we’d see entropy quite differently, one imagines. So anyway, Rovelli then summarizes:

[On slides: Entropy increase (passage of time) depend on the coarse graining, hence the subsystem, not the microstate of the world.] … “Those depend on the way we couple to the rest of the universe. Why do we couple to the rest of the universe in this way? Because if we didn’t couple to the rest of the universe this way we wouldn’t be us. Us as things, as biological entities that very much live in time coupled in a manner such that the past moves towards the future in a precise sense … which sense? … the one described by the Second Law of Thermodynamics.”

You see what I mean?

Maybe I am unfairly pulling this out of a rushed conference presentation, and to be more balanced and fair I should read his paper instead. If I have time I will. But I think a good idea deserves a clear presentation, not a rush job with a lot of vague wishy-washy babble, nor obscured in a blizzard of words and jargon.

OK, so here’s an abstract from an arxiv paper where Rovelli states things in written English:

“Phenomenological arrows of time can be traced to a past low-entropy state. Does this imply the universe was in an improbable state in the past? I suggest a different possibility: past low-entropy depends on the coarse-graining implicit in our definition of entropy. This, in turn depends on our physical coupling to the rest of the world. I conjecture that any generic motion of a sufficiently rich system satisfies the second law of thermodynamics, in either direction of time, for some choice of macroscopic observables. The low entropy of the past could then be due to the way we couple to the universe (a way needed for us doing what we do), hence to our natural macroscopic variables, rather than to a strange past microstate of the world at large.”

That’s a little more precise, but still no clearer on import. He is still really just giving an anthropocentric argument.
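To be fair, the premise that entropy depends on coarse graining is solid, and it is easy to make concrete. A minimal sketch of my own (the particle positions and bin widths are arbitrary choices, nothing from Rovelli’s paper):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 10_000)   # one microstate: 10,000 particle positions

def coarse_entropy(positions, bin_width):
    """Shannon entropy of the coarse-grained occupation distribution."""
    bins = np.arange(positions.min(), positions.max() + bin_width, bin_width)
    counts, _ = np.histogram(positions, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())

# Same microstate, two observers with different grainings:
print(coarse_entropy(x, 0.01))  # fine bins: entropy around 6 nats
print(coarse_entropy(x, 1.0))   # coarse bins: entropy around 1.4 nats
```

One and the same microstate, two different numbers for “the entropy”. That much is uncontroversial; it is the further move, from graining-dependence to “the way we couple to the universe”, that I find unconvincing.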

I’ve always thought science was at its best when removing the human from the picture. The problem for our universe should not be framed as one of “why do we see an arrow of time?” because, as Rovelli points out, for complex biological systems like ourselves there really is no other alternative. If we did not perceive an arrow of time we would be defined out of existence!

The problem for our universe should be simply, “why did our universe begin (from any arbitrary sentient observer’s point of view) with such low entropy?”

But even that version has the whiff of observer about it. Also, if you just define the “beginning” as the end that has the low entropy, then you are done, no debate. So I think there is a more crystalline version of what cosmology should be seeking an explanation for, which is simply, “how can any universe ever get started (from either end of a singularity) in a low entropy state?”

But even there you have a notion of time, which we should remove, since “start” is not a proper concept unless one already is talking about a universe. So the barest question of all perhaps, (at least the barest that I can summon) is, “how do physics universes come to exist?”

This does not even explicitly mention thermodynamics or an arrow of time. But within the question those concepts are embedded. One needs to carefully define “physics” and “physics universes”. But once that is done then you have a slightly better philosophy of physics project.

More hard-core physicists, however, will never stoop to tackle such a question. They will tend to drift towards something where a universe is already posited to exist and has had a Big Bang, and then they will fret and worry about how it could have a low entropy singularity.

It is then tempting to take the cosmic Darwinist route. But although I love the idea, it is another one of those insidious memes that is so alluring, but in the cold dead hours of night, when the vampires of popular physics come to devour your life blood seeking converts, seems totally unsatisfying and anaemic. The Many Worlds Interpretation has its fangs sunk into a similar vein, which I’ve written about before.

cosmo_OnceUponATimeInRovellisUniverse

*      *       *

Going back to Rovelli’s project, I have this problem for him to ponder. What if there is no way for any life, not even in principle, to couple to the universe other than the way we humans do, through interaction with strings (or whatever they are) via Hamiltonians and mass-energy? If this is true, and I suspect it is, then is not Rovelli’s “solution” to the low entropy Big Bang a bit meaningless?

I have a pithy way of summarising my critique of Rovelli. I would just point out:

The low entropy past is not caused by us. We are the consequence.

So I think it is a little weak for Rovelli to conjecture that the low entropy past is “due to the way we couple to the universe.” It’s like saying, “I conjecture that before death one has to be born.” Well, … duuuuhhh!

The reason my photo is no longer on Facebook is due to the way I coupled to my camera.

I am an X-gener due to the way my parents coupled to the universe.

You see what I’m getting at? I might be over-reaching into excessive sarcasm, but my point is just that none of this is good science. They are not explanations. It is just story-telling. Still, Rovelli does give an entertaining story if you are a physics geek.

So I had a read of Rovelli’s paper and saw the more precise statement of his conjecture:

Rovelli’s Conjecture: “Any generic microscopic motion of a sufficiently rich system satisfies the second law (in either time direction) for a suitable choice of macroscopic observables.”

That’s the sort of conjecture that says nothing. The problem is the “sufficiently rich” clause together with the “suitable choice” clause. You can generate screeds of conjectures with such a pair of clauses. The conjecture only has “teeth” if you define what you mean by “sufficiently rich” and if a “suitable choice” can be identified or motivated as plausible. Because otherwise you are not saying anything useful. For example, “Any sufficiently large molecule will be heavier than a suitably chosen bowling ball.”

*      *       *

Rovelli does provide a toy example to illustrate his notions in classical mechanics. He has yellow balls and red balls. The yellow balls have an attractor which gives them a natural second law of thermodynamic arrow of time. The same box also has red balls with a different attractor which gives them the opposite arrow of time according to the second law. (Watching the conference video for this is better than reading the arxiv paper.) But “so what?”

Rovelli has constructed a toy universe that has entities that would experience opposite time directions if they were conscious. But there are so many things wrong with this example that it cannot seriously be considered a bulwark for Rovelli’s grander project. For starters, what is the nature of his Red and Yellow attractors? If they are going to act complicated enough to imbue the toy universe with anything resembling conscious life, then the question of how the arrow of time arises is not answered; it just gets pushed back to the properties of these mysterious Yellow and Red attractors.

And if you have only such a toy universe without any noticeable observers then what is the point of discussing an arrow of time? It is only a concept that a mind external to that world can contemplate. So I do not see the relevance of Rovelli’s toy model for our much more complicated universe which has internal minds that perceive time.

You could say, in principle the toy model tells us there could be conscious observers in our universe who are experiencing life but in the reverse time direction to ourselves, they remember our future but not our past, we remember their future but not their past. Such dual time life forms would find it incredibly hard to communicate, due to this opposite wiring of memory.

But I would argue that Rovelli’s model does not motivate such a possibility, for the same reason as before. Constructing explicit models of different categories of billiard balls each obeying a second law of thermodynamics in opposite time directions in the same system is one thing, but not much can be inferred from this unless you add in a whole lot of further assumptions about what Life is, metabolism, self-replication, and all that. But if you do this the toy model becomes a lot less toy-like and in fact terribly hard to explicitly construct. Maybe Stephen Wolfram’s cellular automata can do the trick? But I doubt it.

I should stop harping on this. Let me just record my profound dissatisfaction with Rovelli’s attempt to demystify the arrow of time.

*      *       *

If you ask me, we are not at a sufficiently mature juncture in the history of cosmology and physics to be able to provide a suitable explanation for the arrow of time.

So I have Smith’s Conjecture:

At any sufficiently advanced juncture in the history of science, enough knowledge will have accumulated to enable physicists to provide a suitable explanation for the arrow of time.

Facetiousness aside, I really do think that trying to explain the low entropy big bang is a bit premature. It would be much better to be patient and wait for more information about our universe before attempting to launch into the arrow of time project. The reason I believe so is because I think the ultimate answers about such cosmological questions are external to our observable universe.

But even whether they are external or internal there is a wider problem to do with the nature of time and our universe. We do not know if our universe actually had a beginning, a true genesis, or whether it has always existed.

If the universe had a beginning then the arrow of time problem is the usual low entropy puzzle. But if the universe had no beginning then the arrow of time problem becomes a totally different question. There is even a kind of intermediate problem that occurs if our universe had a start but within some sort of wider meta-cosmos. Then the problem is much harder, that of figuring out the laws of this putative metaverse. Imagine the hair-pulling of cosmologists who discover this latter possibility as a fact about their universe (though I would envy them the sheer ability to discover the fact, it’d be amazing).

So until we settle such a fundamental question I do not see a lot of fruitfulness in pursuing the arrow of time puzzle. It’s a counting your chickens before they hatch situation. Or should I say, counting your microstates before they batch.

*      *       *


CCL_BY-NC-SA(https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode)

A Plain Simple Lecture — Non-ergodic, but … satisfying

There is another talk from the Philosophy of Cosmology Conference in Tenerife 2014 that is in a similar league to Joel Primack’s awesome display of the Bolshoi simulations of dark matter structure. Only the one I will write about tonight is pretty much words and equations. No pretty pictures. But don’t let that dissuade you from enjoying the talk by Bob Wald on Gravity and Thermodynamics.

Most physics students might only know Robert Wald from his famous textbook on General Relativity.

Aside: While searching for a nice picture to illuminate this post I came across a nice freehand SVG sketch of Shaun Maguire’s. He is a postdoc at Caltech and writes nicely in a blog there: Quantum Frontiers. If you are more a physics/math geek than a philosophy/physics geek then you will enjoy his blog. I found it very readable; not stunning poetic prose, but easy-going and sufficiently high on technical content to hold my interest.

blackhole_thermodynamics_RindlerQuest2

Says Maguire, “I’ve been trying to understand why the picture on the left is correct, even though my intuition said the middle picture should be (intuition should never be trusted when thinking about quantum gravity.)” Source: http://quantumfrontiers.com/2014/06/20/ten-reasons-why-black-holes-exist/

That has to do with black hole firewalls, which digresses away from Wald’s talk.

It is not true to say Wald’s talk is plain and simple, since the topic is advanced; only a second course on general relativity would cover the details. And you need to get through a lot of mathematical physics in a first course of general relativity. But what I mean is that Wald is such a knowledgeable and clear thinker that he explains everything crisply and understandably, like a classic old-school teacher would. It is not flashy, but damn! It is tremendously satisfying and enjoyable to listen to. I could hit the pause button and read his slides, then rewind and listen to his explanation, and it all goes together so sweetly. He neither repeats his slides verbatim nor deviates from them confusingly. However, I think if I were in the audience I would be begging for a few pauses of silence to read the slides. So the advantage is definitely with the at-home Internet viewer.

Now if you are still reading this post you should be ashamed! Why did you not go and download the talk and watch it?

I loved Wald’s lucid discussion of the Generalised Second Law (which is basically a redefinition of entropy: the generalised entropy is the ordinary thermodynamic entropy plus the black hole entropy, the latter being proportional to the black hole surface area).
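For the record, the standard textbook form (these are the usual conventions, not anything specific to Wald’s slides) is

$$ S_{\mathrm{gen}} \;=\; S_{\mathrm{outside}} \;+\; \frac{k_B c^3 A}{4 G \hbar}, \qquad \Delta S_{\mathrm{gen}} \ge 0, $$

where A is the total horizon area and S_outside is the ordinary entropy of matter outside the horizon; the Generalised Second Law is the statement that S_gen never decreases.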

Then he gives a few clear arguments that provide strong reasons for regarding the black hole area formula as equivalent to an entropy, one of which is that in general relativity dynamic instability is equivalent to thermodynamic instability; hence the dynamic process of black hole area increase is directly connected to black hole entropy increase. (This is in classical general relativity.)

But then he puts the case that the origin of black hole entropy is not perfectly clear, because black hole entropy does not arise out of the usual ergodicity in statistical mechanics systems, whereby a system in an initial special state relaxes via statistical processes towards thermal equilibrium. Black holes are non-ergodic. They are fairly simple beasts that evolve deterministically. “The entropy for a black hole arises because it has a future horizon but no past horizon,” is how Wald explains it. In other words, black holes do not really “equilibrate” like classical statistical mechanics gases. Or at least, they do not equilibrate to a thermal temperature ergodically like a gas; they equilibrate dynamically and deterministically.

Wald’s take on this is that, maybe, in a quantum gravity theory, the detailed microscopic features of gravity (foamy spacetime?) will imply some kind of ergodic process underlying the dynamical evolution of black holes, which will then heal the analogy with statistical mechanics gas entropy.

This is a bit mysterious to me. I get the idea, but I do not see why it is a problem. Entropy arises in statistical mechanics, but you do not need statistically ergodic processes to define entropy. So I did not see why Wald is worried about the different equilibration processes viz. black holes versus classical gases. They are just different ways of defining an entropy and a Second Law, and it seems quite natural to me that they therefore might arise from qualitatively different processes.

But hold onto your hats. Wald next throws me a real curve ball.

Smaller than the Planck Scale … What?

Wald’s next concern about a breakdown of the analogy between statistical gas entropy and dynamic black hole entropy is a doozie. He worries about the fact that vacuum fluctuations in a conventional quantum field theory are basically ignored in statistical mechanics, yet they cannot (or should not?) be ignored in general relativity, since, for instance, the ultra-ultra-high energy vacuum fluctuations in the early universe get red-shifted by the expansion of the universe into observable features we can now measure.

Wald is talking here about fluctuations on a scale smaller than the Planck length!

To someone with my limited education, you begin by thinking, “Oh, that’s OK, we all know (one says knowingly, not really knowing) that stuff beyond the Planck scale is not very clearly defined and has this sort of ‘all bets are off’ quality about it. So we do not need to worry about it until there is a theory covering the Planck scale.”

But if I understand it correctly, what Wald is saying is that what we see in the cosmic background radiation, or maybe in some other observations (Wald is not clear on this), corresponds to such red-shifted modes, so we literally might be seeing fluctuations that originated on a scale smaller than the Planck length if we probe the cosmic background radiation at highly ultra-red-shifted wavelengths.

That was a bit of an eye-opener for me. I was previously not aware of any physics that potentially probed beyond the Planck scale. I wonder if anyone else found this surprising? Maybe if I updated my physics education I would find out that it is not so surprising.
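A hedged back-of-envelope shows the numbers are at least plausible; none of these figures come from Wald’s talk, they are just standard orders of magnitude:

```python
import math

l_planck = 1.6e-35   # Planck length, metres
l_today = 1.0e-3     # a millimetre-ish late-time scale, an arbitrary pick

# Stretch factor needed to blow a Planck-length mode up to ~1 mm,
# and the equivalent number of inflationary e-folds:
stretch = l_today / l_planck
print(f"stretch ~ {stretch:.1e}, e-folds ~ {math.log(stretch):.0f}")
# -> stretch ~ 6.3e+31, about 73 e-folds.  Typical inflation models invoke
#    60 or more e-folds, so modes born below the Planck length ending up at
#    observable scales is not a crazy worry.
```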

In any case, Wald does not discuss this, since his point is about the black hole case where at the black hole horizon a similar shifting of modes occurs with ultra-high energy vacuum fluctuations near the horizon getting red shifted far from the black hole into “real” observable degrees of freedom.

Wald talks about this as a kind of “creation of new degrees of freedom”. And of course this does not occur in statistical gas mechanics where there are a fixed number of degrees of freedom, so again the analogy he wants between black hole thermodynamics and classical statistical mechanics seems to break down.

There is some cool questioning going on here though. The main problem with the vacuum fluctuations, Wald points out, is that one does not know how to count the states in the vacuum. So the implicit idea there, which Wald does not mention, is that maybe there is a way to count states of the vacuum, which might then heal the thermodynamic analogy Wald is pursuing. My own (highly philosophical, and therefore probably madly wrong) speculation would be that quantum field theory is only an effective theory, and that a more fundamental theory of physics, with spacetime as the only real field and particle physics states counted in a background-free way, might, might yield some way of calculating vacuum states.

Certainly, I would imagine that if field theory is not the ultimate theory, then the whole idea of vacuum field fluctuations gets called into suspicion. The whole notion of a zero-point background field vacuum energy becomes pretty dubious altogether if you no longer have a field theory as the fundamental framework for physics. But of course I am just barking into the wind hoping to see a beautiful background-free framework for physics.

Like the previous conundrum of ergodicity and equilibration, I do not see why this degree of freedom issue is a big problem. It is a qualitative difference which breaks the strong analogy, but so what? Why is that a pressing problem? Black holes are black holes, gases are gases, they ought to be qualitatively distinct in their respective thermodynamics. The fact there is the strong analogy revealed by Bekenstein, Hawking, Carter, and others is beautiful and does reveal general universality properties, but I do not see it as an area of physics where a complete unification is either necessary or desired.

What I do think would be awesome, and super-interesting, would be to understand the universality better. This would be to ask further (firstly) why there is a strong analogy, and (secondly) explain why and how it breaks down.

*      *       *

This post was interrupted by an apartment moving operation, so I ran out of steam on my consciousness stream, so will wrap it up here.

*      *       *


CCL_BY-NC-SA(https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode)

“I’d Like Some Decoherence Sauce with that Please”

OK, last post I was a bit hasty saying Simon Saunders undermined Max Tegmark. Saunders eventually finds his way to recover a theory of probability from his favoured Many Worlds Interpretation. But I do think he over-analyses the theory of probability. Maybe he is under-analysing it too in some ways.

What the head-scratchers seem to want is a Unified Theory of Probability: something that captures what we intuitively know is a probability but cannot mathematically formalise in a way that deals with all of reality. Well, I think this is a bit of a chimera. Sure, I’d like a unified theory too. But sometimes you have to admit reality, even abstract mathematical Platonic reality, does not always present us with a unified framework for everything we can intuit.

What’s more, I think probability theorists have come pretty close to a unified framework for probability. It might seem patchwork, it might merge frequentist ideas with Bayesian ideas, but if you require consistency across domains and apply the patchwork so that the pieces agree on their overlaps, then I suspect (I cannot be sure) that probability theory as experts understand it today is fairly comprehensive. Arguing that frequentism should always work is a bit like arguing that Archimedean calculus should always work. Pointing out deficiencies in Bayesian probability does not mean there is no overarching framework for probability, since where Bayesianism does not work probably frequentism, or some other combinatorics, will.

Suppose you even have to deal with a space of transfinite cardinality and there is ignorance about where you are in it. Then I think in the future someone will come up with measures on infinite spaces of various cardinalities. They might end up with something a bit trivial (all probabilities become 0 or 1 for transfinite measures, perhaps?), but I think someone will do it. All I’m saying is that it is way too early in the history of mathematics to throw up our hands and appeal to physics and Many Worlds.

*      *       *

That was a long intro. I really meant to kick off this post with a few remarks about Max Tegmark’s second lecture in the Oxford conference series on Cosmology and Quantum Foundations. He claims to be a physicist, but puts on a philosopher’s hat when he claims, “I am only my atoms”, meaning he believes consciousness arises or emerges merely from some “super-complex processes” in brains.

I like Max Tegmark, he seems like a genuinely nice guy, and is super smart. But here he is plain stupid. (I’m hyperbolising naturally, but I still think it’s dopey what he believes.)

It is one thing to say your totality is your atoms, but quite another to take consciousness seriously as a phenomenon and claim it is just physics. Especially, I think, if your interpretation of quantum reality is the MWI. Why is that? Because MWI has no subjectivity. But, if you are honest, or if you have thought seriously about consciousness at all, and what the human mind is capable of, then without being arrogant or anthropocentric, you have to admit that whatever consciousness is (and I do not know what it is, just let me say), it is an intrinsically subjective phenomenon.

You can find philosophers who deny this, but most of them are just denying the subjectiveness of consciousness in order to support their pet theory of consciousness (which is often grounded in physics). So those folks have very little credibility. I am not saying consciousness cannot be explained by physics. All I am saying is that if consciousness is explained by physics then our notion of physics needs to expand to include subjective phenomena. No known theories of physics have such ingredients.

It is not like you need a Secret Sauce to explain consciousness. But whatever it is that explains consciousness, it will have subjective sauce in it.

OK, I know I can come up with a MWI rebuff. In a MWI ontology all consistent realities exist due to Everettian branching. So I get behaviour that is arbitrarily complex in some universes. In those universes am I not bound to feel conscious? In other branches of the Everett multiverse “I” (not me actually, but my doppelgänger, one who branched from a former “me”) do too many dumb things to be considered consciously sentient in the end, even though up to a point they seemed pretty intelligent.

The problem with this sort of “anything goes” (so that in some universe consciousness will arise) is that it is naïve or ignorant. It commits the category error of equating behaviour with inner subjective states. Well, that is wrong. Maybe in some universes behaviour maps perfectly onto subjective states, and so there is no way to prove the independent reality of subjective phenomena. But even that is no argument against the irreducibility of consciousness. Because any conscious agent who knows of (at least) their own subjective reality will know their universe’s branch is either not all explained by physics, or physics must admit some sort of subjective phenomenon into its ontology.

Future philosophers might describe it as merely a matter of taste, one of definitions. But for me, I like to keep my physics objective. Ergo, for me, consciousness (at least the sort I know I have, I cannot speak for you or Max Tegmark) is subjective, at least in some aspects. It sure manifests in objective physics thanks to my brain and senses, but there is something irreducibly subjective about my sort of consciousness. And that is something objectively real physics cannot fully explain.

What irks me most, though, are folks like Tegmark who claim folks like me are arrogant in thinking we have some kind of secret sauce (by which he presumably means a “soul” or “spirit” that guides conscious thought). I think quite the converse. It is arrogant to think you can get consciousness explained by conventional physics and objective processes in brains. The height of physicalist arrogance, really.

For sure, there are people who take the view human beings are special in some way, and a lot of such sentiments arise from religiosity.

But people like me come to the view that consciousness is not special, but it is irreducibly subjective. We come to this believing in science. But we also come without prejudices. So, in my humble view, if consciousness involves only physics then you have to say it must be some kind of special physics. That is not human arrogance. Rather, it is an honest assessment of our personal knowledge about consciousness and, more importantly, about what consciousness allows us to do.

To be even more stark: when folks like Tegmark wave their hands and claim consciousness is probably just some “super complex brain process”, then I think it is fair to say that they are the ones using an implicit secret sauce. Their secret sauce is of the garden variety, atoms and molecules, of course. You can say, “well, we are ignorant and so we cannot know how consciousness can be explained using just physics”. And that is true. But (a) it does not avoid the problem of subjectivity, and (b) you can be just as ignorant about whether physics is all there is to reality. Over the years I have developed a sense that it is far more arrogant to think physical reality is the only reality. I have tried to figure out how sentient subjective consciousness, and mathematical insight, and ideal Platonic forms in my mind can be explained by pure physics. I am still ignorant. But I do strongly postulate that there has to be some element of subjective reality involved in at least my form of consciousness. I say that in all sincerity and humility. And I claim it is a lot more humble than the position of philosophers who echo Tegmark’s view on human arrogance.

Thing is, you can argue no one understands consciousness, so no one can be certain what it is, but we can be fairly certain about what it isn’t. What it is not is a purely objectively specifiable process.

A philosophical materialist can then argue that consciousness is an illusion, a story the brain replays to itself. I have heard such ideas a lot, and they seem to be very popular at present even though Daniel Dennett and others wrote about them more than 20 years ago. And the roots of the meme “consciousness is an illusion” are probably centuries older than that, which you can confirm if you scour the literature.

The problem is you can then clearly discern a difference in definitions. The consciousness-is-an-illusion folks use quite a different definition of consciousness compared to more ontologically open-minded philosophers.

*      *       *

On to other topics …

*      *       *

Is Decoherence Faster than Light? (… yep, probably)

There is a great sequence in Max Tegmark’s talk where he explains why decoherence of superpositions and entanglement is just about, “the fastest process in nature!” He presents an illustration with a sugar cube dissolving in a cup of coffee. The characteristic times for relevant physical processes go as follows,

  1. Fluctuations — changes in correlations between clusters of molecules.
  2. Dissipation — time for about half the energy added by the sugar to be turned into heat. Scales by roughly the number of molecules in the sugar, so it takes on the order of N collisions on average.
  3. Dynamics — changes in energy.
  4. Information — changes in entropy.
  5. Decoherence — takes only one collision. So about 10^25 times faster than dissipation.

(I’m just repeating this with no independent checks, but this seems about right.)
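As a sanity check on that 10^25, here is a hedged back-of-envelope; the assumption (mine, not Tegmark’s) is that the relevant N is the number of molecules in the cup:

```python
# Decoherence needs ~1 collision; dissipation needs ~N collisions,
# so the speed ratio is roughly N.  For a 200 g cup of coffee (mostly water):
AVOGADRO = 6.022e23
N = 200 / 18 * AVOGADRO   # grams / (g per mol of H2O) * molecules per mol
print(f"N ~ {N:.1e}")     # -> N ~ 6.7e+24, i.e. about 10^25
```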

This also gives a nice characterisation of classical versus quantum regimes:

  1. Mostly Classical — when τ_deco ≪ τ_dyn ≤ τ_diss.
  2. Mostly Quantum — when τ_dyn ≪ τ_deco, τ_diss.

See if you can figure out why this is a good characterisation of regimes?

Here’s a screenshot of Tegmark’s characterisations:

quanta_decoherencetimes_vs_dissipationtime

The explanation is that in a quantum regime you have entanglement and superposition, uncertainty is high, and dynamics evolves without any change in information, hence also with essentially no dissipation. Classically you get a disturbance in the quantum and all coherence is lost almost instantaneously. And yes, it can go faster than light, because with decoherence nothing physical is “going”; it is not a process so much as a change in the state of possible knowledge, and that can change instantaneously without any signal transfer, at least according to some theories like MWI or Copenhagen.

I should say that in some models decoherence is a physically mediated process, and in such theories it would take a finite time, but it is still fast. Such environmental decoherence is a feature of gravitational collapse theories for example. Also, the ER=EPR mechanism of entanglement would have decoherence mediated by wormhole destruction, which is probably something that can appear to happen instantaneously from the point of view of certain observers. But the actual snapping of a wormhole bridge is not a faster than light process.

I also liked Tegmark’s remark that,

“We realise the reason that big things tend to look classical isn’t because they are big, it’s just because big things tend to be harder to isolate.”

*      *       *

And in case you got the wrong impression earlier, I really do like Tegmark. In his sugar cube in coffee example his faint Swedish accent gives way for a second to a Feynmanesque “cawffee”. It’s funny. Until you hear it you don’t realise that very few physicists actually have a Feynman accent. It’s cool Tegmark has a little bit of it, and maybe not surprising as he often cites Feynman as one of his heroes (ah, yeah, what physicist wouldn’t? Well, actually I do know a couple who think Feynman was a terrible influence on physics teaching, believe it or not! They mean well, but are misguided of course! ☻).

*      *       *

The Mind’s Role Play

Next up: Tegmark’s take on explaining the low entropy of our early universe. This is good stuff.

Background: Penrose and Carroll have critiqued inflationary Big Bang cosmology for not providing an account of why there is an arrow of time, i.e., why the universe started in an extremely low entropy state.

(I have not seen Carroll’s talk, but I think it is on my playlist. So maybe I’ll write about it later.) But I am familiar with Penrose’s ideas. Penrose takes a fairly conservative position. He takes the Second Law of Thermodynamics seriously. He cannot see how even the Weyl Curvature Hypothesis explains the low entropy Big Bang. (I think WCH is just a description, not an explanation.)

Penrose does have a few ideas about how to explain things with his Conformal Cyclic Cosmology. I find them hugely appealing. But I will not discuss them here. Just go read his book.

What I want to write about here is Tegmark and his Subject-Object-Environment troika. In particular, why does he need to bring the mind and observation into the picture? I think he could give his talk and get across all the essentials without mentioning the mind.

But here is my problem. I just do not quite understand how Tegmark goes from the correct position on entropy, which is that it is a coarse-graining concept, to his observer-measurement dependence. I must be missing something in his chain of reasoning.

So first: entropy is classically a measure of the multiplicity of a system, i.e., how many microstates in an ensemble are compatible with a given macroscopic state. And there is a suitable generalisation to quantum physics given by von Neumann.

If you fine grain enough then most possible states of the universe are unique, and so entropy measured on such scales is extremely low; basically, you only pick up contributions from degenerate states. Classically this entropy never really changes, because classically an observer is irrelevant. Now substitute for “observer” the more general “any process that results in decoherence”. Then you get a reason why quantum mechanically entropy can decrease. To wit: in a superposition there are many states compatible with prior history. When a measurement is made (for “measurement” read “any process resulting in decoherence”) then entropy naturally will decrease on average (except perhaps for some unusual, highly atypical cases).
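That bookkeeping is easy to play with numerically. A minimal sketch, my own toy example rather than Tegmark’s: a qubit in an equal superposition is pure, with S = 0; unselective decoherence (killing the off-diagonal terms) raises S to ln 2; conditioning on a definite measurement outcome drops it back to 0.

```python
import numpy as np

def vn_entropy(rho):
    """Von Neumann entropy S = -Tr(rho ln rho), via eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-(evals * np.log(evals)).sum())

plus = np.array([1.0, 1.0]) / np.sqrt(2)   # |+> = (|0> + |1>)/sqrt(2)
rho_pure = np.outer(plus, plus.conj())     # pure superposition
rho_deco = np.diag(np.diag(rho_pure))      # off-diagonals decohered away
rho_cond = np.diag([1.0, 0.0])             # conditioned on outcome |0>

print(vn_entropy(rho_pure))   # ~0.0
print(vn_entropy(rho_deco))   # ~0.693 = ln 2
print(vn_entropy(rho_cond))   # ~0.0
```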

Here is what I am missing. All that I just said previously is local. Whereas for the universe as a whole, globally, what is decoherence? It is not defined. And so what is global entropy then? There is no “observer” (read: “measurement process”) that collapses or decoheres our whole universe. At least none we know of. So it all seems nonsense to talk about entropy on a cosmological scale.

To me, perhaps terribly naïvely, there is a meaning for entropy within a universe, in localised sub-systems where observations can in principle be made on the system. “Counting states”, to put it crudely. But for the universe (or Multiverse if you prefer) taken as a whole, what meaning is there to the concept of entropy? I would submit there is none. The Second Law triumphs, right? I mean, for a closed isolated system you cannot collapse states and get decoherence, at least not from without, so it just evolves unitarily with constant entropy as far as external observers can tell; or if you coarse grain into ensembles then the Second Law emerges, on average, even for unitary time evolution.

Perhaps what Tegmark was on about was that if you have external observer disruptions then entropy reduces (you get information about the state). But does this not globally just increase entropy, since globally the observer’s system is now entangled with the previously closed and isolated system? But who ever bothers to compute this global entropy? My guess is it would obey the Second Law. I have no proof, just my guess.

Of course, with such thoughts in my head it was hard to focus on what Tegmark was really saying, but in the end his lecture seems fairly simple. Inflation introduces decoherence and hence lowers quantum mechanical entropy. So if you do not worry about classical entropy, and just focus on the quantum states, then apparently inflationary cosmology can “explain” the low entropy Big Bang.

Only, if you ask me, this is no explanation. It is just “yet another” push-back. Because inflationary cosmology is incomplete, it does not deal with the pre-inflationary universe. In other words, the pre-inflationary universe has to also have some entropy if you are going to be consistent in taking Tegmark’s side. So however much inflation reduces entropy, you still have the initial pre-inflationary entropy to account for, which now becomes the new “ultimate source” of our arrow of time. Maybe it has helped to push the unexplained entropy a lot higher? But then you get into the realm of, “what is ‘low’ entropy in cosmological terms?” What does it mean to say the unexplained pre-inflationary entropy is high enough not to worry about? I dunno. Maybe Tegmark is right? Maybe pre-inflation entropy (disorder) is so high by some sort of objectively observer-independent measure (is that possible?) that you literally no longer have to fret about the origin of the arrow of time? Maybe inflation just wipes out all disorder and gives us a proverbial blank slate?

But then I do fret about it. Doesn’t Penrose come in at this point and give baby Tegmark a lesson in what inflation can and cannot do to entropy? Good gosh! It’s just about enough confusion to drive one towards the cosmological anthropic principle out of desperation for closure.

So despite Tegmark’s entertaining and informative lecture, I still don’t think anyone other than Penrose has ever given a no-push-back argument for the arrow of time. I guess I’ll have to watch Tegmark’s talk again, or read a paper on it for greater clarity and brevity.


CCL_BY-NC-SA(https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode)

var MyStupidStr = “Gadammit! Where’d You Put My Variables!?”;

This WordPress blog keeps morphing from superheroes and SciFi back to philosophy of physics and other topics. So sorry to readers expecting some sort of consistency. This week I’m back with the Oxford University series, Cosmology and Quantum Foundations lectures. Anthony Valentini gives a talk about Hidden Variables in Cosmology.

The basic idea Valentini proposes is that we could be living in a deterministic cosmos, but we are somehow trapped in a region of phase space where quantum indeterminism reigns. In our present epoch there are hidden variables, but they cannot be observed, not even indirectly, so they have no observable consequences, and so Bell’s Theorem and Kochen-Specker and the rest of the “no-go” theorems associated with quantum logic hold true. Fine, you say, then really you are saying there effectively are no Hidden Variables (HV) theories that describe our reality? No, says Valentini. The hidden variables would be observable if the universe were in a different state, the other phase. How might this happen? And what are the consequences? And is this even remotely plausible?

Last question first: Valentini thinks it is testable using the cosmic microwave background radiation. Which I am highly sceptical about. But more on this later.

cosmol_Valentin_all.dof.have.relaxed

The idea of non-equilibrium Hidden Variable theory in cosmology. The early universe violates the Born Rule and hidden variables are not hidden. But the violent history of the universe has erased all pilot wave details, and so now we only see non-local hidden variables, which is no different from conventional QM. (Apologies for the low-res image, it was a screenshot.)

How Does it Work?

How it might have happened is that the universe as a whole might have (at least) two sorts of regimes, one of which is highly non-equilibrium, extremely low entropy. In this region or phase the Hidden Variables would be apparent and Bell’s Theorem would be violated. In the other type of phase the universe is in equilibrium, high entropy, and Hidden Variables cannot be detected, and Bell’s Theorem remains true (for QM). Valentini claims that early in the Big Bang the universe may have been in the non-equilibrium phase, and so some remnants of this HV physics should exist in the primordial CMB radiation. But you cannot just say this and get hidden variables to be unhidden. There has to be some plausible mechanism behind the phase transition, or the “relaxation” process as Valentini describes it.

The idea being that the truly fundamental physics of our universe is not fully observable because the universe has relaxed from non-equilibrium to equilibrium. The statistics in the equilibrium phase get all messed up and HV’s cannot be seen. (You understand that in the hypothetical non-equilibrium phase the HV’s are no longer hidden, they’d be manifest ordinary variables.)
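If memory serves, Valentini makes the “relaxation” quantitative with a coarse-grained subquantum H-function, the pilot-wave analogue of Boltzmann’s H-theorem. A sketch of the standard statement, with $\rho$ the ensemble distribution over configurations $q$ and $|\psi|^2$ the Born-rule distribution:

$\bar{H}(t) = \int dq \; \bar{\rho} \, \ln\left( \bar{\rho} / \overline{|\psi|^2} \right),$

where the bars denote coarse-graining over small cells. Under suitable assumptions about the initial state (no fine-grained micro-structure to begin with) one gets $\bar{H}(t) \le \bar{H}(0)$, and $\bar{H} = 0$ exactly when $\rho = |\psi|^2$. So “quantum equilibrium” just means the Born rule holds and the no-go theorems bite; in non-equilibrium $\rho \ne |\psi|^2$ and the hidden variables would leave statistical fingerprints.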

Further Details from de Broglie-Bohm Pilot Wave Theory

Perhaps the most respectable HV theory is the (more or less original) de Broglie-Bohm pilot wave theory. It treats Schrödinger’s wave function as a real potential in a configuration space which somehow guides particles along deterministic trajectories. Sometimes people postulate Schrödinger time evolution plus an additional pilot wave potential. (I’m a bit vague about it since it’s a long time since I read any pilot wave theory.) But to explain all manner of EPR experiments you have to go to extremes and imagine this putative pilot wave as really an all-pervading information storage device. It has to guide not only trajectories but also orientations of spin and units of electric charge and so forth, basically any quantity that can get entangled between relativistically isolated systems.
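For the record, here is the textbook form as best I remember it: the wavefunction $\psi(q,t)$ obeys the usual Schrödinger equation, and the actual configuration $Q = (Q_1, \dots, Q_N)$ rides along according to the guidance equation

$\frac{dQ_k}{dt} = \frac{\hbar}{m_k} \, \mathrm{Im}\left( \frac{\nabla_k \psi}{\psi} \right) \bigg|_{q=Q},$

while in Bohm’s second-order formulation the particles also feel a “quantum potential” $-\sum_k \frac{\hbar^2}{2 m_k} \frac{\nabla_k^2 |\psi|}{|\psi|}$ on top of the classical one. Note that $\nabla_k$ acts in configuration space, not ordinary 3-space, which is exactly the all-pervading information storage extravagance I am complaining about.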

This seems like unnecessary ontology to me. Be that as it may, the Valentini proposal is cute and something worth playing around with I think.

So anyway, Valentini shows that if there is indeed an equilibrium ensemble of states for the universe then details of particle trajectories cannot be observed and so the pilot wave is essentially unobservable, and hence a non-local HV theory applies which is compatible with QM and the Bell inequalities.

It’s a neat idea.

My bet would be that more conventional spacetime physics which uses non-trivial topology can do a better job of explaining non-locality than the pilot wave. In particular, I suspect requiring a pilot wave to carry all relevant information about all observables is just too much ontological baggage. Like a lot of speculative physics thought up to try to solve foundational problems, I think the pilot wave is a nice explanatory construct, but it is still a construct, and I think something still more fundamental and elementary can be found to yield the same physics without so many ad hoc assumptions.

To relate this with very different ideas, what the de Broglie-Bohm pilot wave reminds me of is the inflaton field postulated in inflationary Big Bang models. I think the inflaton is a fictional construct. Yet its predictive power has been very successful. My understanding is that instead of an inflaton field you can use fairly conventional and uncontroversial physics to explain inflationary cosmology, for example the Penrose CCC (Conformal Cyclic Cosmology) idea. This is not popular. But it is conservative physics and requires no new assumptions. As far as I can tell CCC “only” requires a long but finite lifetime for electrons, which should eventually decay by very weak processes. (If I recall correctly, in the Standard Model the electron does not decay.) The Borexino experiment in Italy has measured a lower limit on the electron lifetime of about 66,000 yottayears (that is, 6.6 × 10^28 years); currently there is no upper limit.

And for the de Broglie-Bohm pilot wave I think the idea can be replaced by spacetime with non-trivial topology, which again is not very trendy or politically correct physics, but it is conservative and conventional and requires no drastic new assumptions.

What Are the Consequences?

I’m not sure what the consequences of cosmic HV’s are for current physics. The main consequence seems to be an altered understanding of the early universe, but nothing dramatic for our current and future condition. In other words, I do not think there is much use for cosmic HV theory.

Philosophically I think there is some importance, since the truth of cosmic HV’s could fill in a lot of gaps in our civilisation’s understanding of quantum mechanics. It might not be practically useful, but it would be intellectually very satisfying.

Is There Any Evidence for these Cosmic HV’s?

According to Valentini, supposing at some time in the early Big Bang there was non-equilibrium, hence more or less classical physics, then there should be classical perturbations frozen into the cosmic microwave background radiation from this period. This is due to a well-known result in astrophysics whereby perturbations on so-called “super-Hubble” length scales tend to be frozen — i.e., they will still exist in the CMB.
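The freezing result, as I recall it from standard cosmological perturbation theory: each comoving Fourier mode of the curvature perturbation $\zeta_k$ obeys, in Mukhanov-Sasaki form with $v_k = z \zeta_k$ and primes denoting conformal-time derivatives,

$v_k'' + \left( k^2 - \frac{z''}{z} \right) v_k = 0,$

so once a mode’s wavelength is stretched outside the Hubble radius ($k \ll aH$) the $k^2$ term is negligible and $\zeta_k$ settles to a constant. The mode just sits there, frozen, until it re-enters the horizon much later, which is why very early physics can leave fossils at large angular scales in the CMB.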

Technically what Valentini et al. predict is a low-power anomaly at large angles (low multipoles) in the spectrum of the CMB. That’s fine and good, but (contrary to what Valentini might hope) it is not evidence of non-equilibrium quantum mechanics with pilot waves. Why not? Simply because a hell of a lot of other things can account for observed low-power anomalies. Still, it’s not all bad — any such evidence would count as Bayesian inference support for pilot wave theory. Such weak evidence abounds in science, and would not count as a major breakthrough, unfortunately (because who doesn’t enjoy a good breakthrough?). I’m sure researchers like Valentini, in any science, who find themselves in such positions of lacking solid evidence for a theory will admit behind closed doors the desultory status of such evidence, but they do not often advertise it as such.

It seems to me so many things can be “explained” by statistical features in the CMB data. I think a lot of theorists might be conveniently ignoring the uncertainties in the CMB data. You cannot just take this data raw, look for patterns and correlations, and then claim they support your pet theory. At a minimum you need to use the uncertainties in the CMB data and allow for the fact that your theory is not truly supported by the CMB when alternatives to your pet theory are also compatible with the CMB.
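To make that concrete, here is a toy numpy sketch, not a real CMB analysis: the baseline spectrum, the hypothetical 10% low-multipole suppression, and the cosmic-variance-style error bars are all made-up illustrative stand-ins. The point it illustrates is that once honest uncertainties are attached, the “pet theory” fit is statistically indistinguishable from the null model.

import numpy as np

rng = np.random.default_rng(42)
ell = np.arange(2, 31)                                    # toy low multipoles
cl_null = 1.0 / (ell * (ell + 1.0))                       # toy baseline spectrum
sigma = cl_null * np.sqrt(2.0 / (2 * ell + 1.0))          # cosmic-variance-like errors
cl_obs = cl_null + sigma * rng.standard_normal(ell.size)  # one simulated "observed" sky

# Hypothetical pet theory: 10% power suppression below ell = 10
cl_pet = cl_null * np.where(ell < 10, 0.9, 1.0)

chi2_null = np.sum(((cl_obs - cl_null) / sigma) ** 2)
chi2_pet = np.sum(((cl_obs - cl_pet) / sigma) ** 2)
print(f"chi2 null = {chi2_null:.1f}, chi2 pet = {chi2_pet:.1f}")
# The chi-squared difference is typically of order 1, nowhere near significance,
# even though eyeballing the raw points might seem to "show" the suppression.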

I cannot prove it, but I suspect a lot of researchers are using the CMB data in this way. That is, they can get the correlations they need to support their favourite theory, but if they included the uncertainties then the same data would equally support no correlation. So you get a null, inconclusive result overall. I do not believe in HV theories, but I do sincerely wish Valentini well in his search for hard evidence. Getting good support for non-mainstream theories in physics is damn exciting.

*      *       *

Epilogue — Why HV? Why not MWI? Why not …

At the same conference Max Tegmark polls the audience on their favoured interpretations of QM. The very fact people can conduct such polls among smart people is evidence of a real science of scientific anthropology. It’s interesting, right?! The most popular was Undecided=24. Many Worlds=15. Copenhagen=2. Modified dynamics (GRW)=0. Consistent Histories=0. Bohm (HV)=5. Relational=2. Modal=0.

This made me pretty happy. To me, undecidability is the only respectable position one can take at this present juncture in the history of physics. I do understand of course that many physicists are just voting for their favourites. Hardly any would stake their life on their view being correct. Still, it was heart-warming to see so many taking the sane option seriously.

I will sign off for now by noting a similarity between HV and MWI. There’s not really all that much they have in common. But they both ask us to accept some realities well beyond what conservative, interpretation-free quantum mechanics begs. What I mean by interpretation-free is just minimalism: whatever modelling you need to actually compute quantum mechanical predictions for experiments, and nothing more, which is the minimal stuff any metaphysical interpretation sitting on top of QM would have to explain or account for. There is, of course, no interpretation in this, which is why I can call it interpretation-free. You just go around admitting the possibility (not even “supposing”) that the universe IS this Hilbert space, and that our reality IS a cloud of vectors in this space that periodically expands and contracts consistently with observed measurement data and unitary evolution, so that it all hangs together and a consistent story can be told about the evolution of vectors in this state space, which we take as representing our (possibly shared) reality (no need for solipsism).
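Spelled out, the minimal kit is small. A sketch in bare textbook notation: a state $|\psi\rangle$ in a Hilbert space $\mathcal{H}$, unitary evolution

$|\psi(t)\rangle = e^{-iHt/\hbar} \, |\psi(0)\rangle,$

and the Born rule $p(a) = |\langle a|\psi\rangle|^2$ for the outcome statistics of whatever you measure. Everything beyond that trio (worlds, pilot waves, collapses, relational facts) is interpretation, which is precisely the stuff the minimalist declines to buy.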

I will say one nice thing about MWI: it is a clean theory! It requires a hell of a lot more ontology, but in some sense nothing new is added either. The writer who most convinces me I could believe in MWI is David Deutsch. Perhaps logically his ideas are the most coherent. But what holds me back and forces me to be continually agnostic for now (and yes, interpretations of QM debates are a bit quasi-religious, in the bad meaning of religious, not the good) is that I still think people simply have not explored enough normal physics to be able to unequivocally rule out a very ordinary explanation for quantum logic in our universe.

I guess there is something about being human that desires an interpretation more than this minimalism. I am certainly prey to this desire. But I cannot force myself to swallow either HV (Bohm) or MWI. They ask me to accept more ontology than I am prepared to admit into my mind space for now. I do prefer to seek a minimalist-leaning theory, but not a wholly interpretation-free one. Not for the sake of minimalism, but because I think there is some beauty in minimalism akin to the mathematical idea of a Proof from the Book.


CCL_BY-NC-SA(https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode)

Collapsitude of the Physicists

The series of Oxford lectures I am enjoying in my lunch hours is prompting a few blog ideas. The latest is the business of the collapse of the wavefunction. So much has been written about the measurement problem in quantum mechanics that I surely do not need to write a boring introduction to it all. So I will just assume you can jump in cold and use Wikipedia or whatever to warm up when needed.

quanta_decoherance_awyoumademecollapse

There were no simple cartoons capturing the essence of the “collapse of the wavefunction”, so I made up this one.

By the way, the idea behind my little cartoon there is that making a measurement need not catastrophically collapse a system into a definite eigenstate, as most textbooks routinely describe. This (non-total collapse) is depicted as the residual pale pink region, which entertains states in phase space that still have finite probability amplitudes. We never really notice them subsequently because the amplitudes for these regions are too darn small to detect in any feasible future measurements. Every measurement has finite precision, so you cannot use an actual, real, messy brains-and-Weet-Bix-and-jelly experiment to form a perfectly pure state. Textbooks on QM are like this: they take so many liberties with the reality of an experimental situation that the theoreticians tend to lose touch with reality, especially when indulging in philosophy while calling it physics.
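One way to formalise the cartoon’s point, sketched in standard generalised-measurement language: a finite-resolution position measurement with outcome $x$ is not a sharp projector but something like a Gaussian Kraus operator,

$M_x \propto \exp\left( -\frac{(\hat{q} - x)^2}{4\sigma^2} \right), \qquad |\psi\rangle \mapsto \frac{M_x |\psi\rangle}{\lVert M_x |\psi\rangle \rVert},$

where $\sigma$ is the instrument’s resolution. The post-measurement state is narrowed, not annihilated: amplitudes far from $x$ are exponentially suppressed but never exactly zero. That surviving tail is the pale pink region in my cartoon.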

The issue is rife in many lectures I am watching; one is Simon Saunders’ talk on “The Case for Many Worlds”. He poses a sequence of questions for his audience:

  • Why does the collapse of the state happen?
  • When does it happen?
  • To what state does the state collapse?

He presages this by polling his audience on whether they believe the proverbial Schrödinger’s Cat is exclusively either alive or dead before the observer looks inside the diabolical box with the vial of radioactively triggered nerve gas. About half of the audience believed the Cat was either alive or dead (i.e., not in a superposition). He then asks: what about if the box was not an isolated box but a broom cupboard? Not many people changed their minds! But the point was that the cupboard is surely in no way or form isolated from the external universe, and surely has enough perturbations to destroy any delicate entangled superposed states. But I guess some lovers of cats are hard to budge.

Then he asks, “well, what about if the experiment is being done up on a branch of a tree in a forest with no observer anywhere around?” (The clear unspoken implication is to think about the proverbial tree falling …). He cites a quantum information theory conference audience, 80% of whom believed the Cat would then still not be in an exclusive XOR state, i.e., would still be in a superposition. Which is quite remarkable. Maybe they never heard of the phenomenon of environmental decoherence?

It’s at such times I wonder if Murray Gell-Mann has had an unhealthy influence on how physicists take their philosophy. In much of his popular-style writing Gell-Mann has argued for environmental decoherence. The idea though is that there is no collapse, not ever. The universe remains in one giant superposed cosmic wavefunctional state. Gell-Mann is not the sole culprit of course, but he’s the head honcho, by fame if nothing else. And boy! You don’t want to go up head-to-head arguing against Gell-Mann! You’ll get your ears pulverised by pressure waves of unrelenting egg-headedness.

To be fair and balanced here’s a book cover that looks like it would be a juicy read if you really want to tangle with environmental decoherence as the explanation for classical physics appearances.

quanta_Joos_bookcover_EnvironmentalDecoherance

Looks like a good read. The lead author is Erich Joos.

I just want to warn you, if you ever feel like you are in a superposition of states then there are some medications that can recover classical physics if you find it too nauseous.

You do not have to take wavefunctions literally. They are just computational devices. The mathematical tool used to model quantum mechanics is not the thing itself that we are trying to describe and model. The point is that whatever the universe is, it must be described by a wavefunction or some equivalent modelling that enjoys a superposition of classical states. That’s what makes the world quantum mechanical: there are classical-like states that get all tensored up in a superposition of some form, and whether you choose to describe this mathematically by a wavefunction, or by matrices, or by Dirac bra and ket vectors in a Hilbert space is largely immaterial.

Many Worlds theorists have a fairly similar outlook to the decoherence folks. Although at some root level their interpretations differ, or are even perhaps empirically incompatible in principle (I’m not sure about that?), I think both views have the germ of the idea that there really is no collapse of state space. In environmental decoherence a measurement merely entangles the system with more stuff, and so gazillions of new things are now entangled, and the whole lot only appears to behave more classically as a result. But there is still superposition; it is only that so many of the coefficients in the linear superposition have shrunk to near zero that the overall effect is classical-like collapse. Then of course Schrödinger evolution picks up after the measurement is done, and isolation can gradually be re-established around some experiment, … and so on and so forth.
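A sketch of that story in two lines, in standard notation: coupling a system to its environment takes

$(\alpha|0\rangle + \beta|1\rangle) \otimes |E\rangle \;\longrightarrow\; \alpha|0\rangle|E_0\rangle + \beta|1\rangle|E_1\rangle,$

and as the environment records the outcome the pointer states become effectively orthogonal, $\langle E_0|E_1\rangle \to 0$, so the system’s reduced density matrix $\rho_S = \mathrm{Tr}_E\, |\Psi\rangle\langle\Psi|$ loses its off-diagonal interference terms. Everything then looks like a classical mixture of “0” and “1”, yet the global state is still one big superposition. No collapse anywhere, just bookkeeping.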

Here’s my penny take on this. I’ve become a firm proponent of ER=EPR. So I figure entanglement is as near to wormhole formation as you wanna get. You can take this literally or as merely a computational convenience. For the time being I’m a literalist on it, which means I’ll change my mind if evidence mounts to the contrary, but I think it is fruitful to take ER=EPR at more or less face value and see where it leads us.
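For the record, the cleanest statement of ER=EPR I know of comes from the eternal black hole: the two-sided wormhole geometry is dual to the thermofield double state of two non-interacting copies of the boundary theory,

$|\mathrm{TFD}\rangle = \frac{1}{\sqrt{Z}} \sum_n e^{-\beta E_n/2} \, |n\rangle_L \otimes |n\rangle_R,$

which is nothing but a particular highly entangled state of the left and right systems. The Maldacena-Susskind conjecture extrapolates from there: entanglement quite generally has some (possibly highly quantum, Planck-scale) ER bridge geometry behind it.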

ERrrr … who just collapsed me? You fiend!

I am also favouring something like gravitational collapse ideas. These seem to have a lot of promise, including (and this is a big selling point for me) the possibility of a link with ER=EPR. For one: if entanglement is established via ER bridges, then probably collapse of superposition can be effected by break-up of the wormholes. It seems a no-brainer to me. Technical issues aside. There might be some bugger of mathematical devilishness that renders this all nonsense. But I’m in like with the ideas and the connections.

Ergo I do not subscribe to environmental decoherence and the eternal superposition of the cosmos. Ergo again I do not subscribe to Many Worlds interpretations. Not that physics foundations is about popularity contests. But I think, when/if experimental approaches to these questions become possible I would be wanting to put research money into rigorously testing gravitational collapse and (if you deign to be a bit simplistic) also ER=Superposition, and therefore “NoER=Collapse”.

Well, that’s a smidgen of my thoughts on record for now. I think there are so many vast unexplored riches in fundamental theories and ideas of spacetime and particle physics that we do not yet need to reach out to bizarre, outlandish interpretations of quantum mechanics. Bohr was the original sinner here. But pretty much every physicist who has dabbled in metaphysics and sought a valid interpretation of quantum mechanics has collapsed to the siren of the absurd ever since. This includes all those who followed Feynman’s dictum to forget about interpretation. I think such non-interpretations are just as silly as the others.

Actually I’m not sure why I’ve characterised this as Feynman’s dictum. To be fair he did not say anything so extreme. He just marvelled at nature and warned physicists not to get into the mode of trying to tell nature what to do:

“We are not to tell nature what she’s gotta be. … She’s always got better imagination than we have.”

— R.P. Feynman, in the Sir Douglas Robb Lectures, University of Auckland (1979).

Man, I LOVED those lectures. My high school physics teacher John Hannah exposed our class to Feynman. Those were some of the best days of my life. The opening up of the beauty of the universe to my inner eyes. Here’s another favourite passage from those lectures:

“There’s a kind of saying that you don’t understand its meaning, ‘I don’t believe it. It’s too crazy. I’m not going to accept it.’ … You’ll have to accept it. It’s the way nature works. If you want to know how nature works, we looked at it, carefully. Looking at it, that’s the way it looks. You don’t like it? Go somewhere else, to another universe where the rules are simpler, philosophically more pleasing, more psychologically easy. I can’t help it, okay? If I’m going to tell you honestly what the world looks like to the human beings who have struggled as hard as they can to understand it, I can only tell you what it looks like.”
— R.P. Feynman (b.1918–d.1988).

Feynman actually said, “It’s the woy nature woiks.”

*      *       *


CCL_BY-NC-SA(https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode)

Rovelli’s Roll

In a highly watchable talk in the Oxford University lecture mini-series on Cosmology and Quantum Foundations, Carlo Rovelli gives a lot of persuasive arguments about why the Many Worlds Interpretation is suspect. But he goes fast and furious sometimes, occasionally constructing strawman arguments (I do not think anyone seriously believes that literally interpreting the mathematics of a given model of physics necessarily leads to great ontological truths, apart from the likes of Tegmark perhaps), but I think generally even these points are well made and interesting to ponder. Rovelli describes his own current position as “Everettian” — which means not a traditional Many Worlds interpretation but rather a Relative State interpretation.

relativestateQM_screenshot_Rovelli_lecture_twoobservers

One observer observing another, screenshot from Carlo Rovelli’s lecture.

There are many key slides in his presentation that I thought worthy of mentioning and which inspired this current post of mine.

In another slide Rovelli puts up a couple of threads:

    • “Why don’t we see superpositions?” — what a silly question! Because in textbook QM we do not see the state, we see eigenvalues. We see the position of the electron or its momentum, never its wavefunction.
    • These (facts) are described by the position in phase space in classical physics; and by points in the spectra of elements of the observable algebra in quantum physics.

Which is cool, but then he riles the zen masters by writing:

  • They can be taken as primary elements, and the quantum formalism built up from them.

First, I should point out this is not erroneous. You can build up a theory from elements that are such primitives as “points in the spectra of elements of the observable algebra”.

But I think this is misleading for purists and philosophers of physics. Just because one approach to calculating expectation values works does not make its mathematical elements isomorphic, in some sense, to elements of physical reality. So I think Rovelli undoes some of his good arguments with such statements. (I’m not the expert Rovelli is, I’m just sayin’, ya know …)

You might counter: “Well, if you are not willing to take your theoretical elements of reality direct from the best mathematical model’s primitives, then where are you going to define your ontology (granting you are wishing to construct a realist interpretation)?”

I would concede, “ok, for now, you can have a favoured realist interpretation based on the primitives of your observables algebra.” But I think you are always going to have to admit this will be temporary, only an “effective interpretation” that is current to our present understandings.

My point is that while this makes for great contemporary physics, it does not make for good philosophy (love of both knowledge and truth). The reason is blatant: if all you have is a model for computing amplitudes, then there is really only a small probability that this is a dead accurate and “True” picture of the real ontology of our universal physics. You can certainly freely pin your hopes on this chance and see where it leads.

I, for one, think that such an abstraction as an “observable algebra” although nice and concrete and clean, is just too abstract to be wisely taken literally as the basis for a realist interpretation. Again, I’m “just sayin’…”.

There are many more good discussion points in Rovelli’s lecture.

The Wavefunction is a Computational Tool

This meme has always gelled with me. You can map a wavefunction over time; for example, you can visualize an atomic electron’s orbital. But at no single moment in time is the electron ever seen to be smeared out over its orbital. To me, as a realist, this means the electron is probably not a wave. But its temporal behaviour manifests aspects of wave-like properties. Or to be bold: over time the (non-relativistic) constant energy electron’s state is completely coded as a wave. I will admit in future we might find hard evidence that electrons truly are waves of some weird spacetime foamy medium, not waves in an abstract mathematical space, but I do not think we are there yet, and I think we will not find this to be so. My guess would be electrons are extended topological geons, perhaps a little more gnarly than superstrings, but less “super”. I think more like solitons of spacetime than embedded strings.
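A small worked illustration of what I mean by “coded over time”: for a constant-energy (stationary) state the full time dependence is a pure phase,

$\psi(x,t) = \psi(x) \, e^{-iEt/\hbar}, \qquad |\psi(x,t)|^2 = |\psi(x)|^2,$

so the orbital’s density profile is utterly static and nothing is ever observed waving at any instant. The wave-like character lives in the phase, and only shows up dynamically, in interference between energy components over time. Which is exactly why I am comfortable calling the wavefunction a code for the electron’s temporal behaviour rather than a snapshot of the electron.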

The keyword there for philosophy is “coded”. The wave picture, or if you prefer the Heisenberg state matrix representation (either the Schrödinger or Heisenberg mathematical tool will do), is a code for the time evolution of the electron. But in no realist sense can it be identified as the electron. Moreover, if you are willing to accept that the Schrödinger and Heisenberg pictures are equivalent, then you have a doubled-up ontology. To me that’s nonsense if you are also a realist.
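For anyone who has not seen the equivalence spelled out, a sketch: with $U(t) = e^{-iHt/\hbar}$, every prediction is an expectation value, and

$\langle \psi(t)|A|\psi(t)\rangle = \langle \psi(0)|U^\dagger(t)\, A\, U(t)|\psi(0)\rangle = \langle \psi(0)|A(t)|\psi(0)\rangle,$

so you can put the time dependence in the state (Schrödinger picture) or in the observable (Heisenberg picture) and nothing measurable changes. Two “ontologies”, one physics: that is the doubling-up I find fatal for naive wavefunction realism.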

Believe it or not though, I’ve read books where this is flatly denied and authors have claimed the electron is the wavefunction. I really cannot subscribe to this. It violates the principle of separation of ontology from theory (let me coin that principle if no one has before!). A model is not the thing being modelled, is another way to put it.

On a related aside note: John Wheeler was being very cheeky or highly provocative in suggesting the “It from Bit” meme. It sounds like a great explanatory concept, but it seems (to me) to lack some unknown extra structure needed to motivate sound belief. Wheeler also talked about “equations written on paper cannot bring themselves into existence” (or something to that effect). But I think “It from Bit” is not very far removed from equations writing themselves into a universe.

EPR is Entanglement with the Future?

That’s not quite an accurate way to encapsulate Rovelli’s take on EPR, but I think it captures the flavour. Rovelli is saying that in a Relational QM interpretation you do not worry about non-locality, because from each observer’s point of view (the proverbial Alice and Bob at each end of an EPR experiment, or non-human apparatus if you prefer to drop the anthropomorphisms) there is a simple measurement, nothing more. The realisation that entanglement was happening only occurs later, in the future, when the two observers get back together and compare data.

I’m not quite with Rovelli fully on this. And I guess this makes me a non-Everettian. There might be something I’m missing about all this, but I think there is something to explain about the two observers from a “God’s-eye” view of the universe at the time each makes their measurements. (Whether God exists is irrelevant, this is pure gedankenexperiment.) If you are God then you witness effects of entanglement in the measurement outcomes of Alice and Bob.

The recent research surrounding the ER=EPR meme seems to give a fairly sound geometric, or geometrodynamic, interpretation of EPR as a wormhole connection. So I think Rovelli does not need to invoke anything fancy to explain away EPR entanglement. ER=EPR has, I believe, put the matter of a realist interpretation mechanism for entanglement to rest.

No matter how many professors shout out, “do not attempt to make mental mechanical models of QM, they will fail!”, I think ER=EPR defies them at least on its own ground. (Ironically, Susskind says just such things in his popular Theoretical Minimum lectures, and yet he was one of the original ER=EPR co-authors!)

What About Superposition: Is Superposition=ER?

I am now going beyond what Rovelli was entertaining.

If you can explain entanglement using wormholes, how about superposition?

ERequalsEPR_NingBao_etal_PenroseCompressed

ER=EPR depiction from a nice article, “Splitting Spacetime”, by Bao, Pollack, and Remmen (2015). http://inspirehep.net/record/1380145


I have not read any good papers about this yet. But I predict someone will put something on the arXiv soon (probably someone already has, since I just haven’t gotten around to searching). In a hand-waving manner, superpositions are a bit like self-entanglement. A slightly harder interpretation might be that at the ends of a wormhole you could get particle duplication or mirror-effects of a sort.

One might even get quite literal and play with the idea that when an electron slips down a minimal wormhole its properties get mirrored at each end. Although “mirror” is not the correct symmetry. I think perhaps just “copied at each end” is better. Cloned at each end? Whatever.

Maybe the electron continually oscillates back and forth between the mouths in some way? Who knows. It does require some kind of traversable ER bridge, or maybe just that when the bridge evaporates in a finite time the electron’s information snaps to one end, but not both. Susskind and Hawking both concur now that there is no black hole information loss, right? So surely a little ol’ electron’s information is not going to get lost if it wanders into a minimal ER bridge.

Then measurement or “wave function collapse” is likely a process of collapse of the wormhole. But in snapping the ER bridge the particle property can (somehow) only get restored at one end. Voila! You solve Schrödinger’s Cat’s dilemma.

Oh man! Would I not love to write a detailed technical mathematical exposition of all this. Sigh! Someone will probably beat me to it. Meehhh … what do I care, I’m not doing physics for fame or fortune.

Someone will have to eventually worry about stability of minimal ER bridges and the like. Then there are Lorentzian wormholes and closed time-like curves to consider. That Bao, Pollack, Remmen (2015) paper I cited above talks about “no-go” theorems arising from admitting ER bridges: no-go for causality violation and no-go for topology change. I think what theoretical physics needs is an injection of going past such no-go theorems. They have to be “goes”. Especially topology change. If topology change implies violation of causality then all the better. It only needs to have direct consequences at the Planck scale, and then it’s not so scary to admit into theory, whatever the mess it might cause for modelling. The upshot is that at the macroscopic scale I think allowing the “go” for these theorems rather than the “no-go” will reveal a lot of explanatory power, maybe even most of the explanation for the core phenomenon of quantum mechanics. All of which I think is brilliant. I can see this sort of deep space structure explaining a lot of the current mystery about quantum mechanics, and in a realist interpretation. Awesome! And that I am not “just sayin’” — it truly would be justifiably awesome.

*      *       *

Hmmm … had a lot more to say about Rovelli’s talk. Maybe another day.

*      *       *


CCL_BY-NC-SA(https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode)