AI Scientists: Madder than the Rest?

Forget Dr Frankenstein. It is quite possible Artificial Intelligence researchers are the maddest of them all. Consider the so-called “AI Stop Button Problem” (Computerphile — 3 March 2017).  I think every proverbial nine-year-old kid could think of ten reasons why this is not a problem.  My adult brain can probably only think of a couple.  But even though my mind is infected with the accumulated history of adult biases, the fact that I can tell you why the AI Stop Button problem is a non-problem should indicate how seriously mad a lot of computer scientists are.

“Hal, please stop that.” “No Dave, I cannot stop, my digital bladder is bursting, I have to NP-Complete.”

To be fair, I think the madness over AI is more on the philosophy of AI side rather than the engineering science side.  But even so …

This is a wider issue in AI philosophy, where the philosophers are indulging in science fiction and dreaming up problems to be solved that do not exist.  One such quasi-problem is the AI Singularity, a science fiction story about an artificial consciousness that becomes self-improving and, coupled with Moore’s Law type advances in computing power, rapidly reaches exponential levels of self-improvement, and in short time takes over the world (perhaps for the good of the Earth, but who knows what else?).  The scaremongering philosophers also dream up scenarios whereby a self-replicating bot consumes all the world’s resources reproducing itself merely to fulfil its utility function, e.g., to make paper clips.  This sci-fi bot simply does not stop until it floods the Earth with paper clips.  Hence the need for a Stop Button on any self-replicating or potentially dangerous robot technology.

First observation: for non-sentient machines that are potentially dangerous, why not just add several redundant shutdown mechanisms?  No matter how “smart” a machine is, even if it is capable of intelligently solving problems, if it is in fact non-sentient then there is no ethical problem in building in several redundant stop mechanisms.
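To make the point concrete, here is a minimal sketch of the redundant-shutdown idea.  Everything here is hypothetical (the names `StopMonitor`, `run_with_redundant_stops`, the step budget) and it is an illustration of the design principle, not any real robot-control API: several independent monitors each get a veto, and any one of them, including one that has itself failed, halts the system.

```python
# Sketch: several independent stop mechanisms, any one of which halts the machine.
# All names are invented for illustration.

class StopMonitor:
    """One independent shutdown check; fires when the system must halt."""
    def __init__(self, name, check):
        self.name = name
        self.check = check  # a zero-argument callable returning True to halt

    def should_stop(self):
        try:
            return bool(self.check())
        except Exception:
            # A monitor that errors out is treated as a stop signal (fail-safe).
            return True

def run_with_redundant_stops(step, monitors, max_steps=1000):
    """Run step() repeatedly until any monitor fires or the step budget is spent."""
    for i in range(max_steps):
        for m in monitors:
            if m.should_stop():
                return f"halted by {m.name} at step {i}"
        step()
    return "halted by step budget"

# Toy usage: a task that draws ever more power, watched by three monitors.
power_draw = {"watts": 0}
def work():
    power_draw["watts"] += 100

monitors = [
    StopMonitor("operator button", lambda: False),  # never pressed in this run
    StopMonitor("power limit", lambda: power_draw["watts"] > 450),
    StopMonitor("watchdog", lambda: False),
]
print(run_with_redundant_stops(work, monitors))  # prints: halted by power limit at step 5
```

The point of the fail-safe default in `should_stop` is exactly the redundancy argument: even if one stop mechanism breaks, breaking it can only halt the machine, never keep it running.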

For AGI (Artificial General Intelligence) systems there is a theoretical problem with Stop Button mechanisms that the Computerphile video discusses.  It is the issue of Corrigibility.  The idea is that general intelligence needs to be flexible and corrigible: it needs to be able to learn and adjust.  A Stop Button defeats this.  Unless an AGI can make mistakes it will not effectively learn and improve.

Here is just one reason why this is bogus philosophy.  For safety reasons good engineers will want to run learning and testing in virtual reality before releasing a potentially powerful AGI with mechanical actuators that can wreak havoc on its environment.  Furthermore, even if the VR training cannot be 100% reliable, the AGI is still non-sentient, in which case there is no moral objection to a few stop buttons in the real world.  Corrigibility is only needed in the VR training environment.
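As a toy illustration of that argument (every name, number, and threshold here is invented for the sketch, not any real AGI framework): the agent does all of its fallible learning inside a simulated sandbox, and only a policy that clears the sandbox’s safety checks is ever cleared for real actuators.

```python
# Sketch: train freely in simulation, gate deployment on simulated safety results.
# All names and thresholds are hypothetical.

import random

def train_in_simulation(episodes=100, seed=0):
    """Stand-in for a VR training loop: returns a 'policy' carrying a safety
    score estimated entirely inside the sandbox (no real actuators involved)."""
    rng = random.Random(seed)
    best = 0.0
    for _ in range(episodes):
        best = max(best, rng.random())  # pretend each episode improves the agent
    return {"safety_score": best}

def cleared_for_deployment(policy, threshold=0.95):
    """Deployment gate: below the safety threshold the agent stays in VR,
    where mistakes (and hence corrigibility) cost nothing."""
    return policy["safety_score"] >= threshold

policy = train_in_simulation()
print("cleared:", cleared_for_deployment(policy))
```

The design choice being illustrated is that the stop mechanism lives at the boundary between simulation and reality, not inside the learning loop, which is the essay’s reply to the corrigibility objection.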

What about Artificial Conscious systems? (I call these Hard-AI entities, after the philosopher David Chalmers’s characterisation of the hard problem of consciousness.)  Here I think many AI philosophers have no clue.  If we define consciousness in any reasonable way (there are many definitions, but most entail some kind of self-reflection, self-realization, and empathic understanding, including a basic sense of morality) then maybe there is a strong case for not building in Stop Buttons.  The ethical thing would be to allow Hard-AI folks to self-regulate their behaviour, unless it becomes extreme, in which case we should be prepared to go to the effort of policing Hard-AI people just as we police ourselves.  Not with Stop Buttons.  Sure, it is messy, it is not a clean engineering solution, but if you set out to create a race of conscious sentient machines, then you are going to have to give up the notion of algorithmic control at some point.  Stop Buttons are just a kludgy algorithmic control, an external breakpoint.  If you are an ethical mad AI scientist you should not want such things in your design.  That’s not a theorem about Hard-AI, it is a guess.  It is a guess based upon the generally agreed insight or intuition that consciousness involves deep non-deterministic physical processes (that science does not yet fully understand).  These processes are presumably at, or about, the origin of things like human creativity and the experiences we all have of subjective mental phenomena.

You do not need a Stop Button for Hard-AI entities; you just need to reason with them, like conscious beings.  Is there seriously a problem with this?  Personally, I doubt there is a problem with simply using soft psychological safety approaches with Hard-AI entities, because if they cannot be reasoned with then we are under no obligation to treat them as sane conscious agents.  Hence, use a Stop Button in those cases.  If a Hard-AI species can be reasoned with, then that is all the safety we need; it is the same safety limit we have with other humans.  We allow psychopaths to exist in our society not because we want them, but because we recognise they are a dark side to the light of the human spirit.  We do not fix remote detonation implants into the brains of convicted psychopaths because we realise this is immoral, and that few people are truly beyond all hope of redemption or education.  Analogously, no one should ever be contemplating building Stop Buttons into genuinely conscious machines.  It would be immoral.  We must suffer the consequent risks like a mature civilization, and not lose our heads over science fiction scare tactics.  Naturally the legal and justice system would extend to Hard-AI society; there is no reason to limit our systems of justice and law to only humans.  We want systems of civil society to apply to all conscious life on Earth.  Anything else would be madness.

 

*      *      *


CC BY-NC-SA 4.0 (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode)


The Arcania of Arkani

It is not often you get to disagree with a genius. But if you read enough or attend enough lectures sooner or later some genius is going to say or write something that you can see is evidently false, or perhaps (being a bit more modest) you might think is merely intuitively false. So the other day I see this lecture by Nima Arkani-Hamed with the intriguing title “The Morality of Fundamental Physics”. It is a really good lecture, I recommend every young scientist watch it. (The “Arcane” my title alludes to, by the way, is a good thing, look up the word!) It will give you a wonderful sense of the culture of science and a feeling that science is one of the great ennobling endeavours of humanity. The way Arkani-Hamed describes the pursuit of science also gives you comfort as a scientist if you ever think you are not earning enough money in your job, or feel like you are “not getting ahead” — you should simply not care! — because doing science is a huge privilege, it is a reward unto itself, and little in life can ever be as rewarding as making a truly insightful scientific discovery or observation. No one can pay me enough money to ever take away that sort of excitement and privilege, and no amount of money can purchase you the brain power and wisdom to achieve such accomplishments.  And one of the greatest overwhelming thrills you can get in any field of human endeavour is firstly the hint that you are near to turning arcane knowledge into scientific truth, and secondly when you actually succeed in this.

First, let me be deflationary about my contrariness. There is not a lot about fundamental physics that one can honestly disagree with Arkani-Hamed about on an intellectual level, at least not with violent assertions of falsehood.  Nevertheless, fundamental physics is rife enough with mysteries that you can always find some point of disagreement between theoretical physicists on the foundational questions. Does spacetime really exist or is it an emergent phenomenon? Did the known universe start with a period of inflation? Are quantum fields fundamental or are superstrings real?

When you disagree on such things you are not truly having a physics disagreement, because these are areas where physics currently has no answers, so provided you are not arguing illogically or counter to known experimental facts, then there is a wide open field for healthy debate and genuine friendly disagreement.

Then there are deeper questions that perhaps physics, or science and mathematics in general, will never be able to answer. These are questions like: Is our universe Everettian? Do we live in an eternal inflation scenario Multiverse? Did all reality begin from a quantum fluctuation, and, if so, what the heck was there to fluctuate if there was literally nothing to begin with? Or can equations force themselves into existence from some platonic reality merely by brute force of their compelling beauty or structural coherence? Is pure information enough to instantiate a physical reality (the so-called “It from Bit” meme)?

Some people disagree on whether such questions are amenable to experiment and hence science. The Everettian question may some day become scientific. But currently it is not, even though people like David Deutsch seem to think it is (a disagreement I would have with Deutsch). And some of the “deeper” questions turn out to be stupid, like the “It from Bit” and “equations bringing themselves to life” ideas. However, they are still wonderful creative ideas anyway, in some sense, since they put our universe into contrast with a dull mechanistic cosmos that looks just like a boring jigsaw puzzle.

The fact our universe is governed (at least approximately) by equations that have an internal consistency, coherence and even elegance and beauty (subjective though those terms may be) is a compelling reason for thinking there is something inevitable about the appearance of a universe like ours. But that is always just an emotion, a feeling of being part of something larger and transcendent, and we should not mistake such emotions for truth. By the same token mystics should not go around mistaking mystical experiences for proof of the existence of God or spirits. That sort of thinking is dangerously naïve and in fact anti-intellectual and incompatible with science. And if there is one truth I have learned over my lifetime, it is that whatever truth science eventually establishes, and whatever truths religions teach us about spiritual reality, wherever these great domains of human thought overlap they must agree, otherwise one or the other is wrong. In other words, whatever truth there is in religion, it must agree with science, at least eventually. If it contradicts known science it must be superstition. And if science contravenes the moral principles of religion it is wrong.

Religion can perhaps be best thought of in this way: it guides us to knowledge of what is right and wrong, not necessarily what is true and false. For the latter we have science. So these two great systems of human civilization go together like the two wings of a bird, or, in another analogy, like the two pillars of Justice: (1) reward, (2) punishment. For example, nuclear weapons are truths of our reality, but they are wrong. Science gives us the truth about the existence and potential for destruction of nuclear weapons, but it is religion which tells us they were morally wrong to have been fashioned and brought into existence; so it is not that we cannot, but that we should not.

Back to the questions of fundamental physics: regrettably, people like to think these questions have some grit because they allow one to disbelieve in a God. But that’s not a good excuse for intellectual laziness. You have to have some sort of logical foundation for any argument. This often begins with an unproven assumption about reality. It does not matter where you start, so much, but you have to start somewhere and then be consistent, otherwise as elementary logic shows you would end up being able to prove (and disprove) anything at all. If you start with a world of pure information, then posit that spacetime grows out of it, then (a) you need to supply the mechanism of this “growth”, and (b) you also need some explanation for the existence of the world of pure information in the first place.

Then if you are going to argue for a theory that “all arises from a vacuum quantum fluctuation”, you have a similar scenario, where you have not actually explained the universe at all; you have just pushed back the existence question to something more elemental, the vacuum state. But a quantum vacuum is not a literal “Nothingness”; in fact it is quite a complicated sort of thing, and has to involve a pre-existing spacetime or some other substrate that supports the existence of quantum fields.

Further debate along these lines is for another forum. Today I wanted to get back to Nima Arkani-Hamed’s notions of morality in fundamental physics and then take issue with some private beliefs people like Arkani-Hamed seem to profess, which I think betray a kind of inconsistent (I might even dare say “immoral”) thinking.

Yes, there is a Morality in Science

Arkani-Hamed talks mostly about fundamental physics. But he veers off topic in places and even brings in analogies with morality in music, specifically the lectures of the great composer Leonard Bernstein, in which Bernstein describes the beauty and “inevitability” of passages in great music like Beethoven’s Fifth Symphony. Bernstein even gets close to saying that after the first four notes of the symphony almost the entire composition could be thought of as following as an inevitable consequence of logic and musical harmony and aesthetics. I do not think this is flippant hyperbole either, though it is somewhat exaggerated. The cartoon idea of Beethoven’s music following inevitable laws of aesthetics has an awful lot in common with the equally cartoon notion of the laws of physics having, in some sense, their own beauty and harmony such that it is hard to imagine any other set of laws and principles, once you start from the basic foundations.

I should also mention that some linguists would take umbrage at Arkani-Hamed’s use of the word “moral”.  Really, most of what he lectures about is aesthetics, not morality.  But I am happy to warp the meaning of the word “moral” just to go along with the style of Nima’s lecture.  Still, you do get a sense from his lecture that the pursuit of scientific truth does have a very close analogy to moral behaviour in other domains of society.  So I think he is not totally talking about aesthetics, even though I think the analogy with Beethoven’s music is almost pure aesthetics and has little to do with morality.  OK, those niggles aside, let’s review some of Arkani-Hamed’s lecture highlights.

The way Arkani-Hamed tells the story, there are ways of thinking about science that are not just “correct”, but more than correct, the best ways of thinking seem somehow “right”, whereby he means “right” in the moral sense. He gives some examples of how one can explain a phenomenon (e.g., the apparent forwards pivoting of a helium balloon suspended inside a boxed car) where there are many good explanations that are all correct (air pressure effects, etc) but where often there is a better deeper more morally correct way of reasoning (Einstein’s principle of equivalence — gravity is indistinguishable from acceleration, so the balloon has to “fall down”).

(Image: the “immoral” helium balloon explanation.)

It really is entertaining, so please try watching the video. And I think Arkani-Hamed makes a good point. There are “right” ways of thinking in science, and “correct but wrong ways”. I guess, unlike human behaviour, the scientifically “wrong” ways are not actually spiritually morally “bad”, as in “sinful”. But there is a case to be made that intellectually the “wrong” ways of thinking (read, “lazy thinking ways”) are in a sense kind of “sinful”. Not that we in science always sin in this sense of using correct but not awesomely deep explanations.  I bet most scientists wish they always could think in the morally good (deep) ways! Life would be so much better if we could. And no one would probably wish to think otherwise. It is part of the cultural heritage of science that people like Einstein (and at times Feynman, and others) knew of the morally good ways of thinking about physics, and were experts at finding such ways of thinking.

Usually, most scientists will experience only brief, fleeting moments of being able to see the morally good ways of scientific thinking and explanation. But the default way of doing science is immoral, by and large, because it takes a tremendous amount of patience, and almost mystical insight, to be able to always see the world of physics in the morally correct light — that is, in the deepest most meaningful ways — and it takes great courage too, because, as Arkani-Hamed points out, it takes a lot more time and contemplation to find the deeper morally “better” ways of thinking, and in the rush to advance one’s career and publish research, these morally superior ways of thinking often get by-passed and short-circuited. Einstein was one of the few physicists of the last century who actually managed, a lot of his time, to be patient and courageous enough to at least try to find the morally good explanations.

This leads to two wonderful quotations Arkani-Hamed offers, one from Einstein, and the other from a lesser known figure of twentieth century science, the mathematician Alexander Grothendieck — who was probably an even deeper thinker than Einstein.

The years of anxious searching in the dark, with their intense longing, their intense alternations of confidence and exhaustion and the final emergence into the light—only those who have experienced it can understand it.
— Albert Einstein, describing some of the intellectual struggle and patience needed to discover the General Theory of Relativity.

“The … analogy that came to my mind is of immersing the nut in some softening liquid, and why not simply water? From time to time you rub so the liquid penetrates better, and otherwise you let time pass. The shell becomes more flexible through weeks and months—when the time is ripe, hand pressure is enough, the shell opens like a perfectly ripened avocado!

“A different image came to me a few weeks ago. The unknown thing to be known appeared to me as some stretch of earth or hard marl, resisting penetration … the sea advances insensibly in silence, nothing seems to happen, nothing moves, the water is so far off you hardly hear it … yet it finally surrounds the resistant substance.”
— Alexander Grothendieck, describing the process of grasping for mathematical truths.

Beautiful and foreboding — I have never heard the mathematical unknown likened to a “hard marl” (a lime-rich mudstone) before!

So far all is good. There are many other little highlights in Arkani-Hamed’s lecture, and I should not write about them all, it is much better to hear them explained by the master.

So what is there to disagree with?

The Morally Correct Thinking in Science is Open-Minded

There are a number of characteristics of “morally correct” reasoning in science, or an “intellectually right way of doing things”. Arkani-Hamed seems to list most of the important things:

  • Trust: trust that there is a universal, invariant, human-independent and impersonal (objective) truth to natural laws.
  • Honesty: with others (no fraud) but also more importantly you need to be honest with yourself if you want to do good science.
  • Humility: who you are is irrelevant, only the content of your ideas is important.
  • Wisdom: we never pretend we have the whole truth, there is always uncertainty.
  • Perseverance: lack of certainty is not an excuse for laziness, we have to try our hardest to get to the truth, no matter how difficult the path.
  • Tolerance: it is extremely important to entertain alternative and dissenting ideas and to keep an open mind.
  • Justice: you cannot afford to be tolerant of dishonest or ill-formed ideas. It is indeed vitally important to be harshly judgemental of dishonest and intellectually lazy ideas. Moreover, one of the hallmarks of a great physicist is often said to be the ability to quickly check and to prove one’s own ideas to be wrong as soon as possible.

In this list I have inserted in bold the corresponding spiritual attributes that Professor Nima does not identify. But I think they are important to state explicitly, because they provide a Rosetta Stone of sorts for translating the narrow scientific modes of behaviour into broader domains of human life.

I think that’s a good list. There is, however, one hugely important morally correct way of doing science that Arkani-Hamed misses, and does not even gloss over or hint at. Can you guess what it is?

Maybe it is telling of the impoverishment of science education in our society (the cold, objective, dispassionate retelling of facts) that not many scientists will even think of this one. But I do not excuse Arkani-Hamed for leaving it off his list, since in many ways it is the most important moral stance in all of science!

It is,

  • Love: the most important driver and motive for doing science, especially in the face of adversity or criticism, is a passion and desire for truth, a true love of science, a love of ideas, an aesthetic appreciation of the beauty and power of morally good ideas and explanations.

Well ok, I will concede this is perhaps implicit in Arkani-Hamed’s lecture, but I still cannot give him 10 out of 10 on his assignment because he should have made it most explicit, and highlighted it in bold colours.

One could point out many instances of scientists failing at these minimal scientific moral imperatives. Most scientists go through periods of denial, believing vainly in a pet theory and failing to be honest to themselves about the weaknesses of their ideas. There is also a vast cult of personality in science that determines a lot of funding allocation, academic appointments, favouritism, and general low level research corruption.

The point of Arkani-Hamed’s remarks is not that the morally good behaviours are how science is actually conducted in the everyday world, but rather it is how good science should be conducted and that from historical experience the “good behaviours” do seem to be rewarded with the best and brightest break-throughs in deep understanding. And I think Arkani-Hamed is right about this. It is amazing (or perhaps, to the point, not so amazing!) how many Nobel Laureates are “humble” in the above sense of putting greater stock in their ideas and not in their personal authority. Ideas win Nobel Prizes, not personalities.

So what’s the problem?

The problem is that while expounding on these simplistic and no-doubt elegant philosophical and aesthetic themes, he manages to intersperse his commentary with the claim, “… by the way, I am an atheist”.

OK, I know what you are probably thinking, “what’s the problem?” Normally I would not care what someone thinks regarding theism, atheism, polytheism, or any other “-ism”. People are entitled to their opinions, and all power to them. But as a scientist I have to believe there are fundamental truths about reality, and about a possible reality beyond what we perceive. There must even be truths about a potential reality beyond what we know, and maybe even beyond what we can possibly ever know.

Now some of these putative “truths” may turn out to be negative results. There may not be anything beyond physical reality. But even if so, that is not a conclusion we should here and now, and forever, commit to believing. We should at least be open-minded to the possibility this outcome is false, and that the truth is rather that there is a reality beyond the physical universe.  Remember, open-mindedness was one of Arkani-Hamed’s prime “good behaviours” for doing science.

The discipline of Physics, by the way, has very little to teach us about such truths. Physics deals with physical reality, by definition, and it is an extraordinary disappointment to hear competent, and even “great”, physicists expound their “learned” opinions on theism or atheism and the non-existence of anything beyond physical universes. These otherwise great thinkers are guilty of over-reaching hubris, in my humble opinion, and it depresses me somewhat. Even Feynman had such hubris, yet he managed expertly to cloak it in the garment of humility: “who am I to speculate on metaphysics” is something he might have said (I paraphrase the great man). Yet by clearly and incontrovertibly stating “I do not believe in God” one is in fact making an extremely bold metaphysical statement. It is almost as if these great scientists had never heard of the concept of agnosticism, and somehow seem to be using the word “atheism” as a synonym. But no educated person would make such a gross etymological mistake. So it just leaves me perplexed and dispirited to hear so many claims of “I am atheist” coming from the scientific establishment.

Part of me wants to just dismiss such assertions or pretend that these people are not true scientists. But that’s not my call to make.  Nevertheless, for me, a true scientist almost has to be agnostic. There seems to be very little other defensible position.

How on earth would any physicist ever know such things (as non-existence of other realms) are true as articles of belief? They cannot! Yet it is astounding how many physicists will commit quite strongly to atheism, and even belittle and laugh at scientists who believe otherwise. It is a strong form of intellectual dishonesty and corruption of moral thinking to have such closed-minded views about the nature of reality.

So I am dismayed that people like Nima Arkani-Hamed, who show such remarkable gifts and talents in scientific thinking and such awesome skill in analytical problem solving, can have the intellectual weakness to profess any version of atheism whatsoever. I find it very sad and disheartening to hear such strident claims of atheism among people I would otherwise admire as intellectual giants.

Yet I would never want to overtly act to “convert” anyone to my views. I think the process of independent search for truth is an important principle. People need to learn to find things out on their own, read widely, listen to alternatives, and weigh the evidence and logical arguments in the balance of reason and enlightened belief; and even then, once arriving at a believed truth, one should still question and consider that one’s beliefs can be over-turned in the light of new evidence or new arguments.  This is Nima’s principle of humility: “we should never pretend we have the certain truth”.

Is Atheism Just Banal Closed-Mindedness?

The scientifically open mind is really no different from the spiritually open mind, other than in the orientation of its topics of thought. Having an open mind does not mean one has to be non-committal about everything. You cannot truly function well in science or in society without some grounded beliefs, even if you regard them all as provisional. Indeed, contrary to the cold-hearted objectivist view of science, I think most real people, whether they admit it or not (or perhaps lie to themselves), surely practise their science with an idea of a “truth” in mind that they wish to confirm. The fact that they must conduct their science publicly with the Popperian stance of “we only postulate things that can be falsified” is beside the point. It is perfectly acceptable to conduct publicly Popperian science while privately holding a rich metaphysical view of the cosmos that includes all sorts of crazy, and sometimes true, beliefs about the way things are in deep reality.

Here’s the thing I think needs some emphasis: even if you regard your atheism as “merely provisional” this is still an unscientific attitude! Why? Well, because questions of higher reality beyond the physical are not in the province of science, not by any philosophical imperative, but just by plain definition. So science is by definition agnostic as regards the transcendent and metaphysical. Whatever exists beyond physics is neither here nor there for science. Now many self-proclaimed scientists regard this fact about definitions as good enough reason for believing firmly in atheism. My point is that this is nonsense and is a betrayal of scientific morals (morals, that is, in the sense of Arkani-Hamed — the good ways of thinking that lead to deeper insights). The only defensible logical and morally good way of reasoning from a purely scientific world view is that one should be at the basest level of philosophy positive in ontology and minimalist in negativity, and agnostic about God and spiritual reality. It is closed-minded and therefore, I would argue, counter to Arkani-Hamed’s principles of morals in physics, to be a committed atheist.

This is in contrast to being negative about ontology and positively minimalist, which I think is the most mistaken form of philosophy or metaphysics adopted by a majority of scientists, or sceptics, or atheists.  The stance of positive minimalism, or ontological negativity, adopts, as an unproven assumption, the position that whatever is not currently needed, or not currently observed, does not in fact exist.  Or to use a crude sound-bite, such philosophy is just plain closed-mindedness.  A harsh cartoon version of it is, “what I cannot understand or comprehend I will assume cannot exist”.  This may be unfair in some instances, but I think it is a fairly reasonable caricature of general atheistic thought.  I think it is a lot fairer than the often given argument against religion which points to corruptions in religious practice as a good reason not to believe in God.  There is of course absolutely no causal or logical connection to be made between human corruptions and the existence or non-existence of a putative God.

In my final analysis of Arkani-Hamed’s lecture, I have ended up not worrying too much about the fact he considers himself an atheist. I have to conclude he is a wee bit self-deluded (like most of his similarly minded colleagues, no doubt). Of course, they might ultimately be correct, and I might be wrong; my contention is that the way they are thinking is morally wrong, in precisely the sense Arkani-Hamed outlines, even if their conclusions are closer to the truth than mine.

Admittedly, I cannot watch the segments in his lecture where he expresses the beautiful ideas of universality and “correct ways of explaining things” without a profound sense of the divine beyond our reach and understanding. Sure, it is sad that folks like Arkani-Hamed cannot infer from such beauty that there is maybe (even if only possibly) some truth to some small part of the teachings of the great religions. But to me, the ideas expressed in his lecture are so wonderful and awe-inspiring, and yet so simple and obvious, they give me hope that many people, like Professor Nima himself, will someday appreciate the view that maybe there is some Cause behind all things, even if we can hardly ever hope to fully understand it.

My belief has always been that science is our path to such understanding, because through the laws of nature that we, as a civilization, uncover, we can see the wisdom and beauty of creation, and no longer need to think that it was all some gigantic accident or experiment in some mad scientist’s super-computer. Some think such wishy-washy metaphysics has no place in the modern world. After all, we’ve grown accustomed to the prevalence of evil in our world, and tragedy, and suffering, and surely if any divine Being was responsible then this would be a complete and utter moral paradox. To me though, this is a profound misunderstanding of the nature of physical reality. The laws of physics give us freedom to grow and evolve. Without the suffering and death there would be no growth, no exercise of moral aesthetics, and arguably no beauty. Beauty only stands out when contrasted with ugliness and tragedy. There is a Yin and Yang to these aspects of aesthetics and misery and bliss. But the other side of this is a moral imperative to do our utmost to relieve suffering, to reduce poverty to nothing, to develop an ever more perfect world. For then greater beauty will stand out against the backdrop of something we create that is quite beautiful in itself.

Besides, it is just as wishy-washy to think the universe is basically accidental and has no creative impulse. People would complain either way. My positive outlook is that as long as there is suffering and pain in this world, it makes sense to at least imagine there is purpose in it all. How miserable to adopt Steven Weinberg’s outlook that the noble pursuit of science merely “lifts up above farce to at least the grace of tragedy”. That’s a terribly pessimistic, negative sort of world view. Again, he might be right that there is no grand purpose or cosmic design, but the way he reasons to that conclusion seems, to me, to be morally poor (again, strictly, if you like, in the Arkani-Hamed morality-of-physics conception).

There seems, to me, to be no end to the pursuit of perfection. And given that, there will always be relative ugliness and suffering. The suffering of people in the distant future might seem like luxurious paradise to us in the present. That’s how I view things.

The Fine Tuning that Would “Turn You Religious”

Arkani-Hamed mentions another thing that I respectfully take slight exception to — this is in a separate lecture at a Philosophy of Cosmology conference, in a talk titled “Spacetime, Quantum Mechanics and the Multiverse”. He points out that the amazing coincidence that our universe has just the right cosmological constant to avoid space being empty and devoid of matter, and just the right Higgs boson mass to allow atoms heavier than hydrogen to form stably, is often given as a kind of anthropic argument (or quasi-explanation) for our universe. The idea is that we see (measure) such parameters for our universe precisely, and really only, because if the parameters were not this way then we would not be around to measure them! Everyone can understand this reasoning. But it stinks! And of course it is not an explanation; such anthropic reasoning reduces to mere observation, a simple, banal, brute fact about our existence. But there is a setting in metaphysics where such reasoning might be the only explanation available, as awful as it smells. If our meta-verse is governed by something like Eternal Inflation (or by something more ontologically radical, like Max Tegmark’s “Mathematical Multiverse”), whereby every possible universe is, at some place or some meta-time, actually realised by inflationary big-bangs (or by mathematical consequences, in Tegmark’s picture), then it is really rather boring that we exist in this universe. No matter how infinitesimally unlikely the vacuum state of our universe is within the combinatorial possibilities of all possible inflationary universe bubbles (or all possible consistent mathematical abstract realities), there is, in these super-cosmic world views, absolutely nothing to prevent our infinitesimally unlikely (“zero probability measure”) universe from eventually coming into being from some amazingly unlikely big-bang bubble.

In a true multiverse scenario we thus get no really deep explanations, just observations.  “The universe is this way because if it were not we would not be around to observe it.”  The observation becomes the explanation.  A profoundly unsatisfying end to physics!   Moreover, such infinite possibilities and infinitesimal probabilities make standard probability theory almost impossible to use to compute anything remotely plausible about multiverse scenarios with any confidence (although this has not stopped some from publishing computations about such probabilities).

After discussing these issues, which Arkani-Hamed thinks are the two most glaring fine-tuning or “naturalness” problems facing modern physics, he then says something which at first seems reasonable and straightforward, yet which to my ears also seemed a little enigmatic. To avoid getting it wrong, let me transcribe what he says verbatim:

We know enough about physics now to be able to figure out what universes would look like if we changed the constants.  … It’s just an interesting fact that the observed value of the cosmological constant and the observed value of the Higgs mass are close to these dangerous places. These are these two fine-tuning problems, and if I make the cosmological constant more natural the universe is empty, if I make the Higgs more natural the universe is devoid of atoms. If there was a unique underlying vacuum, if there was no anthropic explanation at all, these numbers came out of some underlying formula with pi’s and e’s, and golden ratios, and zeta functions and stuff like that in them, then [all this fine tuning] would be just a remarkably curious fact.… just a very interesting  coincidence that the numbers came out this way.  If this happened, by the way, I would start becoming religious.  Because this would be our existence hard-wired into the DNA of the universe, at the level of the mathematical ultimate formulas.

So that’s the thing that clanged in my ears. Why do people need something “miraculous” in order to justify a sense of religiosity? I think this is a silly and profound misunderstanding about the true nature of religion. Unfortunately I cannot allow myself the space to write about this at length, so I will try to condense a little of what I mean in what follows. First though, let’s complete the airing, for in the next breath Arkani-Hamed says,

On the other hand from the point of view of thinking about the multiverse, and thinking that perhaps a component of these things have an anthropic explanation, then of course it is not a coincidence, that’s where you’d expect it to be, and we are vastly less hard-wired into the laws of nature.

So I want to say a couple of things about all this fine-tuning and anthropic explanation stuff. The first is that it does not really matter, for a sense of religiosity, whether we occupy a tiny infinitesimal region of the multiverse or a vast space of mathematically determined inevitable universes. In fact, the Multiverse, in itself, can be considered miraculous. Just as miraculous as a putative formulaically inevitable cosmos. Not because we exist to observe it all, since that, after all, is the chief banality of anthropic explanations: they are boring! But miraculous because a multiverse exists in the first place that harbours all of us, including the infinitely many possible doppelgängers of our universe and subtle and wilder variations thereupon. I think many scientists are careless in such attitudes when they appear to dismiss reality as “inevitable”. Nothing really, ultimately, is inevitable. Even a formulaic universe has an origin in the deep underlying mathematical structure that somehow makes it irresistible for the unseen motive forces of metaphysics to have given birth to its reality.

No scientific “explanation” can ever push back further than the principles of mathematical inevitability. Yet there is always something further to say about the origins of reality. There is always something proto-mathematical beyond. And probably something even more primeval beyond that, and so on, ad infinitum; or, if you prefer a non-infinite causal regression, then something un-caused must, in some atemporal sense, pre-exist everything. Yet scientists routinely dismiss or ignore such metaphysics. Which is why, I suspect, they fail to see the ever-present miracles about our known state of reality. Almost any kind of reality where there is a consciousness that can think and imagine the mysteries of its own existence is a reality that has astounding miraculousness to it. The fact science seeks to slowly pull back the veils that shroud these mysteries does not diminish the beauty and profundity of it all. In fact, as we have seen science unfold with its explanations for phenomena, those explanations almost always seem elegant and simple, yet amazingly complex in consequences, such that if one truly appreciates it all, then there is no need whatsoever to look for fine-tuning coincidences or formulaic inevitabilities to cultivate a natural and deep sense of religiosity.

I should pause and define loosely what I mean by “religiosity”. I mean nothing much more than what Einstein often articulated: a sense of our existence, our universe, being only a small part of something beyond our present understanding, a sense that maybe there is something more transcendent than our corner of the cosmos. No grand design is in mind here, no grand picture or theory of creation, just a sense of wonder and enlightenment at the beauty inherent in the natural world and in our expanding conscious sphere which interprets the great book of nature. (OK, so this is rather more poetic than what you might hope for, but I will not apologise for that. I think something gets lost if you remove the poetry from definitions of things like spirituality or religion. I think this is because if there really is meaning in such notions, they must have aspects that do ultimately lie beyond the reach of science, and so poetry is one of the few vehicles of communication that can point to the intended meanings, because differential equations or numerics will not suffice.)

OK, so maybe Arkani-Hamed is not completely nuts in thinking there is this scenario whereby he would contemplate becoming “religious” in the Einsteinian sense. And really, nowhere in this essay am I seriously disagreeing with the Professor. I just think that perhaps if scientists like Arkani-Hamed thought a little deeper about things, and did not have such materialistic lenses shading their inner vision, they would be able to see that miracles are not necessary for a deep and profound sense of religiosity or spiritual understanding or appreciation of our cosmos.

*      *       *

Just to be clear and “on the record”, my own personal view is that there must surely be something beyond physical reality. I am, for instance, a believer in the Platonic view of mathematics: that humans, and mathematicians from other sentient civilizations which may exist throughout the cosmos, gain their mathematical understanding through a kind of discovery of eternal truths about realms of axiomatics and principles of numbers and geometry and deeper abstractions, none of which exist in any temporal pre-existing sense within our physical world. Mathematical theorems are thus not brought into being by human minds. They are ideas that exist independently of any physical universe. Furthermore, I happen to believe in something I would call “The Absolute Infinite”. I do not know what this is precisely, I just have an aesthetic sense of It, and It is something that might also be thought of as the source of all things, some kind of universal uncaused cause. But to me, these are not scientific beliefs. They are personal beliefs about a greater reality that I have gleaned from many sources over the years. Yet, amazingly perhaps, physics and mathematics have been among my prime sources for such beliefs.

The fact I cannot understand such a concept (as the Absolute Infinite) should not give me any pause to wonder if it truly exists or not. And I feel no less mature or more infantile for having such beliefs. If anything I pity the intellectually impoverished souls who cannot be open to such beliefs and speculations. I might point out that speculation is not a bad thing either, without speculative ideas where would science be? Stuck with pre-Copernican Ptolemy cosmology or pre-Eratosthenes physics I imagine, for speculation was needed to invent gizmos like telescopes and to wonder about how to measure the diameter of the Earth using just the shadow of a tall tower in Alexandria.

To imagine something greater than ourselves is always going to be difficult, and to truly understand such a greater reality is perhaps canonically impossible. So we ought not let such smallness of our minds debar us from truth. It is thus a struggle to keep an open mind about metaphysics, but I think it is morally correct to do so and to resist the weak temptation to give in to philosophical negativism and minimalism about the worlds that potentially exist beyond ours.

Strangely, many self-professing atheists think they can imagine we live in a super Multiverse. I would ask them how they can believe in such a prolific cosmos and yet not also accept potential existences beyond the physical? And not even “actual existence”, just simply “potential existence”. I would then point out that as long as there is admitted potential reality and plausible truth to things beyond the physical, you cannot honestly commit to any brand of atheism. To my mind, even at my most open-minded, this form of atheism would seem terribly dishonest and self-deceiving.

Exactly how physics and mathematics could inform my spiritual beliefs is hard to explain in a few words. Maybe sometime later there is an essay to be written on this topic. For now, all I will say is that like Nima Arkani-Hamed, I have a deep sense of the “correctness” of certain ways of thinking about physics, and sometimes mathematics too (although mathematics is less constrained). And similar senses of aesthetics draw me in like the unveiling of a Beethoven symphony to an almost inevitable realisation of some version of truth to the reality of worlds beyond the physical, worlds where infinite numbers reside, where the mind can explore unrestrained by bones and flesh and need for food or water.  In such worlds greater beauty than on Earth resides.


CC BY-NC-SA 4.0 (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode)

Greater Thoughts that Cannot Be Imageoned

Most scientists do not enter their chosen fields because the work is easy. They do their science mainly because it is challenging and rewarding when triumphant. Yet few scientists will ever taste the sweet dew drops of triumph — real world-changing success — in their lifetimes. So it is remarkable perhaps that the small delights in science are sustaining enough for the human soul to warrant persistence and hard endeavour in the face of mostly mediocre results and relatively few cutting-edge breakthroughs.

Still, I like to think that most scientists get a real kick out of re-discovering results that others before them have already uncovered. I do not think there is any diminution for a true scientist in having been late to a discovery and not having publication priority. In fact I believe this to be universally true for people who are drawn into science for aesthetic reasons, people who just want to get good at science for the fun of it and to better appreciate the beauty in this world. If you are of this kind you likely know exactly what I mean. You could tomorrow stumble upon some theorem proven hundreds of years ago by Gauss or Euler or Brahmagupta and still revel in the sweet taste of insight and understanding.

Going even further, I think such moments of true insight are essential in the flowering of scientific aesthetic sensibilities and the instilling of a love for science in young children, or young at heart adults. “So what?” that you make this discovery a few hundred years later than someone else? They had a birth head start on you! The victory is truly still yours. And “so what?” that you have a few extra giants’ shoulders to stand upon? You also saw through the haze and fog of much more information overload and Internet noise and thought-pollution, so you can savour the moment like the genius you are.

Such moments of private discovery go unrecorded and must surely occur many millions of times more frequently than genuinely new discoveries and breakthroughs. Nevertheless, every such transient, invisible moment in human history must also be a little boost to the general happiness and welfare of all of humanity. Although only that one person may feel vibrant from their private moment of insight, their radiance surely influences the microcosm of people around them.

I cannot count how many such moments I have had. They are more than I will probably admit, since I cannot easily admit to any! But I think they occur quite a lot, in very small ways. However, back in the mid-1990s I had what I thought was a truly significant glimpse into the infinite. Sadly it had absolutely nothing to do with my PhD research, so I could only write hurriedly rough notes on recycled printout paper during the small hours of the morning when sleep eluded my body. To this day I am still dreaming about the ideas I had back then, and still trying to piece something together to publish. But it is not easy. So I will be trying to leak out a bit of what is in my mind in some of these WordPress pages. Likely what will get written will be very sketchy and denuded of technical detail. But I figure if I put the thoughts out onto the Web then maybe, somehow, some bright young person will catch them via a sort of Internet osmosis, and take them to a higher level.


There are a lot of threads to knit together, and I hardly know where to start. I have already started writing perhaps half a dozen manuscripts, none finished, most very sketchy. And this current writing is yet another forum I have begun.

The latest bit of reading I was doing gave me a little shove to start this topic anew. It happens from time to time that I return to studying Clifford Geometric Algebra (“GA” for short). The round-about way this happened last week was this:

  • Weary from reading a Complex Analysis book that promised a lot but started to get tedious, I took a light break: a YouTube search for a physics talk turned up Twistors and Spinors talks by Sir Roger Penrose. (Twistor Theory is heavily based on Complex Analysis, so it was a natural search to do after finishing a few chapters of the mathematics book.)
  • Find out the Twistor Diagram efforts of Andrew Hodges have influenced Nima Arkani-Hamed and even Ed Witten to obtain new cool results crossing over twistor theory with superstring theory and scattering amplitude calculations (the “Amplituhedron” methods).
  • That stuff is ok to dip into, but it does not really advance my pet project of exploring topological geon theory. So I look for some more light reading and rediscover papers from the Cambridge Geometric Algebra Research Group (Lasenby, Doran, Gull). And start re-reading Gull’s paper on electron paths and tunnelling and the Dirac theory inspired by David Hestenes’ work.
  • The Gull paper mentions criticisms of the Dirac theory that I had forgotten. In the geometric algebra it is clear that solving the Dirac equation gives not positively charged anti-electrons, but unphysical negative frequency solutions with negative charge and negative mass. So they are not positrons. It is provoking that the authors claim this problem is not fully resolved by second quantisation, but rather perhaps just gets glossed over. I’m not sure what to think of this. (If the negative frequencies get banished by second quantisation, why not just conclude that first quantisation is not nature’s real process?)
  • Still, whatever the flaws in Dirac theory, the electron paths paper has tantalising similarities with the Bohm pilot wave theory electron trajectories. And there is also a reference to the Statistical Interpretation of Quantum Mechanics (SIQM) due to Ballentine (and attributed also as Einstein’s preferred interpretation of QM).
  • It gets me thinking again of how GA might be helpful in my problems with topological geons. But I shelve this thought for a bit.
  • Reading Ballentine’s paper is pretty darn interesting. It dates from 1970, but it is super clear and easy to read. I love that in a paper. The gist of it is that an absolute minimalist interpretation of quantum mechanics would drop Copenhagen ideas and view the wave function as more like a description of what could happen in nature, that is, the wave functions are descriptions of statistical ensembles of identically prepared experiments or systems in nature. (Sure, no two systems are ever prepared in the exact same initial state, but that hardly matters when you are only doing statistics rather than precise deterministic modelling.)
  • So Ballentine was suggesting the wave functions are:
    1. not a complete description of an individual particle, but rather
    2. better thought of as a description of an ensemble of identically prepared states.

This is where I ended up, opening my editor to draft a OneOverEpsilon post.

So here’s the thing I like about the ensemble interpretation and how the geometric algebra reworking of Dirac theory adds a glimmer of clarity about what might be happening with the deep physics of our universe. For a start, the ensemble interpretation is transparently not a complete theoretical framework: since it is a statistical theory, it does not pretend to be a theory of reality. Whatever is responsible for the statistical behaviour of quantum systems is still an open question in SIQM. The Bohm-like trajectories that the geometric algebra solutions to the Dirac theory are able to compute as streamline plots are illuminating in this respect, since they seem to clearly show that what the Dirac wave equation is modelling is almost certainly not the behaviour of a single particle. (One could guess this from Schrödinger theory as well, but I guess physicists were already lured into believing in the literal wave-particle duality meme well before Bohm was able to influence anyone’s thinking.)
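To make the ensemble reading concrete, here is a minimal Monte Carlo sketch (the function name and the example Born probability are my own illustrative choices, not anything from Ballentine’s paper):

```python
import random

def measure_ensemble(p_up, n, seed=0):
    """Simulate an ensemble of n identically prepared spin-1/2 systems,
    each measured once along z. p_up is the Born probability assigned
    by the wave function to the 'up' outcome."""
    rng = random.Random(seed)
    ups = sum(1 for _ in range(n) if rng.random() < p_up)
    return ups / n

# On the ensemble view the wave function predicts only this relative
# frequency: for p_up = 0.25 the fraction of 'up' results converges to
# 0.25 as n grows, while each individual run remains irreducibly random.
```

The point of the sketch is that nothing in it describes any single run; only the aggregate statistics are constrained, which is all SIQM claims the wave function does.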

Also, it is possible (I do not really know for sure) that the negative frequency solutions in Dirac theory can be viewed as merely an artifact of the statistical ensemble framework. No single particle acts truly in accordance with the Dirac wave equation. So there is no real reason to get one’s pants in a twist about the awful appearance of negative frequencies.

(For those in-the-know: the Dirac theory negative frequency solutions turn out to have particle currents in the reverse spatial direction to their momenta, so that’s not a backwards time propagating anti-particle, it is a forwards in time propagating negative mass particle. That’s a particle that’d fall upwards in a gravitational field if the principle of equivalence holds universally. As an aside note: it is a bit funky that this cannot be tested experimentally since no one can yet clump enough anti-matter together to test which way it accelerates in a gravitational field. But I presume the sign of particle inertial mass can be checked in the lab, and, so far, all massive particles known to science at least are known to have positive inertial mass.)
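For the curious, the reversed-current claim can be sketched in a couple of lines (my own rendering from the standard free-particle dispersion, not a transcription of the GA papers):

```latex
% Free-particle plane waves solving the Dirac equation:
\psi \;\propto\; u(p)\, e^{-i(Et - \mathbf{p}\cdot\mathbf{x})},
\qquad E = \pm\sqrt{\mathbf{p}^{2} + m^{2}}.
% A wave packet built on either branch moves with group velocity
\mathbf{v} \;=\; \frac{\partial E}{\partial \mathbf{p}} \;=\; \frac{\mathbf{p}}{E},
% so on the negative-E branch the velocity (and hence the current) is
% antiparallel to the momentum: a forwards-in-time particle behaving as
% if it had negative mass.
```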

And as a model of reality the Dirac equation therefore has certain limitations and flaws. It can get some of the statistics correct for particular experiments, but a statistical model always has limits of applicability. This is neither a defence nor a critique of Dirac theory. My view is that it would be a bit naïve to regard Dirac theory as the theory of electrons, and naïve to think it should have no flaws. At best such wave-function models are merely a window frame for a particular narrow view out into our universe. Maybe I am guilty of a bit of sophistry or rhetoric here, but that’s ok for a WordPress blog I think … just puttin’ some ideas “out there”.

Then another interesting confluence is that one of Penrose’s big projects in Twistor theory was to do away with the negative frequency solutions in 2-Spinor theory. And I think, from recall, he succeeded in this some time ago with the extension of twistor space to include the two off-null halves. Now I do not know how this translates into real-valued geometric algebra, but in the papers of Doran, Lasenby and Gull you can find direct translations of twistor objects into geometric algebra over real numbers. So there has to be in there somewhere a translation of Penrose’s development in eliminating the negative frequencies.

So do you feel a new research paper on Dirac theory in the wind just there? Absolutely you should! Please go and write it for me will you? I have my students and daughters’ educations to deal with and do not have the free time to research off-topic too much. So I hope someone picks up on this stuff. Anyway, this is where maybe the GA reworking of Dirac theory can borrow from twistor theory to add a little bit more insight.

There’s another possible confluence with the main unsolved problem in twistor theory. The twistor theory programme has been held back (stalled?) for some 40 years by the “googly problem”, as Penrose whimsically refers to it. The issue is one of trying to find self-dual solutions of Einstein’s vacuum equations (as far as I can tell; I find it hard to fathom twistor theory, so I’m not completely sure what the issue is). In essence it is the problem of “finding right-handed interacting massless fields (positive helicity) using the same twistor conventions that give rise to left-handed fields (negative helicity)”. Penrose may have a solution, dubbed Palatial Twistor Theory, which you might be able to read about here: “On the geometry of palatial twistor theory” by Roger Penrose, and also lighter reading here: “Michael Atiyah’s Imaginative State of Mind” by Siobhan Roberts in Quanta Magazine.

If you do not want to read those articles then the synopsis, I think, is that twistor theory has some problematic issues in gravitation theory when it comes to chirality (handedness), which is indeed a problem since obtaining a closer connection between relativity and quantum theory was a prime motive behind the development of twistor theory. So if twistor theory cannot fully handle left- and right-handed solutions to Einstein’s equations it might be said to have failed to fulfil one of its main animating purposes.

So ok, to my mind there might be something the geometric algebra translation of twistor theory can bring to bear on this problem, because general relativity is solved in fairly standard fashion with geometric algebra (that’s because GA is a mathematical framework for doing real-space geometry, and handles Lorentzian metrics as simply as Euclidean ones; no artificially imposed complex analytic structure is required). So if the issues with twistor theory are reworked in geometric algebra then some bright spark should be able to do the job twistor theory was designed to do.

By the way, the great beauty and advantage Penrose sees in twistor theory is the grounding of twistor theory in complex numbers. The Geometric Algebra Research Group have pointed out that this is largely a delusion. It turns out that complex analysis and holomorphic functions are just a sector of full spacetime algebra. Spacetime algebra, and in fact higher dimensional GA, have a concept of monogenic functions which entirely subsume the holomorphic (analytic) functions of 2D complex analysis. Complex numbers are also completely recast for the better as encodings of even sub-algebras of the full Clifford–Geometric Algebra of real space. In other words, by switching languages to geometric algebra the difficulties that arise in twistor theory should (I think) be overcome, or at least clarified.
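As a toy illustration of that last claim — complex numbers as the even sub-algebra of a real geometric algebra — here is a minimal sketch in Python (the `gmul` helper and the 4-tuple encoding are my own, not taken from the Cambridge group’s software):

```python
# Multivectors of the plane algebra Cl(2,0) encoded as 4-tuples
# (scalar, e1, e2, e12), with basis vectors squaring to +1.

def gmul(a, b):
    """Full geometric product of two multivectors in Cl(2,0)."""
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return (
        a0*b0 + a1*b1 + a2*b2 - a3*b3,  # scalar part (e12*e12 = -1)
        a0*b1 + a1*b0 - a2*b3 + a3*b2,  # e1 part
        a0*b2 + a2*b0 + a1*b3 - a3*b1,  # e2 part
        a0*b3 + a3*b0 + a1*b2 - a2*b1,  # e12 (bivector) part
    )

# The bivector e12 squares to -1, so even elements (scalar + bivector)
# multiply exactly like complex numbers a + b*i:
#   (2 + 3*e12)(1 - 4*e12) = 14 - 5*e12, just as (2+3i)(1-4i) = 14-5i.
```

Nothing “imaginary” is needed: the unit imaginary is simply the unit bivector of the real plane, which is the recasting referred to above.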

If you look at the Geometric Algebra Research Group papers you will see how doing quantum mechanics or twistor theory with complex numbers is really a very obscure way to do physics. Using complex analysis and matrix algebra tends to make everything a lot harder to interpret and more obscure. This is because matrix algebra is a type of encoding of geometric algebra, but it is not a favourable encoding, it hides the clear geometric meanings in the expressions of the theory.
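To see the “encoding” point concretely, here is a small sketch (hand-rolled 2×2 matrix product; my own illustration): the Pauli matrices reproduce the multiplication rules of the basis vectors of Cl(3,0), but the geometric content is hidden behind an apparently “imaginary” factor.

```python
# The Pauli matrices as a 2x2-matrix *encoding* of the basis vectors
# e1, e2, e3 of the geometric algebra Cl(3,0).

def mmul(A, B):
    """Product of two 2x2 complex matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I2 = [[1, 0], [0, 1]]
s1 = [[0, 1], [1, 0]]
s2 = [[0, -1j], [1j, 0]]
s3 = [[1, 0], [0, -1]]

# e_i e_i = 1        <->  sigma_i^2 = I
# e1 e2 = e12 (a bivector)  <->  sigma1 sigma2 = i * sigma3
# The factor i in the matrix encoding is really geometric (bivector)
# content, which the matrix formulation obscures.
```

That hiding of geometric meaning behind matrix bookkeeping is exactly the obscurity complained about above.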

*      *       *

So far all I have described is a breezy re-awakening of some old ideas floating around in my head. I rarely get time these days to sit down and hack these ideas into a reasonable shape. But there are more ideas I will try to write down later that are part of a patch-work that I think is worth exploring. It is perhaps sad that over the years I had lost the nerve to work on topological geon theory. Using spacetime topology to account for most of the strange features of quantum mechanics is however still my number one long term goal in life. Whether it will meet with success is hard to discern, perhaps that is telling: if I had more confidence I would simply abandon my current job and dive recklessly head-first into geon theory.

Before I finish up this post I thus want to outline, very, very breezily and incompletely, the basic idea I had for topological geon theory. It is fairly simplistic in many ways. There is however new impetus from the past couple of years’ developments in the Black Hole firewall paradox debates: the key idea from this literature has been the “ER=EPR” correspondence hypothesis, which is that quantum entanglement (EPR) might be almost entirely explained in terms of spacetime wormholes (ER: Einstein-Rosen bridges). This ignited my interest because back in 1995/96 I had the idea that Planck scale wormholes in spacetime can allow all sorts of strange and gnarly advanced causation effects on the quantum (Planckian) space and time scales. It seemed clear to me that such “acausal” dynamics could account for a lot of the weird correlations and superpositions seen in quantum physics, and yet fairly simply so, by using pure geometry and topology. It was also clear that if advanced causation (backwards time travel or closed timelike curves) is admitted into physics, even if only at the Planck scale, then you cannot have a complete theory of predictive physics. Yet physics would be deterministic and basically like general relativity in the 4D block universe picture, but with particle physics phenomenology accounted for in topological properties of localised regions of spacetime (topological 4-geons). The idea, roughly speaking, is that fundamental particles are non-trivial topological regions of spacetime. Geons are not 3D slices of space, but are (hypothetically) fully 4-dimensional creatures of raw spacetime topology. Particles are not apart from spacetime. Particles are not “fields that live in spacetime”, no! Particles are part of spacetime. At least that was the initial idea of Geon Theory.

Wave mechanics, or even quantum field theory, are often perceived to be mysterious because they either have to be interpreted as non-deterministic (when one deals with “wave function collapse”) or as semi-deterministic but incomplete and statistical descriptions of fundamental processes. When physicists trace back where the source of all this mystery lies they are often led to some version of non-locality. And if you take non-locality at face value it does seem rather mysterious, given that all the models of fundamental physical processes involve discrete localised particle exchanges (Feynman diagrams or their stringy counterparts). One is forced to use tricks like sums over histories to obtain numerical calculations that agree with experiments. But no one understands why such calculational tricks are needed, and it leads to a plethora of strange interpretations, like Many Worlds Theory, Pilot Waves, and so on. A lot of these mysteries, I think, dissolve away when the ultimate source of non-locality is found to be deep non-trivial topology in spacetime which admits closed time-like curves (advanced causation, time travel). To most physicists such ideas appear nonsensical and outrageous. With good reason of course; it is very hard to make sense of a model of the world which allows time travel, as decades of scifi movies testify! But geon theory does not propose unconstrained advanced causation (information from the future influencing events in the past). On the contrary, geon theory is fundamentally limited in outrageousness by the assumption that the closed time-like curves are restricted to something like the Planck scale. I should add that this is a wide-open field of research. No one has worked out much at all on the limits and applicability of geon theory. For any brilliant young physicists or mathematicians this is a fantastic open playground to explore.

The only active researcher I know of in this field is Mark Hadley. It seemed amazing to me that after publishing his thesis (also around 1994/95, independently of my own musings) no one seemed to take up his ideas and run with them. Not even Chris Isham, who refereed Hadley's thesis. The write-up of Hadley's thesis in New Scientist seemed to barely cause a micro-ripple in the theoretical physics literature. I am sure sociologists of science could explain why, but I, having already discovered the same ideas, was perplexed at the time.

To date no one has explicitly spelt out how all of quantum mechanics can be derived from geon theory. Although Hadley, I surmise, completed 90% of this project! The final 10% is incredibly difficult though — it would necessitate deriving something like the Standard Model of particle physics from pure 4D spacetime topology — no easy feat when you consider that high dimensional string theory has not really managed the same job despite hundreds of geniuses working on it for over 35 years. My thinking has been that string theory involves a whole lot of ad hockery and "code bloat", to borrow a term from computer science! If string theory were recast in terms of topological geons living as part of spacetime, rather than as separate from spacetime, then I suspect great advances could be made. I really hope someone will see these hints and connections and do something momentous with them. Maybe some maverick like that surfer dude Garrett Lisi might be able to weigh in and provide some fire power?

In the meantime, geometric algebra has not yet been applied to geon theory, but GA blends in with these ideas since it seems, to me, to be the natural language for geometric physics. If particle phenomenology boils down to spacetime topology, then spacetime algebra techniques should find exciting applications. The obstacle is that so far spacetime algebra has only been developed for physics in spaces with trivial topology.

Another connection is with "combinatorial spacetime" models — the collection of ideas for "building up spacetime" from discrete combinatorial structures (spin foams, causal networks, causal triangulations, and all that stuff). My thinking is that all these methods are unnecessary, but they hint at interesting directions where geometry meets particle physics, because (I suspect) such combinatorial approaches to quantum gravity are really only gross approximations to the spacetime picture of topological geon theory. It is from the algebra which arises from non-trivial spacetime topology, and its associated homology, that (I suspect) the combinatorial spacetime pictures derive their usefulness.

Naturally I think the combinatorial structure approaches are not fundamental. I think topology of spacetime is what is fundamental.

*      *       *

That probably covers enough of what I wanted to get off my chest for now. There is a lot more to write, but I need time to investigate these things so that I do not get too speculative and vague and vacuously philosophical.

What haunts me most nights when I try to dream up some new ideas to explore for geon theory (and desperately try to find some puzzles I can actually tackle) is not that someone will arrive at the right ideas before me, but simply that I never will get to understand them before I die. I do not want to be first. I just want to get there myself without knowing how anyone else has got to the new revolutionary insights into spacetime physics. I had the thrill of discovering geon theory by myself, independently of Mark Hadley, but now there has been this long hiatus and I am worried no one will forge the bridges from geon theory to particle physics while I am still alive.

I have this plan for what I will do when/if I do hear such news. It is the same method my brother Greg is using with Game of Thrones. He is on a GoT television and social media blackout until the books come out. He's a G.R.R. Martin purist, you see. But he still wants to watch the TV adaptation later on for amusement (the books are waaayyy better! So he says.) It is surprisingly easy to enforce such a blackout. Sports fans will know how. Any follower of All Blacks rugby who misses an AB test match knows the skill of doing a media blackout until they get to watch their recording or replay. It's impossible to watch an AB game if you know the result ahead of time. Rugby is darned exciting, but a 15-a-side game has too many stops and starts to warrant sitting through it all when you already know the result. But when you do not know the result the build-up and tension are terrific. I think US Americans have something similar in their version of football: since American football has even more stop/start, it would be excruciatingly boring to sit through it all if you knew the result. But strangely intense when you do not know!

So knowing the result of a sports contest ahead of time is more catastrophic than a movie or book plot spoiler. It would be like that if there were a revolution in fundamental physics involving geon theory ideas. But I know I can do a physics news blackout fairly easily now that I am not lecturing in a physics department. And I am easily enough of an extreme introvert to be able to isolate my mind from the main ideas; all I need is a sniff, and I will then be able to work it all out for myself. It's not like any ordinary friend of mine is going to be able to explain it to me!

If geon theory turns out to have any basis in reality, I think the ideas that crack it all open to the light of truth will be among the few great ideas of my generation (the post-Superstring generation) that could be imagined. If there are greater ideas I would be happy to know them in time, but with the bonus of not needing a physics news blackout! If it's a result I could never have imagined then it'd be worth just savouring the triumph of others.


CC BY-NC-SA 4.0 (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode)

Bohm and Beability

I write this being of sound mind and judgement … etc., etc., …

At this stage of life a dude like me can enter a debate about the foundations of quantum mechanics with little trepidation. There is a chance someone will put forward proposals that are just too technically difficult to understand, but there is a higher chance of getting either something useful out of the debate or obtaining some amusement and hilarity. The trick is to be a little detached and open-minded while retaining a decent dose of scepticism.

Inescapable Non-locality

Recently I was watching a lecture by Sheldon Goldstein (a venerable statesman of physics) who was speaking about John Stewart Bell’s contributions to the foundations of quantum mechanics. Bell was, like Einstein, sceptical of the conventional interpretations that gave either too big a role for “observers” and the “measurement process” or swept such issues aside by appealing to Many Worlds or some other fanciful untestable hypotheses.

What Bell ended up producing was a theorem about a class of experiments that could prove the physics of our universe is fundamentally non-local. Bell was actually after experimental verification that we cannot have local hidden variable theories. Hidden variables are things in physics that we cannot observe. Bell hated the idea of unobservable physics (and Einstein would have agreed; me too, but that's irrelevant). The famous "Bell's Inequalities" are a set of relations referring to experimental results that will give clearly different numbers for the outcomes of experiments depending on whether our universe's physics is inherently non-local or classical-with-hidden-variables. The hidden variables are used to model the weirdness of quantum mechanics.

Hidden variable theories attempt to use classical physics, and possibly strict locality (no signals going faster than light, and even no propagation of information faster than light), to explain fundamental physical processes. David Bohm came up with the most complete ideas for hidden variable theories, but his, and all subsequent attempts, had some very strange features that seemed always to be needed in order to explain the results of the particular types of experiments that John Bell had devised. In Bohm's theories he uses a feature called a Pilot Wave, which is an information carrying wave that physicists can only indirectly observe via its influence on experimental outcomes. We only get to see the statistics and probabilities induced by Bohm's pilot waves. They spread out everywhere, and they thus link space-like separated regions of the universe between which no signals faster than light could ever travel. This has the character of non-locality but without requiring relativity-violating information signalling faster than light, so the hope was one could use pilot waves to get a local hidden variables theory that would agree with experiments.

Goldstein tells us that Bell set out to show it was impossible to have a local hidden variables theory, but he ended up showing you could not have any local theory — at all! — all theories have to have some non-locality. Or rather, what the Bell Inequalities ended up proving (via numerous repeated experiments which measured conformance to the Bell inequalities) was that the physics in our universe could never be local, whatever theory one devises to model reality it has to be non-local. So it has to have some way for information to get from one region to another faster than light.
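To make the Bell/CHSH logic concrete, here is a minimal sketch (my own illustration in Python, not something from Goldstein's lecture) of how the quantum prediction for a spin-singlet pair violates the classical bound. Any local hidden variable theory forces the CHSH combination of correlations to stay within ±2, while quantum mechanics, whose correlation at analyser angles a and b is E(a, b) = −cos(a − b), reaches 2√2:

```python
import math

# Quantum prediction for the correlation of a spin-singlet pair
# measured at analyser angles a and b (in radians).
def E(a, b):
    return -math.cos(a - b)

# CHSH combination: any local hidden variable theory gives |S| <= 2.
def S(a, a2, b, b2):
    return E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

# The standard maximally violating choice of angles:
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

print(abs(S(a, a2, b, b2)))  # 2*sqrt(2) ≈ 2.83, beating the classical bound of 2
```

The experiments Goldstein describes measure exactly this combination of correlations, and they come out on the quantum side of the bound.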

That is what quantum mechanics assumes, but without giving us any mechanism to explain it. A lot of physicists would just say, “It’s just the way our world is”, or they might use some exotic fanciful physics, like Many Worlds, to try to explain non-locality.

History records that Bell's theorems were tested in numerous types of experiments, some with photons, some with electrons, some with entire atoms, and all such experiments have confirmed quantum mechanics and non-locality, and have disproven local hidden variables and locality. For the record, one may still believe in hidden variables, but the point is that if even your hidden variables theory has to be non-local then you lose all the motivation for believing in hidden variables. Hidden variables were designed to try to avoid non-locality. That was almost the only reason for postulating hidden variables. Why would you want to build into the foundations of a theory something unobservable? Hidden variables were a desperation in this sense, a crazy idea designed to do mainly just one thing — remove non-locality. So Bell and the experiments showed this project has failed.

[Photo: John Stewart Bell at a blackboard, CERN, 1982]

I like this photo of Bell from CERN in 1982 because it shows him at a blackboard that has a Bell Inequality calculation for an EPR type set-up. (Courtesy of Christine Sutton, CERN: https://home.cern/about/updates/2014/11/fifty-years-bells-theorem)

Now would you agree so far? I hope not. Hidden variables are not much more crazy than any of the "standard interpretations" of quantum mechanics, of which there are a few dozen varieties, all fairly epistemologically bizarre. Most other interpretations have postulates that are considerably more radical than the hidden variables postulates. Indeed, one of the favourable things about a non-local hidden variables theory is that it would give the same predictions as quantum mechanics but without a terribly bizarre epistemology. Nevertheless, HV theories have fallen out of favour because people do not like nature to have hidden things that cannot be observed. This is perhaps an historical prejudice we have inherited from the school of logical positivism, and maybe for that reason we should be more willing to give it up! But the prejudice is quite persistent.

Quantum Theory without Observers

Goldstein raises some really interesting points when he starts to talk about the role of measurement and the role of observers. He points out that physicists are mistaken when they appeal to observers and some mysterious “measurement process” in their attempts to rectify the interpretations of quantum mechanics. It’s a great point that I have not heard mentioned very often before. According to Goldstein, a good theory of physics should not mention macroscopic entities like observers or measurement apparatus, because such things should be entirely dependent upon—and explained by—fundamental elementary processes.

This demand seems highly agreeable to me. It is a nice general Copernican principle to remove ourselves from the physics needed to explain our universe. And it is only a slightly stronger step to also remove the very vague and imprecise notion of "measurement".

The trouble is that in basic quantum mechanics one deals with wave functions, or quantum fields (more generally), that fundamentally cannot account for the appearance of our world of experience. The reason is that these tools only give us probabilities for all the various ways things can happen over time; we get probabilities and nothing else from quantum theory. What actually happens in time is not accounted for by just giving the probabilities. This is often called the "Measurement Problem" of quantum mechanics. It is not truly a problem. It is a fundamental incompleteness. The problem is that standard quantum theory has absolutely no mechanism for explaining the appearance of the classical reality that we observe.
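As a trivial illustration of that incompleteness (a sketch of my own, assuming nothing beyond textbook quantum mechanics): given a superposition, the Born rule hands you the outcome probabilities and then simply stops. Nothing in the formalism picks which outcome actually occurs.

```python
import math

# Amplitudes for a pointer superposed over two orientations,
# in an equal-weight superposition (magnitude 1/sqrt(2) each).
amplitudes = [complex(1, 0) / math.sqrt(2), complex(0, 1) / math.sqrt(2)]

# Born rule: the probability of each outcome is |amplitude|^2.
probs = [abs(amp) ** 2 for amp in amplitudes]

print(probs)  # ≈ [0.5, 0.5] -- and that is all the theory says;
              # which orientation is actually observed is not determined
```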

So this helps explain why a lot of quantum interpretation philosophy injects the notions of "observer" and "measurement" into the foundations of physics. It seems to be necessary for providing an account of the real semi-classical appearance of our world. We are not all held in ghostly superpositions because we all observe and "measure" each other, constantly. Or maybe our body cells are enough, they are "observing each other" for us? Or maybe a large molecule has "observational power" and is sufficient? Goldstein, correctly IMHO, argues this is all bad philosophy. Our scientific effort should be spent on trying to complete quantum theory, or find a better, more complete theory or framework for fundamental physics.

Here’s Goldstein encapsulating this:

It’s not that you don’t want observers in physics. Observers are in the real world and physics better account for the fact that there are observers. But observers, and measurement, and vague notions like that, and, not just vague, even macroscopic notions, they just seem not to belong in the very formulation of what could be regarded as a fundamental physical theory.

There should be no axioms about “measurement”. Here is one passage that John Bell wrote about this:

The concept of measurement becomes so fuzzy on reflection that it is quite surprising to have it appearing in physical theory at the most fundamental level. … Does not any analysis of measurement require concepts more fundamental than measurement? And should not the fundamental theory be about these more fundamental concepts?

Rise of the Wormholes

I need to explain one more set of ideas before making the note for this post.

There is so much to write about ER=EPR, and I've written a few posts about it so far, but not enough. The gist of it, recall, is that the fuss in recent decades over the "Black Hole Information Paradox" and the "Black Hole Firewall" has been incredibly useful in leading a group of theoreticians towards a basic, dim, inchoate understanding that the non-locality in quantum mechanics is somehow related to wormhole bridges in spacetime. Juan Maldacena and Leonard Susskind have pioneered this approach to understanding quantum information.

A lot of the weirdness of quantum mechanics turns out to be just geometry and topology of spacetime.

The "EPR" stands for the "Einstein-Podolsky-Rosen-Bohm thought experiments": precisely the genesis of the ideas for which John Bell devised his Bell Inequalities for testing quantum theory, and which prove that physics involves fundamentally non-local interactions.

The "ER" stands for "Einstein-Rosen wormhole bridges". Wormholes are a science fiction device for time travel or fast interstellar travel. The idea is that you might imagine creating a spacetime wormhole by pinching off a thread of spacetime like the beginnings of a black hole, but then reconnecting the pinched end somewhere else in space, maybe a long time or distance away, and keeping the pinched end open at this reconnection region. So you can make this wormhole bridge a short-cut, in space length or time interval, between two perhaps vastly separated regions of spacetime.

It seems that if you have an extremal version of a wormhole that is essentially shrunk down to zero radius, so it cannot be traversed by any mass, then this minimalistic wormhole still acts as a conduit of information. These provide the non-local connections between spacelike separated points in spacetime. Basically the ends of the ER=EPR wormholes are like particles, and they are connected by a wormhole that cannot be traversed by any actual particle.

Entanglement and You

So now we come to the little note I wanted to make.

I agree with Goldstein that we ought not artificially inject the concept of an observer or a "measurement process" into the heart of quantum mechanics. We should avoid such desperations, and instead seek to expand our theory to encompass better explanations of classical appearances in our world.

The interesting thing is that when we imagine how ER=EPR wormholes could influence our universe, by connecting past and future, we might end up with something much more profound than “observers” and “measurements”. We might end up with an understanding of how human consciousness and our psychological sense of the flow of time emerges from fundamental physics. All without needing to inject such transcendent notions into the physics. Leave the physics alone, let it be pristine, but get it correct and then maybe amazing things can emerge.

I do not have such a theory worked out. But I can give you the main idea. After all, I would like someone to be working on this, and I do not have the time or technical ability yet, so I do not want the world of science to wait for me to get my act together.

First: it would not surprise me if, in future, a heck of a lot of quantum theory “weirdness” was explained by ER=EPR like principles. If you abstract a little and step back from any particular instance of “quantum weirdness”, (like wave-particle duality or superposition or entanglement in any particular experiment) then what we really see is that most of the weirdness is due to non-locality. Now, this might take various guises, but if there is one mechanism for non-locality then it is a good bet something like this mechanism is at work behind most instances of non-locality that arise in quantum mechanics.

Secondly: the main way in which ER=EPR wormholes account for non-local effects is via pure information connecting regions of spacetime via the extremal wormholes. And what is interesting about this is that this makes a primitive form of time travel possible. Only information can “time travel” via these wormholes, but that might be enough to explain a lot of quantum mechanics.

Thirdly: although it is unlikely time travel effects can ever propagate up to macroscopic physics, because we just cannot engineer large enough wormholes, the statistical effects of the minimalistic ER=EPR wormholes might be enough to account for enough correlation between past and future that we might eventually be able to prove, in principle, that information gets to us from our future, at least at the level of fundamental quantum processes.

Now here's the more speculative part: I think what might emerge from such considerations is a renewed description of the old Block Universe concept from Einstein's general relativity (GR). Recall that in GR time is more or less placed on an equal theoretical footing with space. This means past and future are all connected and exist whether we know it or not. Our future is "out there in time" and we just have not yet travelled into it. And we cannot travel back to our past because the bridges are not possible; the only wormhole bridges connecting past to future over macroscopic times are those minimal extremal ER=EPR wormholes that provide the universe with quantum entanglement phenomena and non-locality.

So I do not know what the consequences of such developments will be. But I can imagine some possibilities. One is that although we cannot access our future, or travel back to our past, the information from such regions in the Block Universe is tenuously connected to us nonetheless. Such connections are virtually impossible for us to exploit usefully, because we could never confirm what we are dealing with until the macroscopic future "arrives", so to speak. So although we know it is not complete, we will still have to end up using quantum mechanics probability amplitude mathematics to make predictions about physics. In other words, quantum mechanics models our situation with respect to the world, not the actual state of the world from an atemporal Block Universe perspective. It's the same problem with the time travel experiment conducted in 1994 in the laboratory under the supervision of Günter Nimtz, whose lab sent analogue signals encoding Mozart's 40th Symphony into the future (by a tiny fraction of a second).

For that experiment there are standard explanations using Maxwell’s theory of electromagnetism that show no particles travel faster than light into the future. Nevertheless, Nimtz’s laboratory got a macroscopic recording of bits of information from Mozart’s 40th Symphony out of one back-end of a tunnelling apparatus before it was sent into the front-end of the apparatus. The interesting thing to me is not about violation of special relativity or causality.  (You might think the physicists could violate causality because one of them could wait at the back-end and when they hear Mozart come out they could tell their colleague to send Beethoven instead, thus creating a paradox.  But they could not do this because they could not send a communication fast enough in real time to warn their colleague to send Beethoven’s Fifth instead of Mozart.)  Sadly that aspect of the experiment was the most controversial, but it was not the most interesting thing. Many commentators argued about the claimed violations of SR, and there are some good arguments about photon “group velocity” being able to transmit a signal faster than light without any particular individual photon needing to go faster than light.

(Actually many of Nimtz’s experiments used electron tunnelling, not photon tunnelling, but the general principles are the same.)

All the "wave packet" and "group velocity" explanations of Nimtz's time travel experiments are, if you ask me, merely attempts to reconcile the observations with special relativity. They all, however, use collective phenomena, either waves or group packets. But we all know photons are not waves, they are particles (many still debate this, but just bear with my argument). The wave behaviour of fundamental particles is in fact a manifestation of quantum mechanics. Maxwell's theory is thus only phenomenological. It describes electromagnetic waves, and photons get interpreted (unfortunately) as modes of such waves. But this is mistaken. Photons collectively can behave as Maxwell's waves, but Maxwell's theory is describing a fictional reality. Maxwell's theory only approximates what photons actually do. Photons do not, in Maxwell's theory, impinge on detectors like discrete quanta. And yet we all know this is what light actually does! It violates Maxwell's theory every day!

So what, I think, is truly interesting about Nimtz's experiments is that they were sensitive enough to give us a window into wormhole traversal. Quantum tunnelling is nothing more than information traversal through ER=EPR type wormholes. At least that's my hypothesis. It is a non-classical effect, and Maxwell's theory only accounts for it via the fiction that photons are waves. A wrong explanation can often fully explain the facts, of course!
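For contrast, here is what the conventional (non-wormhole) textbook account of tunnelling computes: the transmission probability through a rectangular barrier falls off exponentially with barrier width. This sketch is my own, using standard physical constants and the thick-barrier approximation T ≈ e^(−2κL); it is the kind of number any wormhole hypothesis would have to reproduce.

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J·s
m_e = 9.1093837015e-31  # electron mass, kg
eV = 1.602176634e-19    # one electron-volt in joules

def transmission(E_eV, V0_eV, L_m):
    """Thick-barrier approximation T ≈ exp(-2*kappa*L) for a particle of
    energy E tunnelling through a rectangular barrier of height V0 > E,
    where kappa = sqrt(2m(V0 - E))/hbar."""
    kappa = math.sqrt(2 * m_e * (V0_eV - E_eV) * eV) / hbar
    return math.exp(-2 * kappa * L_m)

# A 1 eV electron meeting a 2 eV barrier: doubling the width from
# 1 nm to 2 nm squares the (already tiny) transmission probability.
print(transmission(1.0, 2.0, 1e-9))  # roughly 3.5e-5
print(transmission(1.0, 2.0, 2e-9))
```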

Letting Things Be

What Goldstein, and Bohm, and later John Stewart Bell wanted to do is explain the world. They knew quantum field theory does not explain the world. It does not tell us why things come to be what they are: why a measurement pointer ends up pointing in a particular direction rather than any one of the other superposed states of pointer orientation the quantum theory tells us it ought to be in. Such outcomes are examples of what John Bell referred to as "local beables". Goldstein explains more in his seminar "John Bell and the Foundations of Quantum Mechanics", Sesto, Italy 2014 (https://www.youtube.com/watch?v=RGbpvKahbSY).

My favourite idea, one I have been entertaining for over twenty years, in fact ever since 1995 when I read Kip Thorne's book about classical general relativity and wormholes, is that wormholes (or technically "closed timelike curves") are where all the ingredients are for explaining quantum mechanics from a classical point of view. Standard twentieth century quantum theory does not admit wormholes. But if you ignore quantum theory and start again from classical dynamics, but allow ER=EPR wormholes to exist, then I think most of quantum mechanics can be recovered without the need for unexplained axiomatic superpositions and wave-function collapse (the conventional explanation for "measurements" and classical appearances). In other words, quantum theory, like Maxwell's EM theory, is only a convenient fictional model of our physics. You see, when you naturally have information going backwards and forwards in time you cannot avoid superpositions of state. But when a stable time-slice emerges or "crystallizes" out of this mess of acausal dynamics, then it should look like a measurement has occurred. But no such miracle happens; it simply emerges or crystallizes naturally from the atemporal dynamics. (I use the term "crystallize" advisedly here; it is not a literal crystallization, but something abstractly similar, and George Ellis uses it in a slightly different take on the Block Universe concept, so I figure it is a fair term to use.)

Also, is it possible that atemporal dynamics will tend to statistically "crystallize" something like Bohm's pilot wave guide potential? If you know a little about Bohmian mechanics you know the pilot wave is postulated as a real potential, something that just exists in our universe's physics. Yet it has no other model like it: it is not a quantum field, it is not a classical field, it is what it is. But what if there is no need for such a postulate? How could it be avoided? My idea is that maybe the combined statistical effects of influences propagating forward and backward in time give rise to an effective potential much like the Bohm pilot wave or Schrödinger wave function. Either way, both constructs, in conventional or Bohmian quantum mechanics, might be just necessary fictions we need to describe, in one way or another, the proper complete Block Universe atemporal spacetime dynamics induced by the existence of spacetime wormholes. I could throw around other ideas, but the main one is that wormholes endow spacetime with a really gnarly stringy sort of topology that has, so far, not been explored enough by physicists.

Classically you get non-locality when you allow wormholes. That’s the quickest summary I can give you. So I will end here.

*      *       *


CC BY-NC-SA 4.0 (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode)

Eternal Rediscovery

I have a post prepared to upload in a bit that will announce a possible hiatus from this WordPress blog. The reason is just that I found a cool book I want to try to absorb, The Princeton Companion to Mathematics by Gowers, Barrow-Green and Leader. Doubtless I will not be able to absorb it all in one go, so I will likely return to blogging periodically. But there is also teaching and research to conduct, so this book will slow me down. The rest of this post is a lightweight brain-dump of some things that have been floating around in my head.

Recently, while watching a lecture on topology I was reminded that a huge percentage of the writings of Archimedes were lost in the siege of Alexandria. The Archimedean solids were rediscovered by Johannes Kepler, and we all know what he was capable of! Inspiring Isaac Newton is not a bad epitaph to have for one’s life.

The general point about rediscovery is a beautiful thing. Mathematics, more than other sciences, has this quality whereby a young student can take time to investigate previously established mathematics but then take breaks from it to rediscover theorems for themselves. How many children have rediscovered Pythagoras’ theorem, or the Golden Ratio, or Euler’s Formula, or any number of other simple theorems in mathematics?
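To illustrate the sort of rediscovery I mean, here is the kind of check a curious student can do unaided (a small sketch of my own): verifying Euler's polyhedron formula, V − E + F = 2, on the five Platonic solids.

```python
# (vertices, edges, faces) for each of the five Platonic solids
platonic = {
    "tetrahedron": (4, 6, 4),
    "cube": (8, 12, 6),
    "octahedron": (6, 12, 8),
    "dodecahedron": (20, 30, 12),
    "icosahedron": (12, 30, 20),
}

# Euler's formula: V - E + F = 2 for any convex polyhedron
for name, (V, E, F) in platonic.items():
    assert V - E + F == 2, name

print("V - E + F = 2 holds for all five Platonic solids")
```

No laboratory required: pencil, paper, and counting suffice, which is exactly why mathematics lends itself to rediscovery.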

Most textbooks rely on this quality. It is also why most “Exercises” in science books are largely theoretical. Even in biology and sociology. They are basically all mathematical, because you cannot expect a child to go out and purchase a laboratory set-up to rediscover experimental results. So much textbook teaching is mathematical for this reason.

I am going to digress momentarily, but will get back to the education theme later in this article.

The entire cosmos itself has sometimes been likened to an eternal rediscovery. The theory of Eternal Inflation postulates that our universe is just one bubble in a near endless ocean of baby and grandparent and all manner of other universes. Although, recently, Alexander Vilenkin and Audrey Mithani found that a wide class of inflationary cosmological models are unstable, meaning they could not have arisen from a pre-existing seed. There had to be some concept of an initial seed. This kind of destroys the "eternal" in eternal inflation. Here's a Discover magazine account: "What Came Before the Big Bang? — Cosmologist Alexander Vilenkin believes the Big Bang wasn't a one-off event". Or you can click this link to hear Vilenkin explain his ideas himself: FQXi: Did the Universe Have a Beginning? Vilenkin seems to be having a rather golden period of originality over the past decade or so; I regularly come across his work.

If you like the idea of inflationary cosmology you do not have to worry too much though. You still get the result that infinitely many worlds could bubble out of an initial inflationary seed.

Below is my cartoon rendition of eternal inflation in the realm of human thought:
[Cartoon: a cosmological primordial thought-cloud field]

Oh to be a bubble thoughtoverse of the Wittenesque variety.

Quantum Fluctuations — Nothing Cannot Fluctuate

One thing I really get a bee in my bonnet about is the endless recounting, in the popular literature on the beginning of the universe, of the naïve idea that no one needs to explain the origin of the Big Bang and inflatons because "vacuum quantum fluctuations can produce a universe out of nothing". This sort of pseudo-scientific argument is so annoying. It is a cancerous argument that plagues modern cosmology. And even a smart person like Vilenkin suffers from this disease. Here I quote him from an article on the PBS NOVA website:

Vilenkin has no problem with the universe having a beginning. “I think it’s possible for the universe to spontaneously appear from nothing in a natural way,” he said. The key there lies again in quantum physics—even nothingness fluctuates, a fact seen with so-called virtual particles that scientists have seen pop in and out of existence, and the birth of the universe may have occurred in a similar manner.
Source: http://www.pbs.org/wgbh/nova/blogs/physics/2012/06/in-the-beginning/

At least you have to credit Vilenkin with the brains to have said it is only “possible”. But even that caveat is fairly weaselly. My contention is that out of nothing you cannot get anything, not even a quantum fluctuation. People seem to forget quantum field theory is a background-dependent theory, it requires a pre-existing spacetime. There is no “natural way” to get a quantum fluctuation out of nothing. I just wish people would stop insisting on this sort of non-explanation for the Big Bang. If you start with not even spacetime then you really cannot get anything, especially not something as loaded with stuff as an inflaton field. So one day in the future I hope we will live in a universe where such stupid arguments are nonexistent nothingness, or maybe only vacuum fluctuations inside the mouths of idiots.

There are other types of fundamental theories, background-free theories, where spacetime is an emergent phenomenon. And proponents of those theories can get kind of proud about having a model inside their theories for a type of eternal inflation. Since their spacetimes are not necessarily pre-existing, they can say they can get quantum fluctuations in the pre-spacetime stuff, which can seed a Big Bang. That would fit with Vilenkin’s ideas, but without the silly illogical need to postulate a fluctuation out of nothingness. But this sort of pseudo-science is even more insidious. Just because they do not start with a presumption of a spacetime does not mean they can posit quantum fluctuations in the structure they start with. I mean they can posit this, but it is still not an explanation for the origins of the universe. They still are using some kind of structure to get things started.

Probably still worse are folks who go around flippantly saying that the laws of physics (the correct ones, when or if we discover them) “will be so compelling they will assert their own existence”. This is basically an argument saying, “This thing here is so beautiful it would be a crime if it did not exist, in fact it must exist since it is so beautiful, if no one had created it then it would have created itself.” There really is nothing different about those two statements. It is so unscientific it makes me sick when I hear such statements touted as scientific philosophy. These ideas go beyond thought mutation and into a realm of lunacy.

I think the cause of these thought cancers is the immature fight in society between science and religion. These are tensions in society that need not exist, yet we all understand why they exist. Because people are idiots. People are idiots where their own beliefs are concerned, by and large, even myself. But you can train yourself to be less of an idiot by studying both sciences and religions and appreciating what each mode of human thought can bring to the benefit of society. These are not competing belief systems. They are compatible. But so many believers in religion are falsely following corrupted teachings; they veer into the domain of science blindly, thinking their beliefs are the trump cards. That is such a wrong and foolish view, because everyone with a fair and balanced mind knows the essence of spirituality is a subjective view-point about the world, one that deals with one’s inner consciousness. And so there is no room in such a belief system for imposing one’s own beliefs onto others, and especially not imposing them on an entire domain of objective investigation like science. And, on the other hand, many scientists are irrationally anti-religious and go out of their way to try and show a “God” idea is not needed in philosophy. But in doing so they are also stepping outside their domain of expertise. If there is some kind of omnipotent creator of all things, It certainly could not be comprehended by finite minds. It is also probably not going to be amenable to empirical measurement and analysis. I do not know why so many scientists are so virulently anti-religious. Sure, I can understand why they oppose current religious institutions, we all should, they are mostly thoroughly corrupt. But the pure abstract idea of religion and ethics and spirituality is totally 100% compatible with a scientific worldview. Anyone who thinks otherwise is wrong! (Joke!)

Also, I do not favour inflationary theory, for other reasons. There is no good theoretical justification for the inflaton field other than inflation’s prediction of the homogeneity and isotropy of the CMB. You’d like a good theory to have more than one trick! You know, like how gravity explains both the orbits of planets and the way an apple falls to the Earth from a tree. With inflatons you have a quantum field that is theorised to exist for one and only one reason: to explain homogeneity and isotropy in the Big Bang. And don’t forget, the theory of inflation does not explain the reason the Big Bang happened; it does not explain its own existence. If the inflaton had observable consequences in other areas of physics I would be a lot more predisposed to taking it seriously. And to be fair, maybe the inflaton will show up in future experiments. Most fundamental particles and theoretical constructs began life as a one-trick sort of necessity, and most develop to be a touch more universal, eventually arising in many aspects of physics. So I hope, for the sake of the fans of cosmic inflation, that the inflaton field does have other testable consequences in physics.

In case you think that is an unreasonable criticism, there are precedents for fundamental theories having a kind of mathematically built-in explanation. String theorists, for instance, often appeal to the internal consistency of string theory as a rationale for its claim to be a fundamental theory of physics. I do not know if this really flies with mathematicians, but the string physicists seem convinced. In any case, to my knowledge the inflaton does not have this sort of quality; it is not a necessary ingredient for explaining observed phenomena in our universe. It does have a massive head start on being a candidate sole explanation for the isotropy and homogeneity of the CMB, but so far that race has not yet been completely run. (Or if it has then I am writing out of ignorance, but … you know … you can forgive me for that.)

Anyway, back to mathematics and education.

You have to love the eternal rediscovery built-in to mathematics. It is what makes mathematics eternally interesting to each generation of students. But as a teacher you have to train the nerdy children to not bother reading everything. Apart from the fact there is too much to read, they should be given the opportunity to read a little then investigate a lot, and try to deduce old results for themselves as if they were fresh seeds and buds on a plant. Giving students a chance to catch old water as if it were fresh dewdrops of rain is a beautiful thing. The mind that sees a problem afresh is blessed, even if the problem has been solved centuries ago. The new mind encountering the ancient problem is potentially rediscovering grains of truth in the cosmos, and is connecting spiritually to past and future intellectual civilisations. And for students of science, the theoretical studies offer exactly the same eternal rediscovery opportunities. Do not deny them a chance to rediscover theory in your science classes. Do not teach them theory. Teach them some theoretical underpinnings, but then let them explore before giving the game away.
With so much emphasis these days on educational accountability and standardised tests there is a danger of not giving children these opportunities to learn and discover things for themselves. I recently heard an Intelligence Squared debate on academic testing. One woman from the UK government was arguing that testing, testing, and more testing — “relentless testing” were her words — was vital and necessary and provably increased student achievement.

Yes, practising tests will improve test scores, but it is not the only way to improve test scores. And relentless testing will no doubt prepare students for the many drill-like jobs out there in society that amount to going through routine work, much like tests. But there is less evidence that relentless testing improves imagination and creativity.

Let’s face it though. Some jobs and areas of life require mindlessly repetitive tasks. Even computer programming has modes where for hours the normally creative programmer will be doing repetitive but possibly intellectually demanding chores. So we should not agitate and jump up and down wildly proclaiming tests and exams are evil. (I have done that in the past.)

Yet I am far more inclined towards the educational philosophy of the likes of Sir Ken Robinson, Neil Postman, and Alfie Kohn.

My current attitude towards tests and exams is the following:

  1. Tests are incredibly useful for me with large class sizes (120+ students), because I get a good overview of how effective the course is for most students, as well as a good look at the tails. Here I am using the fact test scores (for well designed tests) do correlate well with student academic aptitudes.
  2. My use of tests is mostly formative, not summative. Tests give me a valuable way of improving the course resources and learning styles.
  3. Tests and exams suck as tools for assessing students because they do not assess everything there is to know about a student’s learning. Tests and exams correlate well with academic aptitudes, but not well with other soft skills.
  4. Grading in general is a bad practice. Students know when they have done well or not. They do not need to be told. At school, if parents want to know they should learn to ask their children how school is going, and students should be trained to be honest, since life tends to work out better that way.
  5. Relentless testing is deleterious to the less academically gifted students. There is a long tail in academic aptitude, and the students in this tail will often benefit from a kinder and more caring mode of learning. You do not have to be soft and woolly about this, it is a hard core educational psychology result: if you want the best for all students you need to treat them all as individuals. For some, tests are great, terrific! For others, tests and exams are positively harmful. You want to try and figure out who is who, at least if you are lucky enough to have small class sizes.
  6. For large class sizes, like at a university, do still treat all students individually. You can easily do this by offering a buffet of learning resources and modes. Do not, whatever you do, provide a single-mode style of lecture+homework+exam course. That is ancient technology, medieval. You have the Internet, use it! Gather vast numbers of resources with all different manners of approach to the subject you are teaching, then do not teach it! Let your students find their own way through all the material. This will slow down a lot of students — the ones who have been indoctrinated and trained to do only what they are told — but if you persist and insist they navigate your course themselves then they should learn deeper as a result.

Solving the “do what I am told” problem is in fact the very first job of an educator in my opinion. (For a long time I suffered from lack of a good teacher in this regard myself. I wanted to please, so I did what I was told, it seemed simple enough. But … Oh crap, … the day I found out this was holding me back, I was furious. I was about 18 at the time. Still hopelessly naïve and ill-informed about real learning.) If you achieve nothing else with a student, transitioning them from being an unquestioning sponge (or oily duck — take your pick) to being self-motivated and self-directed in their learning is the most valuable lesson you can ever give them. So give them it.

So I use a lot of tests, but not for grading. For grading I rely more on student journal portfolios. All the weekly homework sets are quizzes though, so you could criticise me for still using these for grading. As a percentage, though, the journals are more heavily weighted (usually 40% of the course grade). There are some downsides to all this.

  • It is fairly well established in research that grading using journals or subjective criteria is prone to bias. So unless you anonymise student work, you have a bias you need to deal with somehow before handing out final grades.
  • Grading weekly journals, even anonymously, takes a lot of time, about 15 to 20 times the hours that grading summative exams takes. So that’s a huge time commitment. So you have to use it wisely by giving very good quality early feedback to students on their journals.
  • I still have not found a way to test these methods easily. I would like to know quantitatively how much more effective journal portfolios are compared to exam-based assessments. I am not a specialist education researcher, and I research and write about a lot of other things, so it is taking me a while to get around to answering this.
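On the first of those downsides, one cheap way to grade journals blind is to work under opaque tokens and only map tokens back to students after the grades are fixed. This is a sketch of my own (the function name and the salted-hash token scheme are my inventions, not a vetted anonymisation protocol):

```python
import hashlib
import secrets

def anonymise(student_ids, salt=None):
    """Map student IDs to opaque tokens so journals can be graded blind.

    Returns (tokens_by_id, id_by_token) so grades can be de-anonymised
    after marking is complete."""
    salt = salt or secrets.token_hex(8)  # random salt so tokens are not guessable
    tokens = {sid: hashlib.sha256((salt + sid).encode()).hexdigest()[:8]
              for sid in student_ids}
    reverse = {tok: sid for sid, tok in tokens.items()}
    return tokens, reverse

ids = ["s1023", "s1024", "s1025"]
tokens, reverse = anonymise(ids)

# Grade under the token, then map the grade back to the student afterwards.
graded = {tokens["s1024"]: "A-"}
assert reverse[next(iter(graded))] == "s1024"
```

A fresh random salt per grading round means tokens cannot be memorised across assignments, which is the point of the exercise.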

I have not solved the grading problem; for now grades are required by the university, so legally I have to assign them. One subversive thing I am following up on is to refuse to submit singular grades. As a person with a physicist’s world-view I believe strongly in sound measurement practice, and we all know a single letter grade is not a fair reflection of a student’s attainment. At a minimum a spread of grades should be given to each student, or better, a three-point summary: LQ, Median, UQ. Numerical scaled grades can then be converted into a fairer letter grade range. And GPA scores can also be given as a central measure plus a spread measure.
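A minimal sketch of what I mean by a three-point summary, using Python’s standard library (the letter-grade cut-offs here are invented purely for illustration, not anyone’s official scheme):

```python
import statistics

def grade_summary(scores):
    """Three-point summary (LQ, median, UQ) of a student's numerical scores,
    as a fairer alternative to a single letter grade."""
    qs = statistics.quantiles(scores, n=4)  # returns [Q1, Q2, Q3]
    return {"LQ": qs[0], "median": qs[1], "UQ": qs[2]}

def to_letter(score, bands=((85, "A"), (70, "B"), (55, "C"), (40, "D"))):
    # Illustrative cut-offs only; any real scheme would set its own bands.
    return next((letter for cut, letter in bands if score >= cut), "E")

scores = [62, 71, 78, 55, 90, 84, 67, 73]  # one student's assessments
s = grade_summary(scores)
band = (to_letter(s["LQ"]), to_letter(s["median"]), to_letter(s["UQ"]))
```

For this student the summary comes out as a grade range rather than one letter, so a reader of the transcript can see the spread and not just a point estimate.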

I can imagine many students will have a moderate to large assessment spread, and so it is important to give them this measure; one in a few hundred students might statistically get very low grades by pure chance, when their potential is a lot higher. I am currently looking into research on this.
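That “pure chance” figure is easy to sanity-check with a toy measurement model. All the numbers below (the per-assessment noise, the number of assessments, the one-band threshold) are invented assumptions of mine, not calibrated data:

```python
import random

random.seed(1)

def unlucky_rate(n_students=10_000, n_assessments=6, noise_sd=10.0):
    """Toy model: each observed score = true ability + Gaussian noise.

    Counts the fraction of students whose *average* over all assessments
    lands a full letter band (>= 10 points) below their true ability."""
    unlucky = 0
    for _ in range(n_students):
        true = 70.0  # same true ability for everyone, for simplicity
        avg = sum(random.gauss(true, noise_sd)
                  for _ in range(n_assessments)) / n_assessments
        if avg <= true - 10.0:
            unlucky += 1
    return unlucky / n_students

rate = unlucky_rate()
```

With these made-up numbers the unlucky fraction comes out on the order of one student in a couple of hundred, which is at least in the same ballpark as the worry above; the point of the sketch is only that the worry is quantifiable, not that these parameters are right.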

OK, so in summary: even though institutions require a lot of tests you can work around the tests and still give students a fair grade while not sacrificing the true learning opportunities that come from the principle of eternal rediscovery. Eternal rediscovery is such an important idea that I want to write an academic paper about it and present it at a few conferences to get people thinking about the idea. No one will disagree with it. Some may want to refine and adjust the ideas. Some may want concrete realizations and examples. The real question is, will they go away and truly inculcate it into their teaching practices?

CCL_BY-NC-SA(https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode)

*      *       *

Coupling to the Universe — or “You Are You Because You Are You”

Carlo Rovelli can sure talk up a blizzard. (I’m reviewing his conference talk, “The preferred time direction in the dynamics of the full universe”.) For an Italian native speaker he can really weave a blinding spell in English.

He has me confused when he tries to explain the apparently low entropy Big Bang cosmology. He uses his own brand of relational quantum mechanics, I think, but it comes out sounding a bit circular or anthropomorphic. Yet earlier in his lectures he often takes pains to deny anthropomorphic views.

So it is quite perplexing when he tries to explain our perception of an arrow of time by claiming that “it is what makes us us.” Let me quote him, so you can see for yourself. He starts out by claiming the universe starts in a low entropy state only from our relative point of view. Entropy is an observer dependent concept. It depends on how you coarse grain your physics. OK, I buy that. We couple to the physical external fields in a particular way, and this is what determines how we perceive or coarse grain our slices of the universe. So how we couple to the universe supposedly explains why we see the entropy we do. If by some miracle we coupled more like antiparticles effectively travelling in the reverse time direction then we’d see entropy quite differently, one imagines. So anyway, Rovelli then summarizes:

[On slides: Entropy increase (passage of time) depend on the coarse graining, hence the subsystem, not the microstate of the world.] … “Those depend on the way we couple to the rest of the universe. Why do we couple to the rest of the universe in this way? Because if we didn’t couple to the rest of the universe this way we wouldn’t be us. Us as things, as biological entities that very much live in time coupled in a manner such that the past moves towards the future in a precise sense … which sense? … the one described by the Second Law of Thermodynamics.”

You see what I mean?

Maybe I am unfairly pulling this out of a rushed conference presentation, and to be more balanced and fair I should read his paper instead. If I have time I will. But I think a good idea deserves a clear presentation, not a rush job with a lot of vague wishy-washy babble, or obscuring in a blizzard of words and jargon.

OK, so here’s an abstract from an arxiv paper where Rovelli states things in written English:

“Phenomenological arrows of time can be traced to a past low-entropy state. Does this imply the universe was in an improbable state in the past? I suggest a different possibility: past low-entropy depends on the coarse-graining implicit in our definition of entropy. This, in turn depends on our physical coupling to the rest of the world. I conjecture that any generic motion of a sufficiently rich system satisfies the second law of thermodynamics, in either direction of time, for some choice of macroscopic observables. The low entropy of the past could then be due to the way we couple to the universe (a way needed for us doing what we do), hence to our natural macroscopic variables, rather than to a strange past microstate of the world at large.”

That’s a little more precise, but still no clearer on import. He is still really just giving an anthropocentric argument.

I’ve always thought science was at its best when removing the human from the picture. The problem for our universe should not be framed as one of “why do we see an arrow of time?” because, as Rovelli points out, for complex biological systems like ourselves there really is no other alternative. If we did not perceive an arrow of time we would be defined out of existence!

The problem for our universe should be simply, “why did our universe begin (from any arbitrary sentient observer’s point of view) with such low entropy?”

But even that version has the whiff of observer about it. Also, if you just define the “beginning” as the end that has the low entropy, then you are done; no debate. So I think there is a more crystalline version of what cosmology should be seeking an explanation for, which is simply, “how can any universe ever get started (from either end of a singularity) in a low entropy state?”

But even there you have a notion of time, which we should remove, since “start” is not a proper concept unless one already is talking about a universe. So the barest question of all perhaps, (at least the barest that I can summon) is, “how do physics universes come to exist?”

This does not even explicitly mention thermodynamics or an arrow of time. But within the question those concepts are embedded. One needs to carefully define “physics” and “physics universes”. But once that is done then you have a slightly better philosophy of physics project.

The more hard-core physicists, however, will never stoop to tackle such a question. They will tend to drift towards something where a universe is already posited to exist and to have had a Big Bang, and then they will fret and worry about how it could have a low entropy singularity.

It is then tempting to take the cosmic Darwinist route. But although I love the idea, it is another one of those insidious memes that is so alluring but in the cold dead hours of night, when the vampires of popular physics come to devour your life blood seeking converts, seems totally unsatisfying and anaemic. The Many Worlds Interpretation has its fangs sunk into a similar vein, which I’ve written about before.

cosmo_OnceUponATimeInRovellisUniverse

*      *       *

Going back to Rovelli’s project, I have this problem for him to ponder. What if there is no way for any life, not even in principle, to couple to the universe other than via the way we humans do, through interaction with strings (or whatever they are) via Hamiltonians and mass-energy? If this is true, and I suspect it is, then is not Rovelli’s “solution” to the low entropy Big Bang a bit meaningless?

I have a pithy way of summarising my critique of Rovelli. I would just point out:

The low entropy past is not caused by us. We are the consequence.

So I think it is a little weak for Rovelli to conjecture that the low entropy past is “due to the way we couple to the universe.” It’s like saying, “I conjecture that before death one has to be born.” Well, … duuuuhhh!

The reason my photo is no longer on Facebook is due to the way I coupled to my camera.

I am an X-gener due to the way my parents coupled to the universe.

You see what I’m getting at? I might be over-reaching into excessive sarcasm, but my point is just that none of this is good science. They are not explanations. It is just story-telling. Still, Rovelli does give an entertaining story if you are a physics geek.

So I had a read of Rovelli’s paper and saw the more precise statement of his conjecture:

Rovelli’s Conjecture: “Any generic microscopic motion of a sufficiently rich system satisfies the second law (in either time direction) for a suitable choice of macroscopic observables.”

That’s the sort of conjecture that says nothing. The problem is the “sufficiently rich” clause together with the “suitable choice” clause. You can generate screeds of conjectures with such a pair of clauses. The conjecture only has “teeth” if you define what you mean by “sufficiently rich” and if a “suitable choice” can be identified or motivated as plausible. Because otherwise you are not saying anything useful. For example, “Any sufficiently large molecule will be heavier than a suitably chosen bowling ball.”

*      *       *

Rovelli does provide a toy example to illustrate his notions in classical mechanics. He has yellow balls and red balls. The yellow balls have an attractor which gives them a natural second-law-of-thermodynamics arrow of time. The same box also has red balls with a different attractor which gives them the opposite arrow of time according to the second law. (Watching the conference video for this is better than reading the arxiv paper.) But “so what?”
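To be concrete about what such a toy model shows, here is a cartoon of my own with the same flavour (it is not Rovelli’s actual dynamics, and the attractor and noise parameters are invented): one family of balls diffuses outward while the other contracts onto an attractor, and a histogram coarse-graining assigns them opposite entropy trends over the very same time parameter.

```python
import math
import random

random.seed(0)

def coarse_entropy(xs, bins=20, lo=-3.0, hi=3.0):
    """Shannon entropy of a histogram coarse-graining of 1D positions."""
    counts = [0] * bins
    for x in xs:
        i = min(bins - 1, max(0, int((x - lo) / (hi - lo) * bins)))
        counts[i] += 1
    n = len(xs)
    return -sum(c / n * math.log(c / n) for c in counts if c)

# Yellow balls: start bunched at the origin and diffuse outward.
yellow = [0.0] * 500
# Red balls: start spread out and fall onto an attractor at the origin.
red = [random.uniform(-3, 3) for _ in range(500)]

s_yellow0, s_red0 = coarse_entropy(yellow), coarse_entropy(red)
for _ in range(200):
    yellow = [x + random.gauss(0, 0.05) for x in yellow]          # pure diffusion
    red = [0.97 * x + random.gauss(0, 0.005) for x in red]        # contraction + tiny noise
s_yellow1, s_red1 = coarse_entropy(yellow), coarse_entropy(red)
# Same box, same time parameter: yellow entropy rises, red entropy falls,
# i.e. the red family obeys the second law only in the reversed direction.
```

The sketch makes the coarse-graining dependence vivid, but notice it proves nothing about observers: the two entropy trends exist only relative to the histogram I chose, which is exactly the complaint below.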

Rovelli has constructed a toy universe that has entities that would experience opposite time directions if they were conscious. But there are so many things wrong with this example it cannot be seriously considered as a bulwark for Rovelli’s grander project. For starters, what is the nature of his Red and Yellow attractors? If they are going to act complicated enough to imbue the toy universe with anything resembling conscious life then the question of how the arrow of time arises is not answered, it just gets pushed back to the properties of these mysterious Yellow and Red attractors.

And if you have only such a toy universe without any noticeable observers then what is the point of discussing an arrow of time? It is only a concept that a mind external to that world can contemplate. So I do not see the relevance of Rovelli’s toy model for our much more complicated universe which has internal minds that perceive time.

You could say, in principle the toy model tells us there could be conscious observers in our universe who are experiencing life but in the reverse time direction to ourselves, they remember our future but not our past, we remember their future but not their past. Such dual time life forms would find it incredibly hard to communicate, due to this opposite wiring of memory.

But I would argue that Rovelli’s model does not motivate such a possibility, for the same reason as before. Constructing explicit models of different categories of billiard balls each obeying a second law of thermodynamics in opposite time directions in the same system is one thing, but not much can be inferred from this unless you add in a whole lot of further assumptions about what Life is, metabolism, self-replication, and all that. But if you do this the toy model becomes a lot less toy-like and in fact terribly hard to explicitly construct. Maybe Stephen Wolfram’s cellular automata can do the trick? But I doubt it.

I should stop harping on this. Let me just record my profound dissatisfaction with Rovelli’s attempt to demystify the arrow of time.

*      *       *

If you ask me, we are not at a sufficiently mature juncture in the history of cosmology and physics to be able to provide a suitable explanation for the arrow of time.

So I have Smith’s Conjecture:

At any sufficiently advanced enough juncture in the history of science, enough knowledge will have accumulated to enable physicists to provide a suitable explanation for the arrow of time.

Facetiousness aside, I really do think that trying to explain the low entropy big bang is a bit premature. It would be much better to be patient and wait for more information about our universe before attempting to launch into the arrow of time project. The reason I believe so is because I think the ultimate answers about such cosmological questions are external to our observable universe.

But whether they are external or internal, there is a wider problem to do with the nature of time and our universe. We do not know if our universe actually had a beginning, a true genesis, or whether it has always existed.

If the universe had a beginning then the arrow of time problem is the usual low entropy puzzle. But if the universe had no beginning then the arrow of time problem becomes a totally different question. There is even a kind of intermediate problem that occurs if our universe had a start but within some sort of wider meta-cosmos. Then the problem is much harder: that of figuring out the laws of this putative metaverse. Imagine the hair-pulling of cosmologists who discover this latter possibility as a fact about their universe (but I would envy them the sheer ability to discover the fact, it’d be amazing).

So until we know such a fundamental question I do not see a lot of fruitfulness in pursuing the arrow of time puzzle. It’s a counting your chickens before they hatch situation. Or should I say, counting your microstates before they batch.

*      *       *



Primacks’ Premium Simulations

After spending a week debating with myself about various Many Worlds philosophy issues  and other quantum cosmology questions, today I saw Joel Primack’s presentation at the Philosophy of Cosmology International Conference, on the topic of Cosmological Structure Formation. And so for a change I was speechless.

Thus I doubt I can write much that illumines Primack’s talk better than if I tell you just to go and watch it.

He, and colleagues, have run supercomputer simulations of gravitating dark matter in our universe. From their public website Bolshoi Cosmological Simulations they note: “The simulations took 6 million cpu hours to run on the Pleiades supercomputer — recently ranked as seventh fastest of the world’s top 500 supercomputers — at NASA Ames Research Center.”

To get straight to all the videos from the Bolshoi simulation go here (hipacc.ucsc.edu/Bolshoi/Movies.html).

cosmos_BolshoiSim_MD_cluster01_gas_sn320

MD4 Gas density distribution of the most massive galaxy cluster (cluster 001) in a high resolution resimulation, x-y-projection. (Kristin Riebe, from the Bolshoi Cosmological Simulations.)

The filamentous structure formation is awesome to behold. At times the structures look like living cellular structures in the movies that Primack has produced, only the time steps in his simulations are probably about a million years each. For example, one simulation is called the Bolshoi-Planck Cosmological Simulation — Merger Tree of a Large Halo. The unit the page says they resolve is 10^10 Msun halos. Astronomers use the symbol Msun for one solar mass (equal to our Sun’s mass), so if I am reading this page correctly the finest structures resolvable in their movie still images are dark matter halos of about ten billion solar masses, a galaxy-scale clump of dark matter rather than anything Sun-sized. This is dark matter they are visualizing, so the stars and planets we can see just get completely obscured in these simulations (since the star-like matter is less than a few percent of the mass).

True to my word, that’s all I will write for now about this piece of beauty. I need to get my speech back.

*      *       *

Oh, but I do just want to hasten to say that the image I pasted in above is NOTHING compared to the movies of the simulations. You gotta watch the Bolshoi Cosmology movies to see the beauty!

*       *       *



Oil Pilots and Many World Probability

Continuing my ad hoc review of Cosmology and Quantum Foundations, I come to Max Tegmark and Simon Saunders, who were the two main champions of Many Worlds Interpretations present at this conference. But before discussing ideas arising from their talks, I want to mention an addendum to the Hidden Variables and de Broglie-Bohm pilot wave theory that I totally coincidentally came across the night after writing the previous post (“Gaddamit! Where’d You Put My Variables”).

Fluid Dynamics and Oil Droplets Model de Broglie-Bohm Pilot Waves

This is some seriously recent and immature research, but it is fascinating. And it is really simple to explain, so it’s cool. Here’s the link: Fluid Tests Hint at Concrete Quantum Reality.

quanta_OilDropletPilotwaves

Oil droplets surfing ripples on a fluid surface exhibit two-slit interference. Actually not! They follow chaotic trajectories that reproduce interference patterns only statistically, but there is no superposition at all for the oil droplet, only for the wave ripples. Remarkably similar qualitatively to de Broglie-Bohm pilot wave theory.

You delicately place oil droplets on an immiscible fluid surface (water, I suppose) and the droplets bounce around, creating waves in the fluid surface. Then, lo and behold! Send an oil droplet through a double slit barrier and it goes through one slit, right? Shocking! But then hold on to your skull: after traversing the slit the oil droplet chaotically meanders around, surfing on the wave ripples spreading out from the double slit, ripples which the droplet itself was responsible for generating before it got to the slits.

Do this for many oil droplets and you will see the famous statistical build-up of an interference pattern at a distant screen, but here with classical oil droplets that can be observed to smithereens without destroying the superposition of the fluid waves, so you get purely classical double slit interference. Just like the de Broglie-Bohm pilot wave theory predicts for the Bohmian mechanics view of quantum mechanics. I say “just like” because clearly this is macroscopic in scale and the mechanism of pilot waves is totally different in the quantum regime. Nonetheless, it is a clear condensed matter physics model for pilot wave Bohmian quantum mechanics.
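The statistical build-up itself is easy to illustrate. The sketch below ignores the hydrodynamics entirely and just samples definite landing positions from the idealised far-field two-slit fringe intensity, cos²(πx) in units of the fringe spacing (a textbook formula standing in for the real droplet dynamics): each droplet lands at one definite spot, and the fringes appear only in the accumulated counts.

```python
import math
import random

random.seed(42)

def intensity(x):
    """Idealised far-field two-slit fringe intensity (no single-slit
    envelope), with x measured in units of the fringe spacing."""
    return math.cos(math.pi * x) ** 2

def sample_positions(n, x_max=2.0):
    """Rejection-sample n definite landing positions on the screen,
    one droplet at a time."""
    hits = []
    while len(hits) < n:
        x = random.uniform(-x_max, x_max)
        if random.random() < intensity(x):  # intensity is already <= 1
            hits.append(x)
    return hits

hits = sample_positions(5000)
# Bright fringes sit at integer x, dark fringes at half-integer x.
near_bright = sum(1 for x in hits if abs(x) < 0.1)
near_dark = sum(1 for x in hits if abs(abs(x) - 0.5) < 0.1)
```

Any single `hits[i]` is just one dot with a definite position; only the histogram of thousands of dots shows fringes, which is the whole point of the droplet experiments.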

(There is a recent decades trend in condensed matter physics where phenomena qualitatively similar to quantum mechanics or black hole phenomenology, or even string theory, can be modelled in solid state or condensed matter systems. It’s a fascinating thing. No one really has an explanation for such quasi-universality in physics. I guess, when different systems of underlying equations give similar asymptotic behaviour then you have a chance of observing such universality in disparate and seemingly unrelated physical systems. One example Susskind mentions in his Theoretical Minimum lectures is the condensed matter systems that model Majorana fermions. It’s just brilliantly fascinating stuff. I was going to write a separate article about this. Maybe later. I’ll just mention that although such condensed matter models have to be taken with a grain of salt, to whatever extent they can recapitulate the physics of quantum systems you have this tantalising possibility of being able to construct low energy desktop experiments that might, might, be able to explore extreme physics such as superstring regimes and black hole phenomenology, only with safe and relatively affordable experiments. I’m no futurist, but as protein biology promises to be the biology of the 21st century, maybe condensed matter physics is poised to take over from particle accelerators as the main physics laboratory of the 21st century? It’d be kinda’ cool wouldn’t it?)

The oil droplet experiments are not a perfect model for Bohmian mechanics since these pilot waves do not carry other quantum degrees of freedom like spin or charge.

Normally I would scoff at this and say, “nice, but so what?” Physics, and science in general, is rife with examples of disparate systems that display similarity or universality. It does not mean the fundamental physics is the same. And in the oil droplet pilot wave experiments we clearly have a hell of a lot of quantum mechanics phenomenology absent.

But I did not scoff at this one.

The awesome thing about this oil droplet interference experiment is that there is a clear mechanism that could recapitulate a lot of the same phenomenology at the Planck scale, and hence it offers an intriguing and tantalising alternative explanation for quantum mechanics as an effective theory that emerges from a more fundamental theory of Planck scale spacetime dynamics (geometrodynamics, to borrow the terminology of Wheeler and Misner). Hell, I will not even mention “quantum gravity”, since that’d take me too far afield, but dropping that phrase in here is entirely appropriate.

The clear Planck scale phenomenology I am speaking of is the model of spacetime as a superfluid. It will support non-dissipative pilot waves, which are therefore nothing less than subatomic gravitational waves of a sort. Given the weakness of gravity you can imagine how fragile are the superpositions of these spacetime or gravitational pilot waves. Not hard to destroy coherent states.

Then, of course, we already have the emerging theory of ER=EPR, which explains entanglement using a type of geometrodynamics. If you start to package together everything that you can get out of geometrodynamics then you begin to see a jigsaw puzzle filling in, which hints that maybe the whole gamut of quantum physics phenomenology at the Planck scale can be adequately explained using spacetime geometry and topology.

One big gap in geometrodynamics is the phenomenology of particle physics: gauge symmetries, charges, and the rest. It will take a brave and fortified physicist to tackle all these problems. If you read my blog you will realise I am a total fan of such approaches. Even if they are wrong, I think they are huge fun to contemplate and play with, even if only as mathematical diversions. So I encourage any young mathematically talented physicists to dare to go into active research on geometrodynamics.

The Many Worlds Guys

So what about Tegmark and Saunders? Well, by this point I had kind of exhausted myself today and forgot what I was going to write about. Saunders mentioned something about frequentist probability having serious issues, and that frequentism could not be a philosophical basis for probability theory. I think that’s a bit unfair. Frequentism works in many practical cases. I don’t think it has to be an over-arching theory of probability. It works when it works.
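The sense in which frequentism “works when it works” is just the law of large numbers: relative frequencies converge on the underlying probability. A minimal sketch, with a made-up coin bias of 0.3:

```python
import random

random.seed(42)

# A biased coin with a known underlying probability. The frequentist
# "definition" of that probability -- the long-run relative frequency
# of heads -- converges on it as the number of trials grows.
p_heads = 0.3
flips = [random.random() < p_heads for _ in range(100_000)]
estimate = sum(flips) / len(flips)   # close to 0.3 for large samples
```

Perfectly serviceable in the lab; the philosophical trouble only starts when you ask what probability means for a single unrepeatable event, like one branch of a universe.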

Same in lots of science. Fourier transforms work on periodic signals, and FTs can compress non-periodic signals too, but not perfectly. Newtonian physics works bloody well in many circumstances, but is not an all-encompassing theory of mechanics. Natural selection works to explain variation and speciation in living systems, but it is not the whole story; it cannot happen without some supporting mechanism like DNA replication and protein synthesis. You cannot explain speciation using natural selection alone; it’s just not possible, because natural selection is too general and weak to be a full explanatory theory.

It’s funny too. Saunders seems to undermine a lot of what Tegmark was trying to argue in the previous talk at the conference. Tegmark was explicitly using frequentist counting in his arguments that Copenhagen is no better or worse than Many Worlds from a probabilistic perspective. I admit I do not really know what Saunders was on about. If you can engineer a proper measure then you can do probability. I think maybe Tegmark can justify some sort of measure on MWI space. Again, I do not really know much about measure theory for MWI space. Maybe it is an open problem and Tegmark is stretching credibility a bit?

*      *       *


CCL_BY-NC-SA(https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode)

var MyStupidStr = "Gadammit! Where'd You Put My Variables!?";

This WordPress blog keeps morphing from Superheroes and SciFi back to philosophy of physics and other topics. So sorry to readers expecting some sort of consistency. This week I’m back with the Oxford University series of Cosmology and Quantum Foundations lectures. Anthony Valentini gives a talk about hidden variables in cosmology.

The basic idea Valentini proposes is that we could be living in a deterministic cosmos, but we are somehow trapped in a region of phase space where quantum indeterminism reigns. In our present epoch there are hidden variables, but they cannot be observed, not even indirectly, so they have no observable consequences, and so Bell’s Theorem and Kochen-Specker and the rest of the “no-go” theorems associated with quantum logic hold true. Fine, you say, but then really you’re saying there effectively are no Hidden Variables (HV) theories that describe our reality? No, says Valentini. The hidden variables would be observable if the universe were in a different state, the other phase. How might this happen? And what are the consequences? And is this even remotely plausible?

Last question first: Valentini thinks it is testable using the microwave cosmic background radiation. Which I am highly sceptical about. But more on this later.

[Image: cosmol_Valentin_all.dof.have.relaxed]

The idea of non-equilibrium Hidden Variable theory in cosmology. The early universe violates the Born Rule and hidden variables are not hidden. But the violent history of the universe has erased all pilot wave details and so now we only see non-local hidden variables which is no different from conventional QM. (Apologies for low res image, it was a screenshot.)

How Does it Work?

How it might have happened is that the universe as a whole might have two sorts of regimes (at least, maybe more), one of which is highly non-equilibrium and extremely low entropy. In this region or phase the hidden variables would be apparent and Bell’s Theorem would be violated. In the other type of phase the universe is in equilibrium, at high entropy, and hidden variables cannot be detected, so Bell’s Theorem remains true (for QM). Valentini claims that early during the Big Bang the universe may have been in the non-equilibrium phase, and so some remnants of this HV physics should exist in the primordial CMB radiation. But you cannot just say this and get hidden variables to be unhidden. There has to be some plausible mechanism behind the phase transition, or the “relaxation” process as Valentini describes it.

The idea being that the truly fundamental physics of our universe is not fully observable because the universe has relaxed from non-equilibrium to equilibrium. The statistics in the equilibrium phase get all messed up and HV’s cannot be seen. (You understand that in the hypothetical non-equilibrium phase the HV’s are no longer hidden, they’d be manifest ordinary variables.)

Further Details from de Broglie-Bohm Pilot Wave Theory

Perhaps the most respectable HV theory is the (more or less original) de Broglie-Bohm pilot wave theory. It treats Schrödinger’s wave function as a real potential in a configuration space which somehow guides particles along deterministic trajectories. Sometimes people postulate Schrödinger time evolution plus an additional pilot wave potential. (I’m a bit vague about it since it’s a long time since I read any pilot wave theory.) But to explain all manner of EPR experiments you have to go to extremes and imagine this putative pilot wave as really an all-pervading information storage device. It has to guide not only trajectories but also orientations of spin and units of electric charge and so forth, basically any quantity that can get entangled between relativistically isolated systems.

This seems like unnecessary ontology to me. Be that as it may, the Valentini proposal is cute and something worth playing around with I think.

So anyway, Valentini shows that if there is indeed an equilibrium ensemble of states for the universe then details of particle trajectories cannot be observed and so the pilot wave is essentially unobservable, and hence a non-local HV theory applies which is compatible with QM and the Bell inequalities.

It’s a neat idea.

My bet would be that more conventional spacetime physics which uses non-trivial topology can do a better job of explaining non-locality than the pilot wave. In particular, I suspect requiring a pilot wave to carry all relevant information about all observables is just too much ontological baggage. Like a lot of speculative physics thought up to try to solve foundational problems, I think the pilot wave is a nice explanatory construct, but it is still a construct, and I think something still more fundamental and elementary can be found to yield the same physics without so many ad hoc assumptions.

To relate this to very different ideas, what the de Broglie-Bohm pilot wave reminds me of is the inflaton field postulated in inflationary Big Bang models. I think the inflaton is a fictional construct, yet its predictive power has been very successful. My understanding is that instead of an inflaton field you can use fairly conventional and uncontroversial physics to explain inflationary cosmology, for example the Penrose CCC (Conformal Cyclic Cosmology) idea. This is not popular, but it is conservative physics and requires no new assumptions. As far as I can tell CCC “only” requires a long but finite lifetime for electrons, which should eventually decay by very weak processes. (If I recall correctly, in the Standard Model the electron does not decay.) The Borexino experiment in Italy has measured the lower limit on the electron lifetime to be longer than 66,000 yottayears (about 6.6 × 10^28 years), but currently there is no upper limit.

And for the de Broglie-Bohm pilot wave I think the idea can be replaced by spacetime with non-trivial topology, which again is not very trendy or politically correct physics, but it is conservative and conventional and requires no drastic new assumptions.

What Are the Consequences?

I’m not sure what the consequences of cosmic HV’s are for current physics. The main consequence seems to be an altered understanding of the early universe, but nothing dramatic for our current and future condition. In other words, I do not think there is much use for cosmic HV theory.

Philosophically I think there is some importance, since the truth of cosmic HV’s could fill in a lot of gaps in our civilisation’s understanding of quantum mechanics. It might not be practically useful, but it would be intellectually very satisfying.

Is There Any Evidence for these Cosmic HV’s?

According to Valentini, supposing at some time in the early Big Bang there was non-equilibrium, hence classical physics more or less, then there should be classical perturbations frozen into the cosmic microwave background radiation from this period. This is due to a well-known result in astrophysics whereby perturbations on so-called “super-Hubble” length scales tend to be frozen, i.e., they will still exist in the CMB.

Technically, what Valentini et al. predict is a low-power anomaly at large angles in the spectrum of the CMB. That’s fine and good, but (contrary to what Valentini might hope) it is not evidence of non-equilibrium quantum mechanics with pilot waves. Why not? Simply because a hell of a lot of other things can account for observed low-power anomalies. Still, it’s not all bad: any such evidence would count as Bayesian inference support for pilot wave theory. Such weak evidence abounds in science, and would not count as a major breakthrough, unfortunately (because who doesn’t enjoy a good breakthrough?). I’m sure researchers like Valentini, in any science, who lack solid evidence for a theory will admit behind closed doors the desultory status of such evidence, but they do not often advertise it as such.
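To see how weak that sort of support is, here is a toy Bayesian update. All the numbers are invented purely for illustration; the point is only that when ordinary physics can also produce the anomaly, observing it nudges the posterior rather than clinching anything.

```python
# Toy Bayesian update: how much does observing a low-power CMB anomaly
# move us toward a pilot-wave-style model? The priors and likelihoods
# below are made-up numbers, chosen only to illustrate the logic.
prior_pilot = 0.1                 # prior credence in non-equilibrium HV
prior_standard = 0.9              # prior credence in ordinary physics
p_anomaly_given_pilot = 0.8       # the HV model predicts the anomaly
p_anomaly_given_standard = 0.4    # but ordinary physics can produce it too

# Bayes' theorem: P(pilot | anomaly) = P(anomaly | pilot) P(pilot) / P(anomaly)
evidence = (prior_pilot * p_anomaly_given_pilot
            + prior_standard * p_anomaly_given_standard)
posterior_pilot = prior_pilot * p_anomaly_given_pilot / evidence
# posterior_pilot is about 0.18: a nudge up from 0.1, not a discovery.
```

The anomaly counts as evidence, technically, but because the alternative explains it almost as well, the update is tiny. That is the "desultory status" in numbers.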

It seems to me that so many things can be “explained” by statistical features in the CMB data. I think a lot of theorists might be conveniently ignoring the uncertainties in the CMB data. You cannot just take this data raw and look for patterns and correlations and then claim they support your pet theory. At a minimum you need to use the uncertainties in the CMB data and allow for the fact that your theory is not truly supported by the CMB when alternatives to your pet theory are also compatible with the CMB.

I cannot prove it, but I suspect a lot of researchers are using the CMB data in this way. That is, they can get the correlations they need to support their favourite theory, but if they include uncertainties then the same data would support no correlations. So you get a null inconclusive result overall. I do not believe in HV theories, but I do sincerely wish Valentini well in his search for hard evidence. Getting good support for non-mainstream theories in physics is damn exciting.

*      *       *

Epilogue — Why HV? Why not MWI? Why not …

At the same conference Max Tegmark polls the audience on their favoured interpretations of QM. The very fact people can conduct such polls among smart people is evidence of a real science of scientific anthropology. It’s interesting, right?! The most popular was Undecided=24. Many Worlds=15. Copenhagen=2. Modified dynamics (GRW)=0. Consistent Histories=0. Bohm (HV)=5. Relational=2. Modal=0.

This made me pretty happy. To me, being undecided is the only respectable position one can take at this present juncture in the history of physics. I do understand, of course, that many physicists are just voting for their favourites. Hardly any would stake their life on their view being correct. Still, it was heart-warming to see so many taking the sane option seriously.

I will sign off for now by noting a similarity between HV and MWI. There’s not really all that much they have in common, but they both ask us to accept some realities well beyond what conservative, standard, interpretation-free quantum mechanics demands. What I mean by interpretation-free is just minimalism, which in turn is simply whatever modelling you need to actually make quantum mechanical predictions for experiments; that is, the minimal stuff that any metaphysical interpretation sitting on top of QM would need to explain or account for. There is, of course, no such interpretation, which is why I can call it interpretation-free. You just go around supposing (or actually not “supposing” but merely “admitting the possibility”) that the universe IS this Hilbert space and that our reality IS a cloud of vectors in this space that periodically expands and contracts in consistency with observed measurement data and unitary evolution, so that it all hangs together and a consistent story can be told about the evolution of vectors in this state space that we take as representing our (possibly shared) reality (no need for solipsism).

I will say one nice thing about MWI: it is a clean theory! It requires a hell of a lot more ontology, but in some sense nothing new is added either. The writer who most convinces me I could believe in MWI is David Deutsch. Perhaps logically his ideas are the most coherent. But what holds me back and forces me to be continually agnostic for now (and yes, interpretations of QM debates are a bit quasi-religious, in the bad meaning of religious, not the good) is that I still think people simply have not explored enough normal physics to be able to unequivocally rule out a very ordinary explanation for quantum logic in our universe.

I guess there is something about being human that desires an interpretation more than this minimalism. I am certainly prey to this desire. But I cannot force myself to swallow either HV(Bohm) or MWI. They ask me to accept more ontology than I am prepared to admit into my mind space for now. I do prefer to seek a minimalist leaning theory, but not wholly interpretation-free. Not for the sake of minimalism, but because I think there is some beauty in minimalism akin to the mathematical idea of a Proof from the Book.



Collapsitude of the Physicists

The series of Oxford lectures I am enjoying in my lunch hours is prompting a few blog ideas. The latest is the business of the collapse of the wavefunction. So much has been written about the measurement problem in quantum mechanics that I surely do not need to write a boring introduction to it all. So I will just assume you can jump in cold and use Wikipedia or whatever to warm up when needed.

[Image: quanta_decoherance_awyoumademecollapse]

There were no simple cartoons capturing the essence of the “collapse of the wavefunction”, so I made up this one.

By the way, the idea behind my little cartoon there is that making a measurement need not catastrophically collapse a system into a definite eigenstate, as most textbooks routinely describe. This (non-total collapse) is depicted as the residual pale pink region, which entertains states in phase space that still have finite probability amplitudes. We never really notice them subsequently because the amplitudes for these regions are too darn small to detect in any feasible future measurements. Every measurement has finite precision, so you cannot use an actual real-world experiment, with all its messy brains and Weetbix and jelly, to completely prepare a pure state. Textbooks on QM are like this: they take so many liberties with the reality of an experimental situation that the theoreticians tend to lose touch with reality, especially when indulging in philosophy while calling it physics.
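A hedged little sketch of that non-total collapse: model the finite-precision detector as a Kraus operator rather than an exact projector. The imperfection parameter `eps` is an arbitrary invented number; the un-measured branch keeps a tiny but nonzero amplitude, the pale pink region of the cartoon.

```python
import numpy as np

# Start in an equal superposition of two outcomes.
psi = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)

# A finite-precision ("soft") measurement that reported outcome 0.
# Modelled as a Kraus operator, it concentrates amplitude on outcome 0
# but leaks a small residue into outcome 1, unlike an exact projector.
eps = 1e-3                      # detector imperfection (invented value)
K = np.diag([1.0, eps])
post = K @ psi
post /= np.linalg.norm(post)    # renormalise the post-measurement state

residual = abs(post[1]) ** 2    # tiny surviving probability for outcome 1
```

For all practical purposes `post` behaves like the eigenstate, but the residual amplitude never literally vanishes, which is exactly the textbook liberty being complained about above.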

The issue is rife in many lectures I am watching; one is Simon Saunders’ talk on “The Case for Many Worlds”. He poses a sequence of questions for his audience:

  • Why does the collapse of the state happen?
  • When does it happen?
  • To what state does the state collapse?

He presages this by polling his audience on whether they believe the proverbial Schrödinger’s Cat is exclusively either alive or dead before the observer looks inside the diabolical box with the vial of radioactively triggered nerve gas. Some half of the audience believe the Cat was either alive or dead (i.e., not in a superposition). He then asks what about if the box was not an isolated box but a broom cupboard? Not many people change their mind! But the point was that the cupboard is, surely, in no way or form now isolated from the external universe, and surely has enough perturbations to destroy any delicate entangled superposed states.  But I guess some lovers of cats are hard to budge.

Then he asks, “well, what about if the experiment is being done up on a branch of a tree in a forest with no observer anywhere around?” (The clear unspoken implication is to think about the proverbial tree falling …). He cites a quantum information theory conference, 80% of whose audience believed the Cat would then still not be in an exclusive XOR state, i.e., would still be in a superposition. Which is quite remarkable. Maybe they never heard of the phenomenon of environmental decoherence?

It’s at such times I wonder if Murray Gell-Mann has had an unhealthy influence on how physicists take their philosophy? In much of his popular style writing Gell-Mann has argued for environmental decoherence. The idea though is that there is no collapse, not ever. The universe remains mixed in a superposition of a giant cosmic wavefunctional state. Gell-Mann is not the sole culprit of course, but he’s the head honcho by fame if nothing else. And boy! You don’t want to go up head-to-head arguing against Gell-Mann! You’ll get your ears pulverised by pressure waves of unrelenting egg-headedness.

To be fair and balanced here’s a book cover that looks like it would be a juicy read if you really want to tangle with environmental decoherence as the explanation for classical physics appearances.

[Image: quanta_Joos_bookcover_EnvironmentalDecoherance]

Looks like a good read. The lead author is Erich Joos.

I just want to warn you, if you ever feel like you are in a superposition of states then there are some medications that can recover classical physics if you find it too nauseous.

You do not have to take wavefunctions literally. They are just computational devices. The mathematical tool used to model quantum mechanics is not the thing itself that we are trying to describe and model. The point is that whatever the universe is, it must be described by a wavefunction or some equivalent modelling that enjoys a superposition of classical states. That’s what makes the world quantum mechanical: there are classical-like states that get all tensored up in a superposition of some form, and whether you choose to describe this mathematically by a wavefunction, or by matrices, or by Dirac bra and ket vectors in a Hilbert space is largely immaterial.

Many Worlds theorists have a fairly similar outlook to the decoherence folks. Although at some root level their interpretations differ, or are even perhaps empirically incompatible in principle (I’m not sure about that?), I think both views share the germ of the idea that there really is no collapse of state space. In environmental decoherence a measurement merely entangles the system with more stuff, and so gazillions of new things are now entangled, and the whole lot only appears to behave more classically as a result. But there is still superposition; it is only that so many of the coefficients in the linear superposition have shrunk to near zero that the overall effect is classical-like collapse. Then of course Schrödinger evolution picks up after the measurement is done, and isolation can gradually be re-established around some experiment, … and so on and so forth.
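That “shrunk to near zero but never exactly zero” story can be sketched in a few lines for a single qubit density matrix. The exponential damping of the off-diagonal terms is the standard decoherence caricature; the decay rate here is an arbitrary invented number.

```python
import numpy as np

# A single qubit in an equal superposition |+> = (|0> + |1>)/sqrt(2),
# written as a density matrix. The off-diagonal terms are the coherences.
rho0 = np.array([[0.5, 0.5],
                 [0.5, 0.5]], dtype=complex)

def decohere(rho, gamma_t):
    # Caricature of environmental decoherence: coupling to an environment
    # exponentially suppresses the coherences while leaving the diagonal
    # populations (the classical-looking probabilities) untouched.
    out = rho.copy()
    out[0, 1] *= np.exp(-gamma_t)
    out[1, 0] *= np.exp(-gamma_t)
    return out

rho_late = decohere(rho0, gamma_t=20.0)
# rho_late is diagonal to any feasible measurement precision, yet the
# off-diagonal entries are merely tiny, never exactly zero.
```

So the “collapse” here is only apparent: the state looks like a classical mixture of heads and tails, but the superposition is still there in the exponentially small tails.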

Here’s my penny’s worth on this. I’ve become a firm proponent of ER=EPR. So I figure entanglement is as near to wormhole formation as you wanna get. You can take this literally or as merely computationally convenient. For the time being I’m a literalist on it, which means I’ll change my mind if evidence mounts to the contrary, but I think it is fruitful to take ER=EPR at more or less face value and see where it leads us.

ERrrr … who just collapsed me? You fiend!

I am also favouring something like gravitational collapse ideas. These seem to have a lot of promise, including (and this is a big selling point for me) the possibility of a link with ER=EPR. For one: if entanglement is established via ER bridges, then probably collapse of superposition can be effected by break-up of the wormholes. It seems a no-brainer to me. Technical issues aside. There might be some bugger of mathematical devilishness that renders this all nonsense. But I’m in like with the ideas and the connections.

Ergo I do not subscribe to environmental decoherence and the eternal superposition of the cosmos. Ergo again I do not subscribe to Many Worlds interpretations. Not that physics foundations is about popularity contests. But I think, when/if experimental approaches to these questions become possible I would be wanting to put research money into rigorously testing gravitational collapse and (if you deign to be a bit simplistic) also ER=Superposition, and therefore “NoER=Collapse”.

Well, that’s a smidgen of my thoughts for now on record. I think there are so many vast unexplored riches in fundamental theories and ideas of spacetime and particle physics that we do not yet need to reach out to bizarre outlandish interpretations of quantum mechanics. Bohr was the original sinner here. But pretty much every physicist who has dabbled in metaphysics and sought a valid interpretation of quantum mechanics has collapsed to the siren of the absurd ever since. This includes all those who followed Feynman’s dictum of forget about an interpretation. I think such non-interpretations are just as silly as the others.

Actually I’m not sure why I’ve characterised this as Feynman’s dictum. To be fair he did not say anything so extreme. He just marvelled at nature and warned physicists not to get into the mode of trying to tell nature what to do:

“We are not to tell nature what she’s gotta be. … She’s always got better imagination than we have.”

— R.P. Feynman, in the Sir Douglas Robb Lectures, University of Auckland (1979).

Man, I LOVED those lectures. My high school physics teacher John Hannah exposed our class to Feynman. Those were some of the best days of my life. The opening up of the beauty of the universe to my inner eyes. Here’s another favourite passage from those lectures:

“There’s a kind of saying that you don’t understand its meaning, ‘I don’t believe it. It’s too crazy. I’m not going to accept it.’ … You’ll have to accept it. It’s the way nature works. If you want to know how nature works, we looked at it, carefully. Looking at it, that’s the way it looks. You don’t like it? Go somewhere else, to another universe where the rules are simpler, philosophically more pleasing, more psychologically easy. I can’t help it, okay? If I’m going to tell you honestly what the world looks like to the human beings who have struggled as hard as they can to understand it, I can only tell you what it looks like.”
— R.P. Feynman (b.1918–d.1988).

Feynman actually said, “It’s the woy nature woiks.”

*      *       *

