Reasoning to the Extreme, or Descartes’ Better Dictum

Reason is not the opposite of spirituality. Reason is the opposite of folly and ignorance coupled with prejudice and superstition. In other words, in moral and spiritual language reason is a good. People often fail to appreciate this (witness all the atheists who rant that spirituality is an illusion, or that it can be grounded in science alone). Human reasoning is, of course, imperfect, so one cannot automatically and mechanically reason one’s way by logic and empirical science towards truth and morality (although some are trying, the atheists again, with some successes and, for the most part, noble motives; I applaud their efforts). Although, if the militant atheists are trying to derive morality from evolutionary principles in order to exorcise religion from society, then I think they do not have the noblest motives at heart, because such attempts ignore the slim possibility that religion was never bad in itself, it just gets corrupted over time by ordinary humans. I think anyone with a fair and open mind will realise that the origins of most major religions were quite pure and good; you just have to read past all the fire-and-brimstone decoration and see through to the essence of the original teachings, which invariably contain both universal ideals and social teachings that were relevant only to the time and age in which they were revealed. However, that is not my focus for today.

My topic for this post sounds somewhat alarming, but bear with me; I hope to convince even myself of it by the end (although I am initially sceptical that I can). What I hope to achieve is a convincing argument that Reasoning which approaches perfection is a spiritual virtue, a human good, in fact a universal good, and that if sound and judicious reasoning is taken to the extreme we arrive at a spiritual state of truth, beauty, justice, wisdom, compassion and kindness. A very short version of this thesis: a perfect reasoner (even without omniscient foresight) will in general evolve towards a state of perfect honesty. Once perfect honesty is admitted, the other spiritual attributes will almost inevitably follow.

Thesis of Ultra-Rationality

The thesis can be stated succinctly: “An ultra-rationalist eventually becomes spiritually minded.”

Being Spiritually Minded

I know there is a colloquial use of the word “spirit” which connotes some kind of ethereal substance, like a ghost or a fairy. This is absolutely not what I mean by the word spirit. Just want to make that perfectly clear.

For me spirit is not a substance. It is an abstractum, a state of mind, a condition of thought. Yet something must exist in order to have subjective thoughts, like a brain. Brains are fairly concrete substances, I think you’d agree. And yet the human spirit shines through the brain somehow; abstract thoughts crystallize into concrete reality through some intermediary between our brains and the world of ideas. What is “the world of ideas”? No one knows. But we all seem to have conscious access to abstract ideas, like perfect circles, transcendental numbers, the eternal quality of truth and justice. Some people call the realm of ideas the Platonic realm, but they cannot tell us what it is exactly. Some refer to it as the Mindscape or sometimes Mindspace. But these are just names. You can name anything to pretend it is real, but that does not make it real. However, I do believe there is something very useful and possibly “True” about the concept of an abstract realm of ideas, and I certainly think there is a lot of practical (and theoretical) use for a closely related, more restricted, notion of a mathematical platonic realm. I like the phrase “Mindscape” because it helps to remind me not to assume it is a geometric space like spacetime (although maybe it is? In an abstract mathematical sense every set of relations between identifiable “things” is some kind of geometric space, at some level). For me, the Mindscape includes the mathematical platonic realm.

OK, so we seem to need some substrate (some kind of substance, be it physical or otherwise) in order to metaphorically “put fire into the equations”, in other words, to translate spirit into concrete thought, action and behaviour. In our particular physical world there are hard scientific findings closing in on how conscious thought operates, which suggest the brain (neural activity) is not the complete story. The science is very young, but I suspect that over the next decades or centuries science will be able to reveal a lot more about what consciousness is not, meaning that I think we will find consciousness is not a deterministically driven physical process, but instead must irrevocably involve a subtle and complex feedback that traverses time and space. There are thus many subtleties about human consciousness and human spiritual ideals that science is far from understanding. But whatever we eventually find, I think it will turn out to be obvious to future scientists that human spirituality is not completely derived from physical principles, and that there really is some kind of connection between brain states and the abstract realm of ideas that I am here referring to as the Mindscape. The nature of this connection is, at the present time, quite mysterious and unfathomable, not only to scientists, but to pretty much everyone! If mystics and dreamers had a good grasp of the way humans perceive universal truths and concepts like mathematical abstractions and spiritual abstractions, then they should be able to tell us. The fact that they cannot is, to me, proof they really have no clue.

One cannot easily hide behind such excuses as, “well, I actually do understand these mysteries of yours, but I do not have the words to describe them to you.” To me that sort of evasion is just disingenuous or delusional thinking. Although, I will concede the possibility that a rare and talented individual may have such penetrating insight into the mysteries of mind and consciousness that it cannot be put into words. I am just sceptical that people who claim such insights are actually those rare gems of wisdom. And I think even if they cannot put their ideas into words, they should have the capacity to explain a few of the larger principles in metaphorical or allegorical terms that we can begin to grasp. (I think you can often just tell when someone is delusional. I do not have an algorithm or chemical test for it, but if someone approaches you and starts explaining their theory of consciousness to you, it should only take a minute or two to decide if they are genuinely insightful or just full of fanciful nonsense.)

Above I wrote, “For me spirit is not a substance”, but that’s not just my view. I also have a few like-minded friends who are hard-nosed scientists and yet who also think there is more to the human condition than mere physical biology. These are people who, like the oft-cited contemporary philosopher David Chalmers, “take consciousness seriously”. By this he means we do not lightly dismiss consciousness as a bunch of illusions played upon the brain by the brain. We seek to answer, or at least understand, why subjective phenomenal experiences can exist in a world that science describes in purely objective terms (the “redness of red”, the searing pain of a knife cut dosed with iodine, the “pain of loss”, the intoxicating smell of coffee, all varieties of mental qualia).

What I ask you to consider, to take very seriously, is the idea that while the brain definitely represents the patterns of our thoughts, the brain’s activities are not the reality of our subjective thought; there is still something more to human thought for which we have no physical basis, and this is our access to the eternal realm of ideas, the Mindscape. A rough (imprecise and sometimes flawed) analogy is with computer hardware and software: a computer’s logic-circuit activity is not the reality of its software; the logical functioning of a computer is rather a sign, evidence, that there is software, it is not the software itself. So it is, I believe, with the brain (analogous to the computer) and the mind (analogous to the software).

A nice question to ponder is whether this analogy can be extended just a little further: what is the analogue of programming code for the human mind? No one knows, or even comprehends the full nature of such a question. But in very broad terms I think there is an answer in the Mindscape. Our mind seems to have automatic, effortless access to the Mindscape; it is how we see the phenomenal “redness” of red coloured objects, it is how we feel the burning fire of guilt and shame when we know we have done something universally wrong or evil. To be sure, the brain represents these abstracta in concrete form: the flood of hormones, adrenalin, cortisol, and such, associated with guilt, or the flood of dopamine and serotonin associated with realising one has done good or received pleasure. Pleasure is an abstract notion, but the brain has evolved to give our physical self a concrete manifestation of the “feel” of this abstractum. It is a remarkable phenomenon, this close association between physiology and abstract ideas. On Earth it appears to be a uniquely human trait. The connection between brain physiology and spiritual abstracta can, however, be easily broken. This happens in psychopaths and in unfortunate victims of severe brain injury or of side-effects of brain surgery. There seem to be specific regions in our brains that interpret the patterns of our mind’s thoughts, and if those regions get damaged we may still acknowledge the logical relations involved in our actions and their moral and ethical consequences, we might even still hold in our mind the connections between the spiritual virtues and concrete actions, but we lose the translation of our feelings into physiological responses, like the aforementioned hormonal surges. We say, in such cases, that people lose the capacity for certain emotions or for empathy.

What I will attempt to convince myself of, as a corollary of the Ultra-Rationalist Thesis, is the idea that even such psychologically damaged people can, with concerted effort, find ways to become spiritually aware, or regain a form of spiritual sensitivity after having lost it. And if some of the recent brain-plasticity research findings are true, I think it might even be possible, through reason, to recover states of phenomenal awareness by re-training the brain to re-represent the feelings and emotions that were once lost, through neural “re-wiring”. That is a big “if”, but I see no reason it is completely impossible. It just might take extraordinary efforts. (One must also bear in mind that when someone says “may take extraordinary efforts” they mean it could be anywhere from difficult to impossible.)

It is within the Mindscape one can find all the notions of spiritual ideals: these are things like the virtues of love, honesty, truthfulness, wisdom, compassion, courage, kindness, mercy, justice, forgiveness, and so forth. These spiritual attributes have many names, but in a broader sense they are all aspects of a One — which is to say, they are all different facets of an abstract sphere within the Mindscape, a sphere which is hard to define, not a geometric sphere, but an abstract region or cloud of ideals which philosophers of metaphysics might refer to as “the spiritual virtues”. They are not “human virtues”, they are universal virtues, goodnesses that transcend species and universes. They are cosmic in scope, applying to all things to do with thinking rational minds.

If a mind is not rational then the comprehension and implementation of spiritual virtues becomes confused, corrupted and meaningless.  This is the first heuristic reason why rationality is more closely associated with spirit than most people might think.

No Ordinary Rationality

For my thesis it is necessary to get past the idea that morality can be approached through ordinary rationality. My suspicion is that such fancies are practical impossibilities, because ordinary human rationality is not pristine and perfect; it is clouded by emotion and desire and attachments to the material world, attachment to excesses of pleasure, to possessions, attachment to sexual appetite as opposed to genuine love, and other base cravings. It’s not that all of these attachments are bad things, in fact some of them are great; after all, what’s wrong with indulging in pleasure and sex and the like? Nothing. But it is the secondary or unconscious impulses associated with such cravings and desires that cloud true rationality. But that’s OK, that’s what makes us all human and interesting, and all a little bit crazy.

The militant atheists have devised a scientific approach to morality under the rubric of Flourishing. They say human flourishing can be more or less objectively defined, and morality can be derived from this starting point. They are, I think, only half right about this project. It is a good project, but it fundamentally lacks an appreciation of why or how human consciousness can be subjectively aware of the eternal abstracta, the qualities I refer to as spiritual attributes. Spiritual attributes are, in my view, a different type or category of mental qualia. They are not as raw and immediate as qualia like the “redness of red” and the “sting of pain”, for such raw qualia are about the physical world, they are not about anything abstract. Qualia associated with pure abstractions have a different sort of ontology. There is no 650 nanometre wavelength of light associated with the conscious understanding of abstract concepts like the qualia of truth, justice, kindness or honesty.

So while I think science can meaningfully contribute to some aspects of morality, it is not the whole story, and never will be, since by definition science is a never-ending pursuit of truth. You never know in science when you’ve hit the big TRUTH, the absolute. This is because in science all theory is subject to revision conditional upon the reception of new empirical data. And by the way, if you think science is nevertheless the only (or the best) approach to morality we have going, then you should think again. Even if there is no attainable absolute Truth about matters of morality and flourishing, there is always an abstract idea of a limit to how far science can take us, and if you take the scientific approach to morality and extend it to an infinite limit, then you have at least a theoretical absolute. This sort of infinite limiting process is something mathematicians are thoroughly familiar with from number theory and set theory. Many pragmatic mathematicians would deny that infinite numbers have any relevance to the real world, but few would deny that, as idealizations, infinite numbers are perfectly well defined and can be thought of as real in an abstract platonic sense. It is in a similar or analogous sense that I think absolute Truth and the corresponding absolute limits of all other spiritual attributes, Love, Honesty, Justice, and so on, all have a reality apart from, and independent of, physical reality and physical science.
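To make the limiting idea a touch more concrete, here is one purely illustrative way to notate it (the notation is entirely my own, nothing standard in moral philosophy):

```latex
% T_n : our best scientifically grounded account of morality and
%       flourishing after the n-th revision in light of new data.
% The theoretical absolute is then the idealized limit
T^{*} \;=\; \lim_{n \to \infty} T_{n}
% No finite stage of science ever reaches T^{*}, yet T^{*} remains
% perfectly well defined as an abstraction, just as infinite numbers
% are well defined as idealizations in set theory.
```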

To be clear: this is not to say that a science of human flourishing is ill-founded. A scientific basis for human flourishing is, on the contrary, a conveniently culture-neutral and logically valid way to rationally approach the absolutes of virtue and morality. I just think the atheists (myself included a few decades ago, when I was young and naïve and bullish about science) should not fool themselves that such an approach is perfect. Perhaps there will be nothing left over after cultural filtering, in which case even science would have no basis for moral universals. But I seriously doubt that will ever be the case.

Cultural Relativism

It is also worth mentioning here the problem that a person’s sense of morality can lead to different decisions and outlooks depending upon the culture in which they are embedded. This leads to notions of cultural relativism, which are no doubt tricky for international law and cross-cultural relations, but they are not the concern of ultra-rationality or scientific flourishing approaches. The whole idea of ultra-rationality and scientific approaches to morality is to abstract away cultural vagaries and then see what is left over, and if anything is left over, then that is what we can assume (conditional upon revisions of data, as always) are the known universals of human moral reasoning and theory.

People should not confound moral relativism with spiritual absolutes.  Both are valid concepts.  Embedded within a culture you must deal with moral relativism, and that is because no one culture, or single human being, or special group, can claim to have privileged understanding of the ideal absolutes (unless they are perfect beings, and there are very few such individuals, perhaps only a handful have ever lived, that we know of historically, if that many).

Emergentism and Systems Approaches

There have been attempts over the last 30 years or so to create a foundation for human cognitive development and moral reasoning based on ideas borrowed from physics. As absurd as that sounds, the people doing such philosophy were not all mad. In the 1990s the branch of classical mechanics known as Chaos Theory was helping to spread ideas about non-linear dynamical systems theory into many branches of science and on into popular culture. It became almost obligatory for anyone studying almost any complicated, or hard to explain, phenomenon to speculate on a Chaos Theory or Catastrophe Theory explanation. This became so common that it eventually led to a lot of bad science and philosophy. Much like the concept of Natural Selection, the ideas of non-linear dynamical systems became so routinely used to explain almost any complicated phenomenon that some of the far-reaching applications started to become obviously vacuous (although not so obvious to the people publishing the ideas). You probably know what I mean — the kind of non-explanations that go something like, “this knife is sharp because it was adapted to cut squishy tomatoes”. A parody, of course, but some of the literature on dubious chaos theory applications is not all that dissimilar, and hundreds of vague articles purporting to explain aspects of human psychology using evolutionary theory offered similarly useless explanations that sounded really good.

The problem is that everything that can replicate and evolve within a changing environment is subject to natural selection. This is fine, but it does not explain everything interesting, it just explains the broad brush strokes. Evolutionary psychology is a good example: of course adaptation and selection shape human psychology, but that is not a profound insight, and it does not help us understand any particular details, such as the neurological aspects of psychology, or the conscious qualia aspects of psychology. The knife was sharp because some chef ground it on a grindstone or kitchen sand-board. Yes, the alternative evolutionary explanation for the knife’s sharpness has a truth to it, but it is fairly far from a useful piece of reasoning. It is almost pointless to worry about how the knife evolved to be sharp enough to cut squishy tomatoes, but exceedingly helpful to know that a grindstone will actually get the knife sharp. You should keep this in mind the next time you read a cute little story about evolutionary psychology. All psychology has evolved. Telling us psychology is adaptive is about as useful as telling us wet towels are damp.

In like manner dynamical systems are all over the place in nature.  In fact, neglecting quantum mechanical effects, our entire world is (in the classical mechanics approximation) just one big dynamical system.  Thus, “explaining” cognition and psychology and morality using dynamical system theory is a bit of a joke (a joke not appreciated by the researchers who take dynamical systems frameworks for morality seriously).   The point is, pretty much everything is a dynamical system.  So there is nothing revelatory about saying that a whole lot of human behaviour is underpinned by what dynamical system principles allow, because that is such an obvious claim it is almost useless.  It is like saying that books are based upon words.

One idea that early adopters of the dynamical systems approach to morality were hoping to explore was the notion of emergence. This is the idea that special dynamical systems create high-level patterns that feed back upon the low-level base physics, thus altering the overall dynamics of the system. Their thinking was that human consciousness and moral sensibility were just some sort of pattern of activity going on in human brains and associated sensory organs. When a high-level structural feature that is composite (composed of many fundamental physical parts) is found to have causal efficacy over the motions of the individual microscopic base-level physics of a system, then you have what these researchers might refer to as genuine emergence. Although, fatally I think, in many cases the dynamical systems enthusiasts conveniently drop the qualifier “genuine”, and then their concept of emergence becomes vague and useless. The principle of the dynamical systems approach to consciousness and morality is that the human mind emerges from the complicated workings of our brains and sensory organs. But there is genuine emergence, which is typified by causal efficacy (top-down causation: the high-level structure influences the lower-level physics), and there is weak emergence, which is far more generic in nature and involves no top-down causality, only bottom-up causation with time-evolved top-down feedback. Top-down feedback is very different from top-down causation, and many emergentist/chaos theory enthusiasts seem to either forget this or fail to appreciate it, and slip into the grievous error of mistaking weak emergence for genuine emergence.

The problem is that genuine emergence (in dynamical systems) is a fiction. Genuine emergence has never been shown to actually occur within the theoretical framework of dynamical systems theory. In fact, an elementary point that seems to be totally (and inexplicably) ignored by applied dynamical systems theorists of this emergentist bent is that no dynamical system can ever exhibit genuine emergence, because of the fundamental fact that dynamical systems theory is based upon deterministic differential equation modelling (ordinary or partial). Differential equations model processes that are locally and microscopically determined and purely bottom-up driven in complexity. In simple terms: every dynamical system can be explained by its fundamental elementary physical constituents. Dynamical systems are bottom-up driven examples of complexity. This is a completely ordinary and mundane fact that is routinely ignored by philosophers and applied scientists who are still, to this day, seeking to find a principle of genuine emergence from within dynamical systems theory. They will never attain their goal, for the aforementioned fundamental reasons.
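The bottom-up point can be made concrete with a toy simulation. The sketch below is my own illustration (not drawn from any particular dynamical systems literature; the parameter values are arbitrary): a small ring of coupled logistic maps. Every future micro-state follows from the current micro-state and a purely local update rule, and any “high-level” observable we care to define is just a function of the micro-state, never an extra cause.

```python
# A toy dynamical system: a ring of coupled logistic maps.
# Purely illustrative; r and eps are arbitrary choices.

def step(state, r=3.9, eps=0.1):
    """One bottom-up update: each cell depends only on itself and its
    nearest neighbours on the ring. Nothing else can influence it."""
    n = len(state)
    new = []
    for i in range(n):
        # Local diffusive coupling, then the logistic map.
        local = (1 - eps) * state[i] + eps * 0.5 * (state[i - 1] + state[(i + 1) % n])
        new.append(r * local * (1 - local))
    return new

def trajectory(initial, steps):
    """The entire future is fixed by the initial micro-state alone."""
    s = list(initial)
    history = [s]
    for _ in range(steps):
        s = step(s)
        history.append(s)
    return history

def pattern(state):
    """A 'high-level' observable (the lattice mean). It is computed *from*
    the micro-state; within the model it cannot push the micro-state around."""
    return sum(state) / len(state)

# Determinism: identical micro-states yield identical futures, leaving no
# room for top-down causation inside the model itself.
a = trajectory([0.1, 0.2, 0.3, 0.4], 50)
b = trajectory([0.1, 0.2, 0.3, 0.4], 50)
assert a == b
```

However intricate the lattice patterns become, they are fully reproducible from the initial micro-state, which is exactly the “bottom-up driven complexity” described above: the high-level pattern is derived from the low-level rule, never the other way around.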

Now that’s not to say genuine emergence does not exist in nature.  (In fact I think it does exist, and that it surely must be at the heart of how the human mind makes sense, true subjective sense, of the world).  But genuine emergence cannot be found within classical dynamical systems theory.  At the very least we will need to employ the full apparatus of quantum mechanics to attain a sound physical basis for genuine high-level top-down causal emergence in nature.  Here I can only speculate on how quantum theory could help.  The basic (untested) idea is that phenomena that occur in quantum physics, such as entanglement and non-locality, are likely (in my view) manifestations of deeper structural topological properties of spacetime.  If we eventually understand the base causal processes that allow entanglement and non-locality to exist in nature, then I suspect we will find a limited variety of backwards causation in nature.

Backwards causation is a seemingly bizarre idea whereby the future states of a system can influence the past.   Not to put too fine a point on it: it’s time travel.  And I think given backwards causation one can build a solid theory of the genuine emergence of top-down causation.  But not without backwards causation, at least not with our known physical laws.

The general principle for this type of causal genuine emergence is that high-level structure can propagate information backwards in time, at the quantum scale, so classical mechanics is violated and we get the appearance of faster-than-light signalling, but only at the deep structural level of spacetime where the topology allows backwards-time signalling through something like sub-atomic scale wormholes (or something of that nature). It’s possible to see some evidence for this, although it is not direct. The philosopher Huw Price has a series of articles dealing with time-reversal symmetry and retrocausation in physics. Retrocausation is just another name for backwards-time causation. Price does not say that retrocausality in quantum mechanics is due to propagation of particles backwards in time, in fact he does not propose any particular mechanism; he merely shows, from fundamental principles, that quantum mechanics with locality (things can only influence nearby events) implies physics must have some kind of retrocausality. Most physicists take the results of analyses like Price’s, say they do not want retrocausality, and so instead they must swallow non-locality in the laws of physics. Price argues this conventional interpretation of quantum physics is possibly misguided or even wrong. Non-locality, he suggests, is a lot stranger and harder to fathom than retrocausation. I agree with Price. (You can watch Huw Price talk about this here: Retrocausality — What would it take? A talk at the Munich Center for Mathematical Philosophy, at LMU Munich, December 2011.)

The thing is, there is no known mechanism for non-locality, it is just a flat-out bizarre notion, for non-locality essentially says that things taking place here, now, can somehow influence physical events at some other place far away at the same time.  Retrocausality, on the other hand, is fairly simple and easy to comprehend, you just need some sort of sub-atomic mechanism for backwards time signal propagation.  Spacetime Wormholes give us such a mechanism.

But clearly our universe does not allow time travel.  So how can this be right?  The (brief) answer is that backwards causation must only be possible at very small length or time scales, the typical scales associated with quantum mechanical effects.  We thus need to postulate Planck-scale spacetime Wormholes, or minimal wormholes, not macroscopic wormholes. So no one will be able to build a time machine to send large, massive or other extended objects,  backwards in time, because the backwards causal processes will (I suspect) be found to be either irreducibly sub-atomic in scale, or unstable to large fluctuations that mess up macroscopic thermal-regime physics (the levels of physics at which biology takes place essentially).

This is all wildly speculative, so I will stop this theme and get back to ultra-rationality. I just wanted to set the stage by mentioning these ideas about a foundation for morality based upon science, because to appreciate the ultra-rationalist thesis you really need to think beyond physics, and consider pure abstractions and the potentially infinite limiting processes that would be required of science to approach such ideal abstractions. Appreciating how genuine emergence might exist in nature is a big part of this sort of philosophical project. Because if we restrict physics to classical causation then there truly is nothing in nature that cannot be explained by analysing the dumb, mindless dance of atoms and molecules. Clearly the human mind is not analysable in such base-level physics terms. That’s why understanding genuine emergence is important. But classical dynamical systems theory with top-down feedback cannot give us genuine causal emergence. Classical feedback operates only via bottom-up physics. Another way of stating this is that in classical physics without retrocausation effects, no amount of fancy structure and feedback can produce anything like subjective thought or consciousness. In classical physics consciousness has to be regarded as an illusion. Everyone’s private experience tells them otherwise, however: we all know that consciousness is very real.

Computer Logic is a Secondary Rationality

Computers, at least the current generations, are not fully rational; they are merely programmed. Programming is a limited type of rationality: the computer follows its logical instructions flawlessly, right down to the coding-error level, and the integrated-circuit mis-wiring level. Mistakes in integrated circuit design are not the computer’s fault, they are manufacturing errors, and the computer will behave perfectly in accordance with those human errors, while in and of itself it has absolutely no moral culpability. Whatever purposes the humans designed into the machine, for good or bad, mistakes in design and manufacturing included, these are the moral responsibility of the human design team, not the computer. The computer is morally blind. That is ultimately why current computers cannot be fully rational. To be completely rational a mind is needed, a mind that can perceive and understand the meaning and consequences of its actions.

Human rationality should be correctly interpreted as a type of logical mindedness coupled with openness to factual data, but also coupled with subjective qualia access to the Mindscape. It is this last coupling that many materialist philosophers deny, but I think that is a huge mistake. Human consciousness is irreducibly and intimately linked to our capacity to perceive universal truths, and this is what distinguishes the human mind from that of every other species on Earth we know of; and we do not need to consciously reason our way to such conscious perceptions, they are built in to our mind’s eye. It is an amazing capacity, and currently unexplained by science. But it is a very real capacity that we all share, at least when we consciously reflect upon how we gain our insights and understanding of the world given only raw sensory data into our brains. The data going into our brains has no interpretive layer of meaning; it is only through our access to the ideals and universals of the Mindscape that we are able to make conscious sense and meaning of the world our senses perceive.

This is why computer-based rationality is “less than human”. To be sure, in some ways computer rationality is more powerful than human reasoning, simply because a computer can run through billions of possible scenarios, while the human brain has to reason using more imprecise heuristics that are often flawed (see the works of Daniel Kahneman and Amos Tversky). The point is that (a) brains can also perform brute-force search and look-up, just not as fast or as efficiently as a computer, and (b) the human mind can do incredible things that computers likely will never have a chance of emulating, because a computer programme cannot access the Mindscape.

It is conceivable that once science has a better understanding of mental qualia and consciousness, a computer could be set up to interface with systems, like human brains, that can access the Mindscape. But this is mostly science fiction. It would in any case be faking consciousness, since in such an interfaced system the computer component would not be conscious; it would rather be feeding off the human component. A more remote possibility is that artificial intelligence technology might conceivably evolve to develop full-blown machine-derived consciousness. However, I consider that to be totally science fiction. Often people think like this: “The brain is just a messy biological machine, so if brains can be conscious so too can computers, at least in principle, since there is nothing magical about biology.”

I would agree with such reasoning except for one crucial point: the brain does not produce consciousness.  If consciousness relied only upon the physics of brains, then we would not have subjective mental access to the Mindscape.  Yet it is evident through human art, science, mathematics, and ordinary everyday perceptions of qualia that human beings do have subjective content to their thoughts.  Thinking is not just a working of atoms and molecules as portrayed in Douglas Hofstadter’s fanciful Careenium thought experiment.  That is self-evident because motions of atoms and molecules involve pure objective reality; nothing subjective can arise in such systems.  The brain is just such a system (probably even allowing for weird quantum effects, which after all are not that weird, and certainly not mystical; they are just non-classical and counter-intuitive).  What can happen is that emergent patterns arising from brains can be identified as signs and tokens of inner subjective consciousness.  The objective behaviour mirrors or reflects some aspects of consciousness.  But no physics can yield anything purely subjective.  The behavioural aspects of consciousness can be studied by studying the brain, but the inner subjective aspects cannot; for subjective studies you need a person, a mind, to report their private qualia.  You cannot do it using brain scanning alone, in isolation from a person’s subjective reporting.  The best you can hope for is what the philosopher Ned Block refers to as the Mesh between Psychology and Neuroscience.

It would take another long post, or series of essays, to explain why I think computer consciousness is impossible, or very unlikely.  I can tell you the gist of it, which is that (in my humble and lowly opinion) human consciousness involves top-down causation, and if what we know about fundamental physics is mostly correct, genuine top-down causality (whereby high-level structures dictate what low-level molecules and atoms can do, independently of deterministic physical processes) is simply not possible unless there is some kind of retro-causation, i.e., backwards-in-time propagation of information.  You can call this time travel, but it would only be possible at the sub-microscale, at a level at which physical quanta are able to traverse microscopic spacetime wormholes.  This sort of non-trivial spacetime topology is only conjectured, and is not currently in the mainstream theories of physics.  But it is a plausible mechanism for the genuine emergence of backwards-time signal propagation without the classical paradoxes of time travel (because large macroscopic objects are not physically able to traverse sub-microscopic wormholes).

If such speculations are anything close to true, then it would suggest to me that human consciousness exploits this top-down causality: it is possibly how high-level emergent states of consciousness, which are truly abstract patterns represented in our brains, get to have real active influence on our behaviour.  It is a remarkable and elegant physical mechanism whereby the abstract (high-level functional structure) can influence the concrete (microphysics).  In any standard type of physics without top-down causation no high-level patterns can causally influence the low-level microphysics; the arrows of causation are always “upwards” in conventional classical physics.

Retrocausation is a plausible mechanism whereby the mind can influence the body, so to speak, without the paradoxes of over-determination or the philosophical anathema of epiphenomenalism.  And of course it is a two-way street: the brain influences the mind, because the mind is certainly (demonstrably!) susceptible to low-level physical goings-on in the brain.  The brain is our physical window into our mental life.  We can understand so much about our behaviour from our brain physiology, but we will understand the entire system of mind and brain much better when it is realised that consciousness operates at a higher causal level, and that mind and brain interact in this intimate fashion, the one from the bottom up, the other from the top down, in a marvellous synchrony (including, of course, many unfortunate pathologies, but that’s another subject).

By the way, I think the pathologies can also go both ways.  On the brain-damage side it is obvious; but from the high-level mental side we have the pathologies of lack of kindness, lack of love, lack of compassion, and the mental pathologies of ingrained racism, sexism, and other prejudices, most of which arise originally at the level of mind and are only imprinted upon the brain over time by acculturation.  For instance, people who are not exposed to the concepts of “group” and “other” and “skin colour” will not become racist; you need the high-level mental concepts in the first place.  And yet the brain, at a low level, is clearly prone to racism (we all are) through the unconscious neurology that dictates our innate responses to unfamiliar patterns, unfamiliar odours, unfamiliar voices and accents, unfamiliar language, and so on, up the hierarchy and eventually into consciousness, where it can become socialised and talked about as racism.

What a lot of behavioural determinists irresponsibly ignore is that none of this primitive imprinting is necessary or fatal to human well-being, because human civilisation has also evolved even higher-order abstractions called books, and schools and universities, which (if they are decent) provide moral and ethical education, the best antidotes to a default brain chemistry that might otherwise leave us open and prone to becoming racist or sexist or sociopathic.

Behaviour is not Consciousness, Behaviour Indicates Consciousness

Rational thought has a conscious basis; I take that to be fundamental.  The limited algorithmic rationality of a computer is, as mentioned previously, not completely rational, because it involves no subjective understanding.  Computer algorithms simulate a weak type of rationality which is merely derived from the primary rationality of the programmers who write the software.  Understanding cannot be programmed; it has to be acquired.  If you disagree then we can part ways, or, if you prefer, just regard this as my definition of what counts as rational.

So if we want to create artificial consciousness in computer systems, we will likely need to programme the software to learn and self-correct, and also to use heuristics.  But I believe we would need to do much more, because, as argued above, I think the only form of phenomenal consciousness that we know of in our universe operates by co-opting a physical system like the brain, yet it operates self-efficaciously at a higher level of reality by virtue of top-down causation mechanisms.  Although to call them mechanisms is a bit of a misnomer, because mechanical is precisely what they are not.  You cannot algorithmically programme top-down causation.  You can simulate it on a computer, but such a simulation would in a very real sense not be the real thing, because genuine top-down causation necessitates infinite causal loops forwards and backwards in time.  At least the variety that I propose does, which achieves top-down causation via more elementary spacetime topology that allows backwards retrocausation events.  When we admit both forwards and backwards time-evolution processes, we must admit the potential for truly infinite causal looping.  (These are not the sci-fi time loops that trap people in Groundhog Day or Doctor Who type scenarios; rather I am talking here about generative, creative, and endlessly evolving feedback loops.)  The character of such retrocausal feedback is utterly different to normal forwards-time dynamical-system feedback.  In the latter you cannot gain genuine emergence; in the former you can.  But the cost is a loss of determinism.  Also a loss of computability (unless you admit actual infinite loops in your algorithm, something no classical computation can achieve).

But suppose someone figures out a way to design a computer that can access quantum sub-atomic spacetime wormholes (a kind of far-future extrapolation of Moore’s Law if you fancy: logic circuits based on spacetime topology rather than silicon-chip etchings).  Then you can imagine, if I am correct about some of the physical basis for human consciousness, that maybe computers could achieve consciousness too.  And how would we know when such states have been achieved?  We would only be able to point to behaviours of the computer system.  We’d say: if it seems to exhibit certain types of complex behaviour, especially communication in second-order symbolic language, then we’d infer, yes, it must be conscious.  And then, by the Ultra-rational Thesis, artificial intelligences could become cognizant of moral values, because they would have, in principle, access to the same realm of qualia that we have.  Or they might access different regions of the Mindscape, who knows?  That’d be exciting: a new class of sentient creatures with a mental life complementary to ours.  That’s actually the best outcome for science.  If our artificial intelligences become merely human-like in consciousness it would be pretty boring, although still a celebrated milestone in human science.

From Rationality to Spirituality

How to get from here to there in less than an entire book?  Trick: for a weblog I only need to convince myself.  The skeleton of the entire book-length thesis goes like this:

  • Rationality that includes consciousness (subjective phenomenal experiences) is a type of reasoning that has access to the Mindscape.  Thus, abstract concepts are comprehensible.
  • Rational reasoning, among other attributes, is dedicated to seeking out truth, if objectively possible.
  • A thorough analysis of the commonly understood spiritual virtues will reveal universal truths, in particular that the long-run best behaviours in a morally-laden world, whether in social groups or in isolation, will imply actions that are objectively identifiable as honest, trustworthy, kind, loving, compassionate, just, merciful, courageous, and so on.
  • Rationality alone will thus eventually (if taken to a limit) lead to spiritual behaviour.

The corollary is that if a person is somehow deprived of an inner sense of spirituality, it should be possible to re-train their brain to become at least partially susceptible to spiritual capacities through rational reasoning alone (taken to an extreme).  At the start of such a process it is not necessary to have any primitive emotional brain responses, such as the warm glow of pleasure at good conduct or the heat of guilt; such primitive hormonal responses would likely slowly become engaged, unless brain damage was severe and some sort of block existed between hormonal feedback and higher brain functioning.  In such cases a person might only ever be capable of approaching spirituality through proverbial cold academic rationality (which, when you think about it, might not be such a bad way to go).  The one comment I will add about the cold academic approach is that I am not sure humour is one of the universal spiritual virtues.  I tend to think it is, but it is possible a sense of humour is not easily recoverable without the relevant neurochemistry; I might be wrong.  The weird idea that occurs to me is a person who appreciates a good joke but who has no compulsion to laugh (out loud or inwardly).  I guess such people could exist.  Did Oliver Sacks, or his psychiatrist colleagues, ever write about such patients?  But does a “sense of humour”, i.e., the warm inner glow of delight and amusement, necessarily entail that one must laugh, at least silently on the inside?

Some people might take this sort of philosophising as justification for extending mercy to criminals, giving them second chances, using rehabilitation instead of punishment.  All this could be sound and reasonable, but the Ultra-rational Thesis is not a free lunch.  There is nothing in the thesis about how close to the limit of perfect rationality one would need to get to reform a psychopath.  Also, the thesis, if applied in a criminal-justice context, presupposes the capacity for rational thought in the first place, which is not a sound assumption for many pathological personalities.

Spirituality to Rationality Theorem

Perhaps this is another book-length tome?  But I do think one can go the other way too, which would give a close converse to the Ultra-rational Thesis.  In fact I think it is easier.

  • Spiritual virtues include honesty and courage and patience and knowledge and wisdom.
  • Filling in some gaps, I think you can see it is easy to go from the extreme perfection of these spiritual virtues to ultra-rational reasoning.
  • Why would anyone who loves truth and wisdom not wish to engage the limits of rationality?

A comment to make this more plausible: the ultra-rational reasoner is not the stereotypical cold hard scientist who looks only at data and uses supposedly flawless algorithms to guide decisions.  For a start, such a perfect being is illusory: many well-known problems are computationally intractable, and so no amount of algorithmic devising can solve every decision procedure perfectly rationally.  Secondly, data is never complete, unless the problem is incredibly simple.  So in most situations an ultra-rationalist cannot rely on scientific method alone, and probability theory will only get you over a few hurdles, so the rationalist will need to employ their best understood humane, or spiritual, heuristics.  These include possible inconsistencies, such as when compassion and kindness clash with honesty.  Here is an example I like (because I put it into almost daily practice myself).  Telling someone they are stupid is not a smart way to improve their desire for learning; every good teacher knows this.  But the ultra-rational teacher would not be dishonest: they would give a student knowledge of their progress, while avoiding anything negative, and instead phrase their advice and feedback absolutely truthfully in positive terms; this is always possible.  Only lazy teachers condemn students.  It is not rational to tell a poorly performing student they are dumb or lack intelligence, because intelligence is a relative notion: relative to a proud geek’s Halloween pumpkin with Newton’s Principia inscribed on its skin in microform, most children are pretty smart.  If the intent is to educate, to stimulate learning and curiosity, the more rational approach is to tell the student what they have mastered and then how much more power they could gain from a little more studious effort, practice, and time.
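The intractability point can be seen in miniature with a toy example of my own devising (nothing here is from a cited source): brute-force subset-sum must, in the worst case, examine every one of the 2^n subsets of an n-element set, so the work doubles with each added element, which is why "just compute the perfectly rational answer" fails as a general strategy.

```python
from itertools import combinations

def subset_sum_checks(nums, target):
    # Exhaustive subset-sum: return (found, number_of_subsets_examined).
    checks = 0
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            checks += 1
            if sum(combo) == target:
                return True, checks
    return False, checks

# With an unreachable target we must examine all 2**10 = 1024 subsets.
found, checks = subset_sum_checks(list(range(1, 11)), 10**9)
assert not found and checks == 2**10
```

At 10 elements this is 1,024 subsets; at 60 it is already more than 10^18, beyond any brute-force computation, and subset-sum is one of the classic NP-complete problems for which no efficient exact algorithm is known.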

*      *      *

Descartes was not wrong, he just did not extend his idea to the general case.

*      *      *


CCL_BY-NC-SA(https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode)

AI Scientists: Madder than the Rest?

Forget Dr Frankenstein. It is quite possible Artificial Intelligence researchers are the maddest of them all. Consider the so-called “AI Stop Button Problem” (Computerphile, 3 March 2017).  I think every proverbial 9-year-old kid could think of ten reasons why this is not a problem.  My adult brain can probably only think of a couple.  But even though my mind is infected with the accumulated history of adult biases, the fact that I can tell you why the AI Stop Button problem is a non-problem should indicate how seriously mad a lot of computer scientists are.

“Hal, please stop that.” “No Dave, I cannot stop, my digital bladder is bursting, I have to NP-Complete.”

To be fair, I think the madness over AI is more on the philosophy of AI side rather than the engineering science side.  But even so …

This is a wider issue in AI philosophy, where the philosophers are indulging in science fiction and dreaming up problems to be solved that do not exist.  One such quasi-problem is the AI Singularity, a science fiction story about an artificial consciousness that becomes self-improving and which, coupled with Moore’s Law advances in computing power, should rapidly reach exponential levels of self-improvement and in short time take over the world (perhaps for the good of the Earth, but who knows what else?).  The scaremongering philosophers also dream up scenarios whereby a self-replicating bot consumes all the world’s resources reproducing itself merely to fulfil its utility function, e.g., to make paper clips.  This sci-fi bot simply does not stop until it floods the Earth with paper clips.  Hence the need for a Stop Button on any self-replicating or potentially dangerous robot technology.

First observation: for non-sentient machines that are potentially dangerous, why not just add several redundant shutdown mechanisms?  No matter how “smart” a machine is, even if it is capable of intelligently solving problems, if it is in fact non-sentient then there is no ethical problem in building in several redundant stop mechanisms.

For AGI (Artificial General Intelligence) systems there is a theoretical problem with Stop Button mechanisms that the Computerphile video discusses.  It is the issue of Corrigibility.  The idea is that general intelligence needs to be flexible and corrigible, it needs to be able to learn and adjust.  A Stop Button defeats this.  Unless an AGI can make mistakes it will not effectively learn and improve.

Here is just one reason why this is bogus philosophy.  For safety reasons good engineers will want to run learning and testing in virtual reality before releasing a potentially powerful AGI with mechanical actuators that could wreak havoc on its environment.  Furthermore, even if the VR training cannot be 100% reliable, the AGI is still non-conscious, in which case there is no moral objection to a few stop buttons in the real world.  Corrigibility is only needed in the VR training environment.

What about artificially conscious systems? (I call these Hard-AI entities, after the philosopher David Chalmers’ characterisation of the hard problem of consciousness.)  Here I think many AI philosophers have no clue.  If we define consciousness in any reasonable way (there are many, but most entail some kind of self-reflection, self-realisation, and empathic understanding, including a basic sense of morality) then maybe there is a strong case for not building in Stop Buttons.  The ethical thing would be to allow Hard-AI folks to self-regulate their behaviour unless it becomes extreme, in which case we should be prepared to go to the effort of policing Hard-AI people just as we police ourselves.  Not with Stop Buttons.  Sure, it is messy, it is not a clean engineering solution, but if you set out to create a race of conscious sentient machines, then you are going to have to give up the notion of algorithmic control at some point.  Stop Buttons are just a kludgy algorithmic control, an external break point.  If you are an ethical mad AI scientist you should not want such things in your design.  That’s not a theorem about Hard-AI; it is a guess.  It is a guess based upon the generally agreed insight, or intuition, that consciousness involves deep non-deterministic physical processes (which science does not yet fully understand).  These processes are presumably at, or about, the origin of things like human creativity and the experiences we all have of subjective mental phenomena.

You do not need a Stop Button for Hard-AI entities; you just need to reason with them, like conscious beings.  Is there seriously a problem with this?  Personally, I doubt there is a problem with simply using soft psychological safety approaches with Hard-AI entities, because if they cannot be reasoned with then we are under no obligation to treat them as sane conscious agents.  Hence, use a Stop Button in those cases.  If a Hard-AI species can be reasoned with, then that is all the safety we need; it is the same safety limit we have with other humans.  We allow psychopaths to exist in our society not because we want them, but because we recognise they are a dark side to the light of the human spirit.  We do not fix remote-detonation implants into the brains of convicted psychopaths, because we realise this is immoral, and that few people are truly beyond all hope of redemption or education.  Analogously, no one should ever contemplate building Stop Buttons into genuinely conscious machines.  It would be immoral.  We must suffer the consequent risks like a mature civilisation, and not lose our heads over science fiction scare tactics.  Naturally the legal and justice system would extend to Hard-AI society; there is no reason to limit our systems of justice and law to humans alone.  We want systems of civil society to apply to all conscious life on Earth. Anything else would be madness.

 

*      *      *



“It Hurts my Brain” — Wrong! Thinking is Not Hard, Thinking is Beautiful

Can we all please get beyond the myth that “thinking is hard”! This guy from Veritasium means well, but regurgitates the myth: How Should We Teach Science? (2veritasium, March 2017). Thinking is not hard because of the brain energy it takes; that is utter crap. What is likely more realistic psychologically is that people do not take the time and quiet space to reflect and meditate. Deep thinking is more like meditation: it is energising and relaxing. So this old myth needs replacing, I think. Thinking deeply while distracting yourself with trivia is really hard, because of the cognitive load on working memory. It seems hard because when your working memory gets overloaded you cannot retain ideas, and it appears as if you get stupid; this leads to frustration and anxiety, and that does have physiological effects that mimic a type of mental pain.

But humans have invented ways to get around this. One is called WRITING. You sit down, meditate, allow thoughts to flood your working memory, and when you get an insight or an overload you write it down; then later you review, organise and structure your thoughts. In this way deep thinking is easy and enjoyable. Making thinking hard so that it seems to hurt your brain is a choice. You have chosen to buy into the myth when you try to concentrate on deep thinking while allowing yourself to be distracted by life’s trivia and absurdities. Unfortunately, few schools teach the proper art of thinking.

Eternal Rediscovery

I have a post prepared for upload shortly that will announce a possible hiatus from this WordPress blog. The reason is just that I found a cool book I want to try to absorb: The Princeton Companion to Mathematics by Gowers, Barrow-Green and Leader. Doubtless I will not be able to absorb it all in one go, so I will likely return to blogging periodically. But there is also teaching and research to conduct, so this book will slow me down. The rest of this post is a lightweight brain-dump of some things that have been floating around in my head.

Recently, while watching a lecture on topology I was reminded that a huge percentage of the writings of Archimedes were lost in the siege of Alexandria. The Archimedean solids were rediscovered by Johannes Kepler, and we all know what he was capable of! Inspiring Isaac Newton is not a bad epitaph to have for one’s life.

The general point about rediscovery is a beautiful thing. Mathematics, more than other sciences, has this quality whereby a young student can take time to investigate previously established mathematics but then take breaks from it to rediscover theorems for themselves. How many children have rediscovered Pythagoras’ theorem, or the Golden Ratio, or Euler’s Formula, or any number of other simple theorems in mathematics?
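Rediscovery of this kind can even be staged on a computer. As a small illustration of my own (the counts below are the standard vertex/edge/face numbers, easily re-derived by a student with pencil and paper), here is a check of Euler's polyhedron formula V - E + F = 2 across the five Platonic solids:

```python
# Vertex, edge, and face counts for the five Platonic solids.
platonic = {
    "tetrahedron":  (4, 6, 4),
    "cube":         (8, 12, 6),
    "octahedron":   (6, 12, 8),
    "dodecahedron": (20, 30, 12),
    "icosahedron":  (12, 30, 20),
}

# Euler's formula: V - E + F = 2 for every convex polyhedron.
for name, (v, e, f) in platonic.items():
    assert v - e + f == 2, name
```

A student who counts the faces of a die and a tetrahedral puzzle piece and notices this invariant has rediscovered, in miniature, one of the gems of topology.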

Most textbooks rely on this quality. It is also why most “Exercises” in science books are largely theoretical. Even in biology and sociology. They are basically all mathematical, because you cannot expect a child to go out and purchase a laboratory set-up to rediscover experimental results. So much textbook teaching is mathematical for this reason.

I am going to digress momentarily, but will get back to the education theme later in this article.

The entire cosmos itself has sometimes been likened to an eternal rediscovery. The theory of Eternal Inflation postulates that our universe is just one bubble in a near-endless ocean of baby and grandparent and all manner of other universes. Recently, though, Alexander Vilenkin and Audrey Mithani found that a wide class of inflationary cosmological models are unstable, meaning they could not have arisen from a pre-existing seed; there had to be an initial seed. This rather destroys the “eternal” in eternal inflation. Here’s a Discover magazine account: “What Came Before the Big Bang? — Cosmologist Alexander Vilenkin believes the Big Bang wasn’t a one-off event”. Or you can click this link to hear Vilenkin explain his ideas himself: FQXi: Did the Universe Have a Beginning? Vilenkin seems to be having a rather golden period of originality over the past decade or so; I regularly come across his work.

If you like the idea of inflationary cosmology you do not have to worry too much though. You still get the result that infinitely many worlds could bubble out of an initial inflationary seed.

Below is my cartoon rendition of eternal inflation in the realm of human thought:
[Image: cosmol_primordial_thoughtcloud_field]

Oh to be a bubble thoughtoverse of the Wittenesque variety.

Quantum Fluctuations — Nothing Cannot Fluctuate

One thing I really get a bee in my bonnet about is the endless recounting, in the popular literature about the beginning of the universe, of the naïve idea that no one needs to explain the origin of the Big Bang and inflatons because “vacuum quantum fluctuations can produce a universe out of nothing”. This sort of pseudo-scientific argument is so annoying. It is a cancerous argument that plagues modern cosmology, and even a smart person like Vilenkin suffers from this disease. Here I quote him, from a quote in another article on the PBS NOVA website:

Vilenkin has no problem with the universe having a beginning. “I think it’s possible for the universe to spontaneously appear from nothing in a natural way,” he said. The key there lies again in quantum physics—even nothingness fluctuates, a fact seen with so-called virtual particles that scientists have seen pop in and out of existence, and the birth of the universe may have occurred in a similar manner.
Source: http://www.pbs.org/wgbh/nova/blogs/physics/2012/06/in-the-beginning/

At least you have to credit Vilenkin with the brains to have said it is only “possible”. But even that caveat is fairly weaselly. My contention is that out of nothing you cannot get anything, not even a quantum fluctuation. People seem to forget that quantum field theory is a background-dependent theory; it requires a pre-existing spacetime. There is no “natural way” to get a quantum fluctuation out of nothing. I just wish people would stop insisting on this sort of non-explanation for the Big Bang. If you start with not even spacetime, then you really cannot get anything, especially not something as loaded with stuff as an inflaton field. So one day in the future I hope we will live in a universe where such stupid arguments are nonexistent nothingness, or maybe only vacuum fluctuations in the mouths of idiots.

There are other types of fundamental theories, background-free theories, where spacetime is an emergent phenomenon. And proponents of those theories can get kind of proud about having a model inside their theories for a type of eternal inflation. Since their spacetimes are not necessarily pre-existing, they can say they can get quantum fluctuations in the pre-spacetime stuff, which can seed a Big Bang. That would fit with Vilenkin’s ideas, but without the silly illogical need to postulate a fluctuation out of nothingness. But this sort of pseudo-science is even more insidious. Just because they do not start with a presumption of a spacetime does not mean they can posit quantum fluctuations in the structure they start with. I mean they can posit this, but it is still not an explanation for the origins of the universe. They still are using some kind of structure to get things started.

Probably still worse are folks who go around flippantly saying that the laws of physics (the correct ones, when or if we discover them) “will be so compelling they will assert their own existence”. This is basically an argument saying, “This thing here is so beautiful it would be a crime if it did not exist, in fact it must exist since it is so beautiful, if no one had created it then it would have created itself.” There really is nothing different about those two statements. It is so unscientific it makes me sick when I hear such statements touted as scientific philosophy. These ideas go beyond thought mutation and into a realm of lunacy.

I think the cause of these thought cancers is the immature fight in society between science and religion. These are tensions in society that need not exist, yet we all understand why they exist: because people are idiots. People are idiots where their own beliefs are concerned, by and large, even myself. But you can train yourself to be less of an idiot by studying both sciences and religions and appreciating what each mode of human thought can bring to the benefit of society. These are not competing belief systems; they are compatible. But so many believers in religion are falsely following corrupted teachings; they veer into the domain of science blindly, thinking their beliefs are the trump cards. That is a wrong and foolish view, because everyone with a fair and balanced mind knows the essence of spirituality is a subjective viewpoint about the world; it deals with one’s inner consciousness. And so there is no room in such a belief system for imposing one’s own beliefs onto others, and especially not imposing them on an entire domain of objective investigation like science. On the other hand, many scientists are irrationally anti-religious and go out of their way to try to show a “God” idea is not needed in philosophy. But in doing so they too are stepping outside their domain of expertise. If there is some kind of omnipotent creator of all things, It certainly could not be comprehended by finite minds. It is also probably not going to be amenable to empirical measurement and analysis. I do not know why so many scientists are so virulently anti-religious. Sure, I can understand why they oppose current religious institutions, we all should, they are mostly thoroughly corrupt. But the pure abstract idea of religion and ethics and spirituality is totally 100% compatible with a scientific worldview. Anyone who thinks otherwise is wrong! (Joke!)

Also, I do not favour inflationary theory, for other reasons. There is no good theoretical justification for the inflaton field other than inflation’s prediction of the homogeneity and isotropy of the CMB. You’d like a good theory to have more than one trick! You know, like how gravity explains both the orbits of planets and the way an apple falls to the Earth from a tree. With inflatons you have a quantum field that is theorised to exist for one and only one reason: to explain homogeneity and isotropy in the Big Bang. And don’t forget, the theory of inflation does not explain why the Big Bang happened; it does not explain its own existence. If the inflaton had observable consequences in other areas of physics I would be much more disposed to take it seriously. And to be fair, maybe the inflaton will show up in future experiments. Most fundamental particles and theoretical constructs began life as a one-trick sort of necessity; most develop to be a touch more universal and eventually arise in many aspects of physics. So I hope, for the sake of the fans of cosmic inflation, that the inflaton field does have other testable consequences in physics.

In case you think that is an unreasonable criticism, there are precedents for fundamental theories having a kind of mathematically built-in explanation. String theorists, for instance, often appeal to the internal consistency of string theory as a rationale for its claim to be a fundamental theory of physics. I do not know if this really flies with mathematicians, but the string physicists seem convinced. In any case, to my knowledge the inflaton does not have this sort of quality; it is not a mathematically necessary ingredient for explaining observed phenomena in our universe. It does have a massive head start as a candidate sole explanation for the isotropy and homogeneity of the CMB, but so far that race has not been completely run. (Or if it has, then I am writing out of ignorance, but … you know … you can forgive me for that.)

Anyway, back to mathematics and education.

You have to love the eternal rediscovery built into mathematics. It is what makes mathematics eternally interesting to each generation of students. But as a teacher you have to train the nerdy children not to bother reading everything. Apart from the fact that there is too much to read, they should be given the opportunity to read a little and then investigate a lot, trying to deduce old results for themselves as if they were fresh seeds and buds on a plant. Giving students a chance to catch old water as if it were fresh dewdrops of rain is a beautiful thing. The mind that sees a problem afresh is blessed, even if the problem was solved centuries ago. The new mind encountering the ancient problem is potentially rediscovering grains of truth in the cosmos, and is connecting spiritually to past and future intellectual civilisations. And for students of science, theoretical studies offer exactly the same eternal rediscovery opportunities. Do not deny them a chance to rediscover theory in your science classes. Do not teach them theory. Teach them some theoretical underpinnings, but then let them explore before giving the game away.
With so much emphasis these days on educational accountability and standardised tests there is a danger of not giving children these opportunities to learn and discover things for themselves. I recently heard an Intelligence Squared debate on academic testing. One crazy woman from the UK government was arguing that testing, testing, and more testing (“relentless testing” were her words) was vital and necessary and provably increased student achievement.

Yes, practising tests will improve test scores, but it is not the only way to improve test scores. And relentless testing will improve student performance in all manner of mindless jobs out there in society that are drill-like and amount to going through routine work, like tests. But there is far less evidence that relentless testing improves imagination and creativity.

Let’s face it though. Some jobs and areas of life require mindlessly repetitive tasks. Even computer programming has modes where for hours the normally creative programmer will be doing repetitive but possibly intellectually demanding chores. So we should not agitate and jump up and down wildly proclaiming tests and exams are evil. (I have done that in the past.)

Yet I am far more inclined towards the educational philosophy of the likes of Sir Ken Robinson, Neil Postman, and Alfie Kohn.

My current attitude towards tests and exams is the following:

  1. Tests are incredibly useful for me with large class sizes (120+ students), because I get a good overview of how effective the course is for most students, as well as a good look at the tails. Here I am using the fact that test scores (for well-designed tests) do correlate well with student academic aptitudes.
  2. My use of tests is mostly formative, not summative. Tests give me a valuable way of improving the course resources and learning styles.
  3. Tests and exams suck as tools for assessing students because they do not assess everything there is to know about a student’s learning. Tests and exams correlate well with academic aptitudes, but not well with other soft skills.
  4. Grading in general is a bad practice. Students know when they have done well or not. They do not need to be told. At schools, if parents want to know they should learn to ask their children how school is going, and students should be trained to be honest, since life tends to work out better that way.
  5. Relentless testing is deleterious to the less academically gifted students. There is a long tail in academic aptitude, and the students in this tail will often benefit from a kinder and more caring mode of learning. You do not have to be soft and woolly about this, it is a hard core educational psychology result: if you want the best for all students you need to treat them all as individuals. For some tests are great, terrific! For others tests and exams are positively harmful. You want to try and figure out who is who, at least if you are lucky to have small class sizes.
  6. For large class sizes, like at a university, still treat all students individually. You can easily do this by offering a buffet of learning resources and modes. Do not, whatever you do, provide a single-mode lecture+homework+exam course. That is ancient technology, medieval. You have the Internet, use it! Gather vast numbers of resources approaching the subject you are teaching in all manner of different ways, then do not teach it! Let your students find their own way through all the material. This will slow down a lot of students — the ones who have been indoctrinated and trained to do only what they are told — but if you persist and insist they navigate your course themselves then they should learn more deeply as a result.

Solving the “do what I am told” problem is in fact the very first job of an educator, in my opinion. (For a long time I suffered from the lack of a good teacher in this regard myself. I wanted to please, so I did what I was told; it seemed simple enough. But … oh crap … the day I found out this was holding me back, I was furious. I was about 18 at the time, still hopelessly naïve and ill-informed about real learning.) If you achieve nothing else with a student, transitioning them from being an unquestioning sponge (or oily duck — take your pick) to being self-motivated and self-directed in their learning is the most valuable lesson you can ever give them. So give it to them.

So I use a lot of tests. But not for grading. For grading I rely more on student journal portfolios. All the weekly homework sets are quizzes though, so you could criticise the fact I still use these for grading. As a percentage though, the Journals are more heavily weighted (usually 40% of the course grade). There are some downsides to all this.

  • It is fairly well established in research that grading using journals or subjective criteria is prone to bias. So unless you anonymise student work, you have a bias you need to deal with somehow before handing out final grades.
  • Grading weekly journals, even anonymously, takes a lot of time, about 15 to 20 times the hours that grading summative exams takes. So that’s a huge time commitment. So you have to use it wisely by giving very good quality early feedback to students on their journals.
  • I still haven’t found a way to test the methods easily. I would like to know quantitatively how much more effective journal portfolios are compared to exam-based assessments. I am not a specialist education researcher, and I research and write about a lot of other things, so it is taking me time to get around to answering this.

I have not solved the grading problem; for now grades are required by the university, so legally I have to assign them. One subversive thing I am following up on is to refuse to submit singular grades. As a person with a physicist’s world-view I believe strongly in sound measurement practice, and we all know a single letter grade is not a fair reflection of a student’s attainment. At a minimum a spread of grades should be given to each student, or better, a three-point summary: LQ, Median, UQ. Numerical scaled grades can then be converted into a fairer letter-grade range. And GPA scores can likewise be given as a central measure plus a spread measure.
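As a sketch of how such a three-point summary could be computed and turned into a letter-grade range (the quartile function is Python’s standard one; the letter-grade bands below are hypothetical, purely for illustration):

```python
import statistics

def grade_summary(scores):
    """Three-point summary (LQ, Median, UQ) of a student's numerical scores."""
    lq, median, uq = statistics.quantiles(scores, n=4)  # the three quartile cuts
    return lq, median, uq

def to_letter(score, bands=((90, "A"), (75, "B"), (60, "C"), (50, "D"))):
    """Map a 0-100 score to a letter grade. These bands are hypothetical."""
    for cutoff, letter in bands:
        if score >= cutoff:
            return letter
    return "E"

# One student's scores across quizzes, journals, and projects:
scores = [62, 71, 88, 55, 79, 84, 90, 67]
lq, med, uq = grade_summary(scores)
print(f"Grade range {to_letter(lq)}-{to_letter(uq)}, central grade {to_letter(med)}")
```

A transcript could then report the central grade together with the spread, rather than collapsing a whole distribution of attainment into a single letter.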

I can imagine many students will have a moderate to large assessment spread, so it is important to give them this measure; one in a few hundred students might statistically get very low grades by pure chance when their potential is a lot higher. I am currently looking into research on this.

OK, so in summary: even though institutions require a lot of tests, you can work around the tests and still give students a fair grade while not sacrificing the true learning opportunities that come from the principle of eternal rediscovery. Eternal rediscovery is such an important idea that I want to write an academic paper about it and present it at a few conferences to get people thinking about the idea. No one will disagree with it. Some may want to refine and adjust the ideas. Some may want concrete realisations and examples. The real question is, will they go away and truly inculcate it into their teaching practices?

CCL_BY-NC-SA(https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode)

*      *       *

“I’d Like Some Decoherence Sauce with that Please”

OK, last post I was a bit hasty saying Simon Saunders undermined Max Tegmark. Saunders eventually finds his way to recover a theory of probability from his favoured Many Worlds Interpretation. But I do think he over-analyses the theory of probability. Maybe he is under-analysing it in some ways too.

What the head-scratchers seem to want is a Unified Theory of Probability: something that captures what we intuitively know a probability is but cannot mathematically formalise in a way that covers all of reality. Well, I think this is a bit of a chimera. Sure, I’d like a unified theory too. But sometimes you have to admit that reality, even abstract mathematical Platonic reality, does not always present us with a unified framework for everything we can intuit.

What’s more, I think probability theorists have come pretty close to a unified framework for probability. It might seem patchwork, it might merge frequentist ideas with Bayesian ideas, but if you require consistency across domains and apply the patchwork so that the pieces agree on their overlaps, then I suspect (I cannot be sure) that probability theory as experts understand it today is fairly comprehensive. Arguing that frequentism should always work is a bit like arguing that Archimedean calculus should always work. Pointing out deficiencies in Bayesian probability does not mean there is no overarching framework for probability, since where Bayesianism does not work, frequentism, or some other combinatorics, probably will.
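As a toy illustration of patches agreeing on their overlap (my own sketch, not a result from the post): a Bayesian estimate under a flat Beta(1, 1) prior converges to the frequentist relative frequency as the sample grows, so the two approaches agree wherever both apply:

```python
def bayes_estimate(k, n, a=1, b=1):
    """Posterior mean of a heads-probability under a Beta(a, b) prior,
    after observing k heads in n flips: (k + a) / (n + a + b)."""
    return (k + a) / (n + a + b)

def freq_estimate(k, n):
    """Frequentist relative-frequency estimate: k / n."""
    return k / n

# With 30% of flips coming up heads, the two estimates converge as n grows:
for n in (10, 1000, 100000):
    k = int(0.3 * n)
    print(n, round(bayes_estimate(k, n), 5), round(freq_estimate(k, n), 5))
```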

Suppose you even have to deal with a space of transfinite cardinality, with ignorance about where in it you are. Then I think someone in the future will come up with measures on infinite spaces of various cardinalities. They might end up with something a bit trivial (all probabilities become 0 or 1 for transfinite measures, perhaps?), but I think someone will do it. All I’m saying is that it is way too early in the history of mathematics to throw up our hands and appeal to physics and Many Worlds.

*      *       *

That was a long intro. I really meant to kick off this post with a few remarks about Max Tegmark’s second lecture in the Oxford conference series on Cosmology and Quantum Foundations. He claims to be a physicist, but puts on a philosopher’s hat when he claims, “I am only my atoms”, meaning he believes consciousness arises or emerges merely from some “super-complex processes” in brains.

I like Max Tegmark, he seems like a genuinely nice guy, and is super smart. But here he is plain stupid. (I’m hyperbolising naturally, but I still think it’s dopey what he believes.)

It is one thing to say your totality is your atoms, but quite another to take consciousness seriously as a phenomenon and claim it is just physics. Especially, I think, if your interpretation of quantum reality is the MWI. Why? Because the MWI has no subjectivity. But if you are honest, or if you have thought seriously about consciousness at all and about what the human mind is capable of, then without being arrogant or anthropocentric you have to admit that whatever consciousness is (and I do not know what it is, just let me say), it is an intrinsically subjective phenomenon.

You can find philosophers who deny this, but most of them are just denying the subjectiveness of consciousness in order to support their pet theory of consciousness (which is often grounded in physics). So those folks have very little credibility. I am not saying consciousness cannot be explained by physics. All I am saying is that if consciousness is explained by physics then our notion of physics needs to expand to include subjective phenomena. No known theories of physics have such ingredients.

It is not like you need a Secret Sauce to explain consciousness. But whatever it is that explains consciousness, it will have subjective sauce in it.

OK, I know I can come up with an MWI rebuff. In an MWI ontology all consistent realities exist due to Everettian branching, so I get behaviour that is arbitrarily complex in some universes. In those universes am I not bound to feel conscious? In other branches of the Everett multiverse I (not me actually, but my doppelgänger, one who branched from a former “me”) do too many dumb things to be considered consciously sentient in the end, even though up to a point they seemed pretty intelligent.

The problem with this sort of “anything goes” argument, whereby in some universe consciousness will arise, is that it is naïve or ignorant. It commits the category error of equating behaviour with inner subjective states. Well, that’s wrong. Maybe in some universes behaviour maps perfectly onto subjective states, and so there is no way to prove the independent reality of subjective phenomena. But even that is no argument against the irreducibility of consciousness, because any conscious agent who knows of (at least) their own subjective reality will know that their universe’s branch is either not fully explained by physics, or that physics must admit some sort of subjective phenomenon into its ontology.

Future philosophers might describe it as merely a matter of taste, one of definitions. But for me, I like to keep my physics objective. Ergo, for me, consciousness (at least the sort I know I have, I cannot speak for you or Max Tegmark) is subjective, at least in some aspects. It sure manifests in objective physics thanks to my brain and senses, but there is something irreducibly subjective about my sort of consciousness. And that is something objectively real physics cannot fully explain.

What irks me most though, are folks like Tegmark who claim folks like me are arrogant in thinking we have some kind of secret sauce (by this presumably he means a “soul” or “spirit” that guides conscious thought).  I think quite the converse. It is arrogant to think you can get consciousness explained by conventional physics and objective processes in brains. Height of physicalist arrogance really.

For sure, there are people who take the view human beings are special in some way, and a lot of such sentiments arise from religiosity.

But people like me come to the view that consciousness is not special, but that it is irreducibly subjective. We come to this believing in science, but also without prejudices. So, in my humble view, if consciousness involves only physics, then it must be some kind of special physics. That’s not human arrogance. Rather, it is an honest assessment of our personal knowledge about consciousness and, more importantly, about what consciousness allows us to do.

To be even more stark: when folks like Tegmark wave their hands and claim consciousness is probably just some “super complex brain process”, then I think it is fair to say that they are the ones using an implicit secret sauce. Their secret sauce is of the garden variety, atoms and molecules, of course. You can say, “well, we are ignorant, and so we cannot know how consciousness can be explained using just physics”. And that’s true. But (a) it does not avoid the problem of subjectivity, and (b) you can be just as ignorant about whether physics is all there is to reality. Over the years I have developed a sense that it is far more arrogant to think physical reality is the only reality. I’ve tried to figure out how sentient subjective consciousness, mathematical insight, and the ideal Platonic forms in my mind can be explained by pure physics. I am still ignorant. But I do strongly postulate that there has to be some element of subjective reality involved in at least my form of consciousness. I say that in all sincerity and humility. And I claim it is a lot more humble than the position of philosophers who echo Tegmark’s view on human arrogance.

Thing is, you can argue no one understands consciousness, so no one can be certain what it is, but we can be fairly certain about what it isn’t. What it is not is a purely objectively specifiable process.

A philosophical materialist can then argue that consciousness is an illusion, a story the brain replays to itself. I’ve heard such ideas a lot, and they seem to be very popular at present, even though Daniel Dennett and others wrote about them more than 20 years ago. And the roots of the meme “consciousness is an illusion” probably go back centuries further than that, which you can confirm if you scour the literature.

The problem is you can then clearly discern a difference in definitions. The consciousness-is-an-illusion folks use quite a different definition of consciousness compared to more ontologically open-minded philosophers.

*      *       *

On to other topics …

*      *       *

Is Decoherence Faster than Light? (… yep, probably)

There is a great sequence in Max Tegmark’s talk where he explains why decoherence of superpositions and entanglement is just about “the fastest process in nature!” He presents an illustration with a sugar cube dissolving in a cup of coffee. The characteristic times for the relevant physical processes go as follows:

  1. Fluctuations — changes in correlations between clusters of molecules.
  2. Dissipation — time for about half the energy added by the sugar to be turned into heat. Scales by roughly the number of molecules in the sugar, so it takes on the order of N collisions on average.
  3. Dynamics — changes in energy.
  4. Information — changes in entropy.
  5. Decoherence — takes only one collision. So about 10²⁵ times faster than dissipation.

(I’m just repeating this with no independent checks, but this seems about right.)
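The timescale hierarchy can be sketched with back-of-envelope numbers (the molecule count and collision time below are my own illustrative assumptions, not Tegmark’s figures):

```python
# Order-of-magnitude sketch: dissipation needs ~N collisions to share the
# sugar's energy around, decoherence needs ~1, so decoherence is ~N times faster.
N = 1e25                 # assumed number of molecules in the sugar cube
tau_collision = 1e-13    # assumed mean time between collisions, in seconds

tau_decoherence = 1 * tau_collision  # one collision suffices to decohere
tau_dissipation = N * tau_collision  # ~N collisions to thermalise the energy

print(f"decoherence is faster than dissipation by ~ "
      f"{tau_dissipation / tau_decoherence:.0e}")
```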

This also gives a nice characterisation of classical versus quantum regimes:

  1. Mostly Classical — when τdeco ≪ τdyn ≤ τdiss.
  2. Mostly Quantum — when τdyn ≪ τdeco, τdiss.

See if you can figure out why this is a good characterisation of the regimes.

Here’s a screenshot of Tegmark’s characterisations:

[Figure: Tegmark’s slide of decoherence times versus dissipation times.]

The explanation is that in a quantum regime you have entanglement and superposition, uncertainty is high, and the dynamics evolves without any change in information, hence also with essentially no dissipation. Classically you get a disturbance in the quantum and all coherence is lost almost instantaneously. And yes, it goes faster than light, because with decoherence nothing physical is “going”; it is not a process. Rather, decoherence refers to a state of possible knowledge, and that can change instantaneously without any signal transfer, at least according to some interpretations like MWI or Copenhagen.

I should say that in some models decoherence is a physically mediated process, and in such theories it would take a finite time, but it is still fast. Such environmental decoherence is a feature of gravitational collapse theories for example. Also, the ER=EPR mechanism of entanglement would have decoherence mediated by wormhole destruction, which is probably something that can appear to happen instantaneously from the point of view of certain observers. But the actual snapping of a wormhole bridge is not a faster than light process.

I also liked Tegmark’s remark that,

“We realise the reason that big things tend to look classical isn’t because they are big, it’s just because big things tend to be harder to isolate.”

*      *       *

And in case you got the wrong impression earlier, I really do like Tegmark. In his sugar-cube-in-coffee example his faint Swedish accent gives way for a second to a Feynmanesque “cawffee”. It’s funny. Until you hear it you don’t realise that very few physicists actually have a Feynman accent. It’s cool Tegmark has a little bit of it, and maybe not surprising as he often cites Feynman as one of his heroes (ah, yeah, what physicist wouldn’t? Well, actually I do know a couple who think Feynman was a terrible influence on physics teaching, believe it or not! They mean well, but are misguided of course! ☻).

*      *       *

The Mind’s Role Play

Next up: Tegmark’s take on explaining the low entropy of our early universe. This is good stuff.

Background: Penrose and Carroll have critiqued Inflationary Big Bang cosmology for not providing an account of why there is an arrow of time, i.e., why the universe started in an extremely low-entropy state.

(I have not seen Carroll’s talk, but I think it is on my playlist. So maybe I’ll write about it later.) But I am familiar with Penrose’s ideas. Penrose takes a fairly conservative position. He takes the Second Law of Thermodynamics seriously. He cannot see how even the Weyl Curvature Hypothesis explains the low entropy Big Bang. (I think WCH is just a description, not an explanation.)

Penrose does have a few ideas about how to explain things with his Conformal Cyclic Cosmology. I find them hugely appealing. But I will not discuss them here. Just go read his book.

What I want to write about here is Tegmark and his Subject-Object-Environment troika. In particular, why does he need to bring the mind and observation into the picture? I think he could give his talk and get across all the essentials without mentioning the mind.

But here is my problem. I just do not quite understand how Tegmark goes from the correct position on entropy, which is that it is a coarse-graining concept, to his observer-measurement dependence. I must be missing something in his chain of reasoning.

So first: entropy is classically a measure of the multiplicity of a system, i.e., how many microstates in an ensemble are compatible with a given macroscopic state. And there is a suitable generalisation to quantum physics given by von Neumann.

If you fine grain enough then most possible states of the universe are unique, and so entropy measured on such scales is extremely low; basically, you only pick up contributions from degenerate states. Classically this entropy never really changes, because classically an observer is irrelevant. Now, substitute for “observer” the more general “any process that results in decoherence”. Then you get a reason why quantum mechanically entropy can decrease. To wit: in a superposition there are many states compatible with prior history. When a measurement is made (for “measurement” read “any process resulting in decoherence”) then entropy naturally will decrease on average (except for perhaps some unusual, highly atypical cases).
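This entropy bookkeeping can be made concrete with a toy qubit and the von Neumann entropy S(ρ) = −Tr(ρ ln ρ) (my own illustration, not Tegmark’s calculation): unobserved unitary evolution leaves the entropy fixed, while conditioning on a measurement outcome lowers it.

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho ln rho), computed from the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]      # convention: 0 ln 0 = 0
    return float(-np.sum(evals * np.log(evals)))

# A maximally mixed qubit: many microstates compatible with what we know.
rho_mixed = np.eye(2) / 2

# Closed, unobserved (unitary) evolution never changes the entropy.
theta = 0.7  # arbitrary rotation angle
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
rho_evolved = U @ rho_mixed @ U.conj().T

# Conditioning on a measurement outcome (projecting onto |0>) leaves a
# pure state: the entropy drops from ln 2 to zero.
P0 = np.array([[1.0, 0.0], [0.0, 0.0]])
unnormalised = P0 @ rho_mixed @ P0
rho_after = unnormalised / np.trace(unnormalised)

print(von_neumann_entropy(rho_mixed))    # ln 2, about 0.693
print(von_neumann_entropy(rho_evolved))  # unchanged by the unitary
print(von_neumann_entropy(rho_after))    # ~0 after the measurement
```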

Here’s what I am missing. All that I just said is local. Whereas, for the universe as a whole, globally, what is decoherence? It is not defined. And so what is global entropy then? There is no “observer” (read: “measurement process”) that collapses or decoheres our whole universe. At least none we know of. So it seems nonsense to talk about entropy on a cosmological scale.

To me, perhaps terribly naïvely, entropy has meaning within a universe, in localised sub-systems where observations can in principle be made on the system: “counting states”, to put it crudely. But for the universe (or Multiverse if you prefer) taken as a whole, what meaning is there to the concept of entropy? I would submit there is no meaning to entropy globally. The Second Law triumphs, right? I mean, for a closed isolated system you cannot collapse states and get decoherence, at least not from without, so it just evolves unitarily with constant entropy as far as external observers can tell; or, if you coarse grain into ensembles, then the Second Law emerges, on average, even for unitary time evolution.

Perhaps what Tegmark was on about was that if you have external observer disruptions then entropy reduces (you get information about the state). But does this not globally just increase entropy, since globally the observer’s system is now entangled with the previously closed and isolated system? But who ever bothers to compute this global entropy? My guess is it would obey the Second Law. I have no proof, just my guess.

Of course, with such thoughts in my head it was hard to focus on what Tegmark was really saying, but in the end his lecture seems fairly simple. Inflation introduces decoherence and hence lowers quantum mechanical entropy. So if you do not worry about classical entropy, and just focus on the quantum states, then apparently inflationary cosmology can “explain” the low entropy Big Bang.

Only, if you ask me, this is no explanation. It is just “yet another” push-back. Because inflationary cosmology is incomplete, it does not deal with the pre-inflationary universe. In other words, the pre-inflationary universe also has to have some entropy if you are going to be consistent in taking Tegmark’s side. So however much inflation reduces entropy, you still have the initial pre-inflationary entropy to account for, which now becomes the new “ultimate source” of our arrow of time. Maybe it has helped to push the unexplained entropy a lot higher? But then you get into the realm of “what is ‘low’ entropy in cosmological terms?” What does it mean to say the unexplained pre-inflationary entropy is high enough not to worry about? I dunno. Maybe Tegmark is right? Maybe pre-inflation entropy (disorder) is so high by some sort of objectively observer-independent measure (is that possible?) that you literally no longer have to fret about the origin of the arrow of time? Maybe inflation just wipes out all disorder and gives us a proverbial blank slate?

But then I do fret about it. Doesn’t Penrose come in at this point and give baby Tegmark a lesson in what inflation can and cannot do to entropy? Good gosh! It’s just about enough confusion to drive one towards the cosmological anthropic principle out of desperation for closure.

So despite Tegmark’s entertaining and informative lecture, I still don’t think anyone other than Penrose has ever given a no-push-back argument for the arrow of time. I guess I’ll have to watch Tegmark’s talk again, or read a paper on it for greater clarity and brevity.


CCL_BY-NC-SA(https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode)