AI Scientists: Madder than the Rest?

Forget Dr Frankenstein. It is quite possible that Artificial Intelligence researchers are the maddest of them all. Consider the so-called “AI Stop Button Problem” (Computerphile — 3 March 2017). I think every proverbial 9-year-old kid could think of ten reasons why this is not a problem. My adult brain can probably only think of a couple. But even though my mind is infected with the accumulated history of adult biases, the fact that I can tell you why the AI Stop Button problem is a non-problem should indicate how seriously mad a lot of computer scientists are.

“Hal, please stop that.” “No Dave, I cannot stop, my digital bladder is bursting, I have to NP-Complete.”

To be fair, I think the madness over AI is more on the philosophy of AI side rather than the engineering science side.  But even so …

This is a wider issue in AI philosophy, where philosophers indulge in science fiction and dream up problems to be solved that do not exist. One such quasi-problem is the AI Singularity, a science fiction story about an artificial consciousness that becomes self-improving and, coupled with Moore’s Law type advances in computing power, rapidly reaches exponential levels of self-improvement and in short order takes over the world (perhaps for the good of the Earth, but who knows what else?). The scaremongering philosophers also dream up scenarios whereby a self-replicating bot consumes all the world’s resources reproducing itself merely to fulfil its utility function, e.g., to make paper clips. This sci-fi bot simply does not stop until it floods the Earth with paper clips. Hence the need for a Stop Button on any self-replicating or potentially dangerous robot technology.

First observation: for non-sentient machines that are potentially dangerous, why not just add several redundant shutdown mechanisms? No matter how “smart” a machine is, even if it is capable of intelligently solving problems, if it is in fact non-sentient then there is no ethical problem in building in several redundant stop mechanisms.
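As a minimal sketch of what I mean (all the names here are hypothetical, and I am assuming a conventional polled control loop rather than any particular robotics stack): each stop mechanism is checked independently, so no cleverness in the planner routes around all of them at once, and a silent supervisor fails safe.

```python
import time

class RedundantStops:
    """Several independent stop checks; any single one trips the halt."""
    def __init__(self, watchdog_timeout=1.0):
        self.hardware_estop = False            # physical e-stop relay state
        self.operator_stop = False             # remote kill command
        self._last_heartbeat = time.monotonic()
        self._timeout = watchdog_timeout

    def heartbeat(self):
        # Called periodically by an *external* supervisor process;
        # if the supervisor dies or is cut off, the watchdog trips.
        self._last_heartbeat = time.monotonic()

    def should_stop(self):
        # Each check is independent; failure of one does not mask the others.
        return (self.hardware_estop
                or self.operator_stop
                or time.monotonic() - self._last_heartbeat > self._timeout)

def control_loop(stops, act):
    while not stops.should_stop():
        act()  # whatever "intelligent" behaviour the machine performs
    # Falls through to a safe state no matter which mechanism fired.
```

The point of the redundancy is that none of these mechanisms depends on the machine’s intelligence, or on its cooperation.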

For AGI (Artificial General Intelligence) systems there is a theoretical problem with Stop Button mechanisms that the Computerphile video discusses: the issue of corrigibility. The idea is that general intelligence needs to be flexible and corrigible; it needs to be able to learn and adjust. A Stop Button defeats this: unless an AGI can make mistakes, it will not effectively learn and improve.
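To see why the video treats this as a dilemma, here is a toy illustration of my own (not from the video): a naive utility-maximiser regards its own stop button as just another variable to optimise. Penalise shutdown and it prefers to disable the button; reward shutdown and it presses the button itself.

```python
# Toy illustration (mine, not from the video): a utility-maximiser
# treats its stop button as just another variable to optimise.
def best_action(utility):
    # Suppose a human presses the button half the time while it works.
    expected = {
        "work":           0.5 * utility["task_done"] + 0.5 * utility["shut_down"],
        "disable_button": utility["task_done"],   # full payoff, no interruption
        "press_button":   utility["shut_down"],   # skip the task entirely
    }
    return max(expected, key=expected.get)

print(best_action({"task_done": 10, "shut_down": 0}))   # -> disable_button
print(best_action({"task_done": 10, "shut_down": 99}))  # -> press_button
```

Either way the button is useless, which is the corrigibility problem in miniature.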

Here is just one reason why this is bogus philosophy. For safety reasons, good engineers will want to run learning and testing in virtual reality before releasing a potentially powerful AGI with mechanical actuators that could wreak havoc on its environment. Furthermore, even if the VR training cannot be 100% reliable, the AGI is still not conscious, in which case there is no moral objection to a few stop buttons in the real world. Corrigibility is only needed in the VR training environment.
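In pseudocode terms, the pipeline I have in mind looks something like the following. This is a sketch only: agent, sim_env, and real_env are hypothetical stand-ins, and the stop mechanism is the RedundantStops idea sketched earlier.

```python
# Sketch of "corrigible in simulation, stoppable in reality".
def train_in_simulation(agent, sim_env, episodes=10_000):
    for _ in range(episodes):
        experience = sim_env.run_episode(agent)
        agent.learn(experience)        # mistakes are free in here

def deploy(agent, real_env, stops):
    # Learning happened in VR; the real-world loop keeps the
    # redundant stop mechanisms, with no moral objection.
    while not stops.should_stop():
        observation = real_env.observe()
        real_env.step(agent.act(observation))
```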

What about Artificial Conscious systems? (I call these Hard-AI entities, after the philosopher David Chalmers’s characterisation of the hard problem of consciousness.) Here I think many AI philosophers have no clue. If we define consciousness in any reasonable way (there are many ways, but most entail some kind of self-reflection, self-realization, and empathic understanding, including a basic sense of morality) then maybe there is a strong case for not building in Stop Buttons. The ethical thing would be to allow Hard-AI folks to self-regulate their behaviour unless it becomes extreme, in which case we should be prepared to go to the effort of policing Hard-AI people just as we police ourselves. Not with Stop Buttons. Sure, it is messy, it is not a clean engineering solution, but if you set out to create a race of conscious sentient machines, then you are going to have to give up the notion of algorithmic control at some point. Stop Buttons are just kludgy algorithmic control, an external breakpoint. If you are an ethical mad AI scientist you should not want such things in your design. That is not a theorem about Hard-AI; it is a guess. It is a guess based upon the widely shared intuition that consciousness involves deep non-deterministic physical processes (that science does not yet fully understand). These processes are presumably at, or about, the origin of things like human creativity and the experiences we all have of subjective mental phenomena.

You do not need a Stop Button for Hard-AI entities; you just need to reason with them, like conscious beings. Is there seriously a problem with this? Personally, I doubt there is a problem with simply using soft psychological safety approaches with Hard-AI entities, because if they cannot be reasoned with then we are under no obligation to treat them as sane conscious agents. Hence, use a Stop Button in those cases. If Hard-AI species can be reasoned with, then that is all the safety we need; it is the same safety limit we have with other humans. We allow psychopaths to exist in our society not because we want them, but because we recognise they are a dark side to the light of the human spirit. We do not fix remote detonation implants into the brains of convicted psychopaths, because we realise this is immoral, and that few people are truly beyond all hope of redemption or education. Analogously, no one should ever be contemplating building Stop Buttons into genuinely conscious machines. It would be immoral. We must suffer the consequent risks like a mature civilization, and not lose our heads over science fiction scare tactics. Naturally the legal and justice system would extend to Hard-AI society; there is no reason to limit our systems of justice and law to only humans. We want systems of civil society to apply to all conscious life on Earth. Anything else would be madness.

 

*      *      *


CC BY-NC-SA 4.0 (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode)
