How safe can artificial intelligence be?

Image caption: RoboSimian is much like a mechanical monkey that can morph between different postures

If Hollywood movies are your only guide to Artificial Intelligence, we face a terrifying future in which machines become so clever that they dominate or even destroy us.

And influential figures have added fuel to the fire: Stephen Hawking has warned that AI could spell the end of the human race, while entrepreneur Elon Musk has said it is "like summoning the demon".

So, does this make conquest by computer inevitable?

With such a heated subject, it's worth trying to disentangle what's plausible from what's too far-fetched to worry about.

For a start, we live with AI already. The calculations behind your Google searches or your browsing on Amazon are not just ticking over - the software is constantly learning how to respond more rapidly and usefully.

This is remarkable but is described as "narrow" or "weak" AI because it can only work within the guidelines it's been given by its human inventors, a crucial limitation.

By contrast, "general" or "strong" AI - which does not exist yet - implies a more assertive ability to do things that go beyond the original human intentions: if not to "think", then at least to improvise.

Huge obstacles stand in the way of getting there, either by mimicking how a human brain works or building sufficient processing power from scratch, let alone creating a robot with its own ideas and agendas.

For a reality check, I visited Nasa's Jet Propulsion Laboratory (JPL) in Pasadena, California, to see engineers working on some of the most capable robots in the world.

They laughed at the notion of a robot army someday taking over. "I am not concerned about intelligent machines," said project leader Brett Kennedy.

His team's RoboSimian is an unnerving version of a mechanical monkey that can morph between different postures so it can either stand or crawl or roll along on wheels.

Designed to venture into disaster zones too dangerous for people to enter, such as collapsed buildings or ruined nuclear reactors, it has two computers on board, one to govern its sensors, the other to handle movements. Able to carry out tasks like driving a car and turning off a large valve, it came a creditable fifth in the Pentagon's recent Robotics Challenge.

But RoboSimian's actual intelligence is rudimentary. I watched it being instructed to open a door: it advanced in the right direction and then judged how far its arm needed to move to push the handle. Even so, the machine has to be given very specific parameters.

As the robot hummed beside us, Brett Kennedy said: "For the foreseeable future I am not concerned nor do I expect to see a robot as intelligent as a human. I have first-hand knowledge of how hard it is for us to make a robot that does much of anything."

To anyone worried about AI, this would be reassuring, and is backed up by one of Britain's leading figures in AI, Prof Alan Winfield of the Bristol Robotics Lab.

He has consistently offered a voice of calm, telling me that "fears of future super intelligence - robots taking over the world - are greatly exaggerated".

He concedes that innovations should be carefully handled - and he was among 1,000 scientists and engineers who signed an appeal for a ban on AI in weaponry.

Prof Winfield said: "Robots and intelligent systems must be engineered to very high standards of safety for exactly the same reasons that we need our washing machines, cars and airplanes to be safe."

But predicting the future pace of technology is impossible, as is being certain about whether every researcher in every part of the world will take a responsible approach - and therein lies the threat.

The most momentous milestone - human-machine parity - is known as Artificial General Intelligence, and academics are trying to assess when it might arrive and what it would mean.

One is Prof Nick Bostrom of Oxford University's Future of Humanity Institute. His recent book, Superintelligence, has become one of the definitive texts laying out very clearly why we need to worry.

He quotes recent surveys of experts in the field. One suggests that there's a 50% chance that computers could reach human-level intelligence as soon as 2050 - just 35 years away.

And looking further ahead, the same survey says there's a 90% chance of machine-human parity by 2075.

Prof Bostrom describes himself as a supporter of AI - because it could help tackle climate change, energy supply and the development of new medicines - but he says it has implications that are not properly understood.

"You have to think of AI not as just one more cool gadget or one little thing that will improve the bottom line of some corporation but really as a fundamental game changer for humanity - the last invention that human intelligence will ever need to make, the beginning of the machine intelligence era."

Image caption: Some researchers fear a creeping takeover as we gradually hand over more responsibilities to technology

He conjures up a compelling image of mankind behaving like a curious child who has picked up an unexploded bomb without realising the dangers.

"Maybe it is decades away but we are just as immature and naïve as this child. We really don't realise the power of this thing we are creating.

"That's the situation we are in as a species."

Prof Bostrom is now receiving funding from Elon Musk to explore these issues, and the aim is to develop a shared approach to safety.

So what about a scenario in which the technology proves unstoppable, but the most frightening outcomes - robot destroyers - are somehow evaded because the right steps are taken in advance?

Another quieter, less obvious form of takeover may still be possible. In his book, Humans Need Not Apply, Prof Jerry Kaplan of Stanford University outlines how what starts with Amazon building up a picture of what you're likely to buy soon multiplies, "silently and unnoticed", until you are surrounded by Amazons in all aspects of your life.

"As we learn to trust these systems to transport us, introduce us to potential mates, customise our news, protect our property, monitor our environment, grow, prepare and serve our food, teach our children, and care for our elderly, it will be easy to miss the bigger picture."

Ultimately, there are risks, no doubt. The question is whether the right safeguards can be built in, and soon enough.