Stephen Hawking - will AI kill or save humankind?
Two years ago Stephen Hawking told the BBC that the development of full artificial intelligence could spell the end of the human race.
His was not the only voice warning of the dangers of AI - Elon Musk, Bill Gates and Steve Wozniak also expressed their concerns about where the technology was heading - though Professor Hawking's was the most apocalyptic vision of a world where robots decide they don't need us any more.
What all of these prophets of AI doom wanted to do was to get the world thinking about where the science was heading - and make sure other voices joined the scientists in that debate.
That they have achieved that aim was evident on Wednesday night at an event in Cambridge marking the opening of the Centre for the Future of Intelligence, designed to do some of that thinking about the implications of AI.
And Professor Hawking was there to help launch the centre. "I'm glad someone was listening," he told the audience.
In a short speech, he outlined the potential and the pitfalls of the technology in his usual vivid language. He reviewed the recent rapid progress in areas like self-driving cars and the triumph of Google's DeepMind in the game of Go - and predicted further advances.
"I believe there is no deep difference between what can be achieved by a biological brain and what can be achieved by a computer. It therefore follows that computers can, in theory, emulate human intelligence — and exceed it."
That, he said, could lead to the eradication of disease and poverty and the conquest of climate change. But it could also bring us all sorts of things we didn't like - autonomous weapons, economic disruption and machines that developed a will of their own, in conflict with humanity.
"In short, the rise of powerful AI will be either the best, or the worst thing, ever to happen to humanity. We do not yet know which."
So, an easy enough mission for the Centre for the Future of Intelligence - just find out whether AI is going to kill us or not.
Actually the multi-disciplinary centre, which brings together philosophers, psychologists, lawyers and computer scientists, will have a rather more practical programme of research.
Long before the robots decide whether we are surplus to requirements, we are for instance going to need to think about issues such as whether autonomous vehicles should be programmed to protect pedestrians or passengers.
Another speaker at the event was Professor Maggie Boden, a major figure in artificial intelligence research for more than 50 years.
She told me she had long seen the need for the debate we are having now - but she was not worried about our imminent extinction, and was rather less convinced than Professor Hawking that we were heading into the AI future at breakneck speed.
Her concern was with the impact automation is having on elderly people right now - in Japan, at least. She pointed to the enthusiasm there for the use of robots in the care of the elderly and sick, and said society would have to ask whether this was dehumanising. "I'm scared of that," she said.
After decades of research into AI, Professor Boden still does not see robots replacing humans in functions which require empathy and emotional intelligence. Artificial intelligence could soon offer governments the chance to cut growing bills for social care - but at a cost for those in need of help.
That is just one of the issues which will now be addressed by the Centre for the Future of Intelligence - and rather more urgent than the threat from some future Terminator.