Out-of-control AI will not kill us, believes Microsoft Research chief
A Microsoft Research chief has said he thinks artificial intelligence systems could achieve consciousness, but has played down the threat to human life.
Eric Horvitz's position contrasts with that of several other leading thinkers.
Last December, Prof Stephen Hawking told the BBC that such machines could "spell the end of the human race".
Mr Horvitz also revealed that "over a quarter of all attention and resources" at his research unit were now focused on AI-related activities.
"There have been concerns about the long-term prospect that we lose control of certain kinds of intelligences," he said.
"I fundamentally don't think that's going to happen.
"I think that we will be very proactive in terms of how we field AI systems, and that in the end we'll be able to get incredible benefits from machine intelligence in all realms of life, from science to education to economics to daily life."
Mr Horvitz heads a team of scientists and engineers at Microsoft Research's main lab, at its parent company's Redmond headquarters.
The division's work on AI has already helped give rise to Cortana - a voice-controlled virtual assistant that runs on the Windows Phone platform and will shortly come to desktop PCs when Windows 10 is released.
Mr Horvitz said that he believed Cortana and its rivals would spur on development of the field.
"The next if not last enduring competitive battlefield among major IT companies will be artificial intelligence," he said.
"The notion that systems that can think, listen, hear, collect data from thousands of user experiences - and we synthesise it back to enhance its services over time - has come to the forefront now.
"We have Cortana and Siri and Google Now setting up a competitive tournament for where's the best intelligent assistant going to come from... and that kind of competition is going to heat up the research and investment, and bring it more into the spotlight."
'Existential threat'
Mr Horvitz's comments were posted online in a video marking his receipt of the AAAI Feigenbaum Prize - an award for "outstanding advances" in AI research.
But while the Microsoft executive describes himself as being "optimistic" about how humans might live alongside artificial intelligences, others are more cautious.
The physicist Prof Hawking has warned that conscious machines would develop at an ever-increasing rate once they began to redesign themselves.
"Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded," he said.
Elon Musk - chief executive of car firm Tesla and rocket-maker SpaceX - has also suggested AI poses the greatest "existential threat" humankind faces.
"With artificial intelligence, we are summoning the demon," he told an audience of students in October.
"In all those stories where there's the guy with the pentagram and the holy water, it's like yeah he's sure he can control the demon. Didn't work out."
The Spectrum computer's inventor, Sir Clive Sinclair, has gone even further, saying he believes it is unavoidable that artificial intelligences will wipe out mankind.
"Once you start to make machines that are rivalling and surpassing humans with intelligence, it's going to be very difficult for us to survive," he told the BBC. "It's just an inevitability."
Several recent and forthcoming films have also focused on how people might handle the potential threat AI poses, including Ex Machina, Transcendence, Avengers: Age of Ultron, Chappie and Terminator Genisys.
Perhaps unsurprisingly, Mr Horvitz voiced a preference for 2014's Her, which charts the relationship between a flirtatious Cortana-like app and its owner.
Privacy fears
He did, however, acknowledge one concern: AI systems risk invading people's privacy, since they will become capable of making ever-deeper inferences about users by "weaving together" the mass of data generated by human activities.
But, he added, AI itself might offer a solution to this problem.
"We've been working with systems that can figure out exactly what information they would best need to provide the best service for a population of users, and at the same time then limit the [privacy] incursion on any particular user," he said.
"You might be told, for example, in using this service you have a one in 10,000 chance of having a query ever looked at... each person only has to worry about as much as they worry about being hit by a bolt of lightning, it's so rare.
"So, I believe that machine learning, reasoning and AI more generally will be central in providing great tools for ensuring the privacy of folks at the same time as allowing services to acquire data anonymously or with only low probabilities of risk to any particular person."