Google achieves AI 'breakthrough' by beating Go champion

Media caption: Google's DeepMind division beat the European Go champion in October

A Google artificial intelligence program has beaten the European champion of the board game Go.

The Chinese game is viewed as a much tougher challenge than chess for computers because there are many more ways a Go match can play out.

The tech company's DeepMind division said its software had beaten its human rival five games to nil.

One independent expert called it a breakthrough for AI with potentially far-reaching consequences.

The achievement was announced to coincide with the publication of a paper, in the scientific journal Nature, detailing the techniques used.

Earlier on Wednesday, Facebook's chief executive had said its own AI project had been "getting close" to beating humans at Go.

But the research he referred to indicated its software was ranked only as an "advanced amateur" and not a "professional level" player.

What is Go?

Media caption: A brief guide to Go

Go is thought to date back to ancient China, several thousand years ago.

Using black and white stones on a grid, players gain the upper hand by surrounding their opponent's pieces with their own.

The rules are simpler than those of chess, but a player typically has a choice of 200 moves compared with about 20 in chess.

There are more possible positions in Go than atoms in the universe, according to DeepMind's team.
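A rough back-of-the-envelope calculation, using commonly cited approximations rather than figures from the Nature paper, illustrates the gap. Assuming about 20 choices per turn over roughly 80 turns in chess, and about 200 choices per turn over roughly 150 turns in Go, the number of ways a game can play out compares like this (Python):

# Back-of-the-envelope game-tree estimates (branching factor raised to typical game length).
# The figures below are commonly cited approximations, not numbers from the Nature paper.
CHESS_BRANCHING, CHESS_LENGTH = 20, 80     # ~20 choices per turn, ~80 turns per game
GO_BRANCHING, GO_LENGTH = 200, 150         # ~200 choices per turn, ~150 turns per game

chess_tree = CHESS_BRANCHING ** CHESS_LENGTH
go_tree = GO_BRANCHING ** GO_LENGTH

print(f"Chess game tree: about 10^{len(str(chess_tree)) - 1}")   # about 10^104
print(f"Go game tree:    about 10^{len(str(go_tree)) - 1}")      # about 10^345
print("Atoms in the observable universe: commonly estimated at about 10^80")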

It can be very difficult to determine who is winning, and many of the top human players rely on instinct.

DeepMind's chief executive, Demis Hassabis, said its AlphaGo software followed a three-stage process, which began with making it analyse 30 million moves from games played by humans.

"It starts off by looking at professional games," he said.

Media caption: Demis Hassabis explains how DeepMind achieved the computing milestone

"It learns what patterns generally occur - what sort are good and what sort are bad. If you like, that's the part of the program that learns the intuitive part of Go.

"It now plays different versions of itself millions and millions of times, and each time it gets incrementally better. It learns from its mistakes.

"The final step is known as the Monte Carlo Tree Search, which is really the planning stage.

"Now it has all the intuitive knowledge about which positions are good in Go, it can make long-range plans."

Tested against rival Go-playing AIs, Google's system won 499 out of 500 matches.

And last October, DeepMind invited Fan Hui, Europe's top player, to its London office for a series of games, each of which the AI won.

"Many of the best programmers in the world were asked last year how long it would take for a program to beat a top professional, and most of them were predicting 10-plus years," Mr Hassabis said.

"The reasons it was quicker than people expected was the pace of the innovation going on with the underlying algorithms and also how much more potential you can get by combining different algorithms together."

Image caption: DeepMind played with a full-sized board of 19 rows and 19 columns (image: Thinkstock)

'Major breakthrough'

Prof Zoubin Ghahramani, of the University of Cambridge, said: "This is certainly a major breakthrough for AI, with wider implications.

"The technical idea that underlies it is the idea of reinforcement learning - getting computers to learn to improve their behaviour to achieve goals.

"That could be used for decision-making problems - to help doctors make treatment plans, for example, in businesses or anywhere where you'd like to have computers assist humans in decision making.

"It doesn't mean that Google is ahead of all other companies in AI - there are many artificial intelligences.

"But in terms of devoting resources to Go, Google has clearly done more.

"Facebook has achieved some pretty spectacular results in other areas of artificial intelligence, but I think Google has beaten them to this particularly important challenge."

Computer games

DeepMind now intends to pit AlphaGo against Lee Sedol - the world's top Go player - in Seoul in March.

Image caption: One of DeepMind's AI programs taught itself how to play the video game Breakout (image: Google)

In addition, it continues to develop AI systems that can play computer games without any help, following last year's success at getting its bots to teach themselves how to play several dozen classics.

"For us, Go is the pinnacle of board game challenges," said Mr Hassabis.

"Now, we are moving towards 3D games or simulations that are much more like the real world rather than the Atari games we tackled last year. "
