DeepMind AI to play video game to learn about the world

Image (source: Blizzard): Scene from StarCraft II. How will an AI cope with an alien landscape?

Google's DeepMind is teaming up with the makers of the StarCraft video game to train its artificial intelligence systems.

The AI systems "playing" the game will need to learn strategies similar to those that humans need in the real world, DeepMind said.

Its ultimate aim is to develop artificial intelligence that could solve any problem.

It has previously taught algorithms to play a range of Atari computer games.

Image (source: Blizzard): StarCraft II was an early pioneer in e-sports, with many elite players

StarCraft II, made by developer Blizzard, is a real-time strategy game in which players control one of three warring factions - humans, the insect-like Zerg, or aliens known as the Protoss. Players' actions are governed by the in-game economy, and minerals and gas must be gathered in order to produce new buildings and units.

Each player can only see parts of the map within range of their own units and must send units to scout unseen areas in order to gain information about their opponents.

In a blog post, Oriol Vinyals, a research scientist at DeepMind, said: "DeepMind is on a scientific mission to push the boundaries of AI, developing programs that can learn to solve any complex problem without needing to be told how."

Image (source: Blizzard): The game is more complex than Go, for which DeepMind previously developed a winning algorithm

"StarCraft is an interesting testing environment for current AI research because it provides a useful bridge to the messiness of the real world.

"The skills required for an agent to progress through the environment and play StarCraft well could ultimately transfer to real-world tasks."

Prof Yoshua Bengio, head of the Institute for Learning Algorithms at the University of Montreal, told the BBC: "It is a much more complex game than games previously studied by AI researchers, like the Atari games or even the game of Go."

DeepMind famously developed an algorithm that could play the complex game of Go and beat one of the world's best players.

"Progress on this game could translate in many areas," said Prof Bengio. "One in which I am particularly interested in is natural language dialogue."

The game will be opened up to other AI researchers next year. DeepMind said it hoped that the new environment would be "widely used to advance the state-of-the-art" in the field of artificial intelligence.