AI program teaches itself to win computer games

Researchers working for Google in London say they have developed an artificial intelligence system that has taught itself how to win 1980s computer games.

The computer program, inspired by the workings of the human brain, learned how to play 49 classic Atari games. In half of them, it performed better than a professional human player.

Google's DeepMind team said this was the first time a single system had learned how to master a wide range of complex tasks.

The study is published in the journal Nature.

Dr Demis Hassabis, DeepMind's vice-president of engineering, showed the BBC's Pallab Ghosh how the AI had taught itself to excel at the classic Breakout.