A new paper from Google’s DeepMind team describes a “novel artificial agent” that combines two existing forms of brain-inspired machine intelligence: a deep neural network and a reinforcement-learning algorithm.
The paper, Human-level control through deep reinforcement learning, shows DeepMind’s agent learning to play dozens of computer games from only minimal information. In other words, the DeepMind algorithms let the game-playing AI analyze its previous performance, work out which actions led to better scores, and change its future behavior.
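That loop – act, observe the score, and adjust future behavior – can be sketched with plain tabular Q-learning, the simpler ancestor of the deep Q-network in the paper. Everything below is an illustrative toy, not DeepMind’s code: the tiny two-state “game” and its reward numbers are invented purely for the example.

```python
import random

random.seed(0)

# Invented toy "game": two states; action 1 in state 0 scores a point.
def step(state, action):
    reward = 1.0 if (state == 0 and action == 1) else 0.0
    next_state = (state + 1) % 2
    return next_state, reward

# Q-table: the agent's learned estimate of future score per (state, action).
Q = {(s, a): 0.0 for s in range(2) for a in range(2)}
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

state = 0
for _ in range(1000):
    # Mostly pick the action with the best learned value; sometimes explore.
    if random.random() < epsilon:
        action = random.randrange(2)
    else:
        action = max((0, 1), key=lambda a: Q[(state, a)])
    next_state, reward = step(state, action)
    # Core update: nudge the estimate toward the reward plus the discounted
    # best future value - i.e. learn which actions led to better scores.
    best_next = max(Q[(next_state, a)] for a in range(2))
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = next_state

# After training, action 1 should look better than action 0 in state 0.
```

The DQN in the paper replaces this lookup table with a deep neural network that reads raw screen pixels, but the underlying update rule is the same idea.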
DeepMind co-founder Demis Hassabis recently said: “The artificial general intelligence we work on here automatically converts unstructured information into useful, actionable knowledge.” This echoes an article I wrote at the time of the Google acquisition: “reinforcement learning algorithms can help people make better decisions, as it will provide users with the best data available.”
Effectively, DeepMind has found a way to integrate memory into learning algorithms!
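The “memory” in question is the paper’s experience-replay mechanism: the agent stores past transitions and trains on random samples of them rather than only on the latest step. Here is a minimal sketch of such a buffer – the class name, capacity, and sample transitions are my own illustrative choices, not DeepMind’s code.

```python
import random
from collections import deque

class ReplayBuffer:
    """Stores past (state, action, reward, next_state) transitions so the
    learner can revisit old experience instead of only the newest step."""

    def __init__(self, capacity):
        # A bounded deque: once full, the oldest memories are discarded.
        self.buffer = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size):
        # Random sampling breaks the correlation between consecutive frames,
        # which stabilizes training.
        return random.sample(self.buffer, batch_size)

# Usage: remember a few transitions, then draw a training batch.
memory = ReplayBuffer(capacity=100)
for t in range(10):
    memory.add(t, t % 2, float(t), t + 1)
batch = memory.sample(4)
```

In the paper, batches drawn this way feed the Q-network’s update step, so each piece of experience can be learned from many times.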
Take a look at this Nature video, Inside DeepMind – this is certainly a big step forward in AI and in Google’s quest to provide the most advanced personal assistant through Google Now:
It’s interesting to hear Demis suggest that within 10 years AI may be able to help us with scientific discoveries. In another interview on neural networks, his Google colleague Geoff Hinton is more cautious: “We’re far from human intelligence. Hinton remains intrigued and inspired by the brain, but he knows he’s not recreating it. It’s not even close.”