Machine Learning

Google’s DeepMind AI Now Capable of ‘Deep Neural Reasoning’

11 Nov 2016 10:32am

The development of deep learning artificial neural networks has grown by leaps and bounds in the last few years. These biologically inspired computational models, loosely based on how the human brain functions, have so far enabled machines to accomplish tasks once thought to be the sole purview of humans. Previously, we’ve witnessed artificially intelligent neural networks that can create music in a certain artistic style, hallucinate psychedelic landscapes from a stock image, and even beat a human champion at one of the most complex games ever invented. Deep learning elements already underlie many of the technologies we use daily, from search engines to speech recognition and translation.

But DeepMind, Google’s artificial intelligence research lab, has now gone a step further. Having created the AlphaGo software that mastered the complex board game Go, DeepMind has built what it calls a “memory-augmented neural network”: a system with a kind of external “working memory” that lets it learn how to complete complex tasks on its own, using human-inspired memory and reasoning, instead of being programmed to do so.

This new hybrid architecture, called a Differentiable Neural Computer (DNC), combines a neural network with memory storage that is external to the network. That memory works in a similar fashion to a computer’s random access memory (RAM), which boosts the “reasoning” power of the model overall.

“Neural networks excel at pattern recognition and quick, reactive decision-making, but we are only just beginning to build neural networks that can think slowly — that is, deliberate or reason using knowledge,” explained the DeepMind researchers in a recent blog post. “These [DNC] models… can learn from examples like neural networks, but they can also store complex data like computers.”

The researchers then set out to test DNCs on problems that would require the system to construct temporary data structures and use these organized bits of knowledge to resolve the problems, a kind of “rational reasoning” for AI.

This is in contrast to traditional neural nets, which may need the same training data “fed” into them multiple times, or need to be specifically programmed, to accomplish the same job. According to the researchers’ findings, published in Nature under the title “Hybrid computing using a neural network with dynamic external memory,” this memory-assisted “deep neural reasoning” enabled the DNC to successfully complete a number of tasks that a standalone neural network would perform poorly at.

The DNC was trained using randomly produced “graphs” (left). After training, it was tasked with navigating the London subway system (right), finding either a path of any length or the shortest path between two stations.

Navigating Subways and Family Relations

For instance, one task involved getting the system to learn how to navigate the underground subways of London — a relatively difficult feat that would require the AI to establish complex connections and relationships between data points. But the researchers found that when the DNC was tasked with finding its way from one point to another on the London Underground, it performed with an average accuracy of 98.8 percent, compared to 37 percent with an unassisted neural network trained on almost two million examples.
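To make the task concrete: finding the shortest route between two stations is classically solved with breadth-first search over the station graph. The point of the experiment is that the DNC learned this kind of traversal from examples rather than running a hand-coded algorithm, but a small sketch of the classical approach shows what the network had to figure out on its own. The station names and connections below are an illustrative toy, not the graph DeepMind actually used.

```python
from collections import deque

# Toy stand-in for a subway map: station -> neighboring stations.
# Names and links are made up for illustration.
SUBWAY = {
    "Oxford Circus": ["Bond Street", "Piccadilly Circus", "Warren Street"],
    "Bond Street": ["Oxford Circus", "Baker Street"],
    "Piccadilly Circus": ["Oxford Circus", "Leicester Square"],
    "Warren Street": ["Oxford Circus", "Euston"],
    "Baker Street": ["Bond Street"],
    "Leicester Square": ["Piccadilly Circus"],
    "Euston": ["Warren Street"],
}

def shortest_path(graph, start, goal):
    """Breadth-first search: returns the shortest station sequence."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None  # no route exists

print(shortest_path(SUBWAY, "Baker Street", "Euston"))
# → ['Baker Street', 'Bond Street', 'Oxford Circus', 'Warren Street', 'Euston']
```

A hand-written BFS gets 100 percent accuracy on a known graph, of course; the DNC’s 98.8 percent is notable because it induced comparable behavior purely from training examples.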

In another exercise, the DNC had to deduce the relationships within a family tree when given only the parent, child, and sibling relationships within the family. As you can see in DeepMind’s video below, the DNC arrives at its solutions step by step, storing learned information in memory that it can draw upon when solving another aspect of the puzzle.
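The family-tree task amounts to chaining stored facts: a relation like “maternal grandfather” is never given directly and must be composed from two parent facts. A minimal sketch, with made-up names and an explicit lookup where the DNC instead learns the composition from examples:

```python
# Hypothetical direct facts of the kind the DNC is given:
# child -> (mother, father). Names are invented for illustration.
PARENTS = {
    "Alice": ("Carol", "David"),
    "Carol": ("Eve", "Frank"),
}

def maternal_grandfather(person):
    """Chain two stored facts: the father of the person's mother."""
    mother = PARENTS[person][0]
    return PARENTS[mother][1]

print(maternal_grandfather("Alice"))  # → Frank
```

The DNC effectively performs this two-hop lookup by writing the given relations into memory and reading them back in sequence, rather than by executing hand-written rules like these.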

Learning How to Use Memory

So what differentiates DNCs from their conventional brethren? At the center of the DNC is a controller that acts like the processor of a computer. The controller’s job is to receive inputs, read from and write to memory, and generate outputs. The neural network has latitude in “choosing” whether to commit something to memory and, if so, where to write it. When information is written, its location is connected to other bits of data by time-stamped “links of association,” which record the chronological order in which data points were stored. The controller can therefore go back and recall stored information either by location or by time. As a result, DNCs can choose how memory is allocated, where information is stored, and how it can be found — and as time passes, the DNC gets better and better at recalling and connecting separate bits of data.
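The read-by-location versus read-by-time distinction can be sketched with a toy memory class. This is a deliberate simplification — discrete slots and hard reads and writes — whereas the actual DNC uses differentiable soft attention over memory locations so the whole system can be trained by gradient descent. All names here are invented for illustration.

```python
# Minimal sketch of a DNC-style external memory (assumed simplification:
# hard, discrete reads/writes; the real DNC reads and writes "softly"
# via differentiable attention weights over all locations).
class ExternalMemory:
    def __init__(self):
        self.slots = {}        # location -> stored data
        self.write_order = []  # "temporal links": order locations were written

    def write(self, location, data):
        self.slots[location] = data
        self.write_order.append(location)

    def read_by_location(self, location):
        return self.slots[location]

    def read_by_time(self, step):
        # Recall whatever was written at a given time step,
        # regardless of where it landed in memory.
        return self.slots[self.write_order[step]]

mem = ExternalMemory()
mem.write(0, "Victoria line: Euston -> Warren Street")
mem.write(3, "Northern line: Euston -> King's Cross")
print(mem.read_by_location(3))  # fetch by address
print(mem.read_by_time(0))      # fetch by when it was stored
```

The temporal links are what let the controller replay information in the order it was acquired — useful when the order of facts itself carries meaning, as in a route through a subway map.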


“[D]ifferentiable neural computers learn how to use memory and how to produce answers completely from scratch,” explained the team. “They learn to do so using the magic of optimization: when a DNC produces an answer, we compare the answer to a desired correct answer. Over time, the controller learns to produce answers that are closer and closer to the correct answer. In the process, it figures out how to use its memory.”
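That “magic of optimization” can be shown in miniature. In the toy setup below — every detail of which (slot contents, learning rate, loop length) is an assumption for illustration, not DeepMind’s actual training procedure — a controller holds trainable preferences over two memory slots; a softmax turns them into soft read weights, and gradient descent on the squared error between the read-out value and the desired answer gradually teaches it which slot to attend to.

```python
import math

memory = [1.0, 5.0]   # fixed contents of two memory slots (toy values)
target = 5.0          # the desired correct answer
logits = [0.0, 0.0]   # controller's trainable read preferences
lr = 0.5              # learning rate (arbitrary choice)

for _ in range(200):
    exps = [math.exp(v) for v in logits]
    z = sum(exps)
    w = [e / z for e in exps]                      # softmax read weights
    read = sum(wi * mi for wi, mi in zip(w, memory))
    err = read - target                            # compare to correct answer
    # Gradient of 0.5 * err**2 w.r.t. each logit, via the softmax:
    # d(read)/d(logit_i) = w_i * (memory_i - read)
    for i in range(2):
        logits[i] -= lr * err * w[i] * (memory[i] - read)

# Recompute the read-out after training: the weight has shifted
# almost entirely onto the slot holding the correct answer.
exps = [math.exp(v) for v in logits]
w = [e / sum(exps) for e in exps]
read = sum(wi * mi for wi, mi in zip(w, memory))
```

Because every step (the softmax, the weighted read, the error) is differentiable, the gradient flows from the wrong answer back into the memory-access decision itself — which is exactly what lets a DNC “figure out how to use its memory” rather than being told.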

Now that the DNC has been tested in these preliminary exercises, the next step would be to scale up the DNC’s memory capacity in order to handle real-world data.

“A flexible, extensible DNC-style working memory might allow deep learning to expand into big-data applications that have a rational reasoning component, such as generating video commentaries or semantic text analysis,” commented Herbert Jaeger, a professor of computational science at Germany’s Jacobs University Bremen. But Jaeger notes there is a bigger picture to behold: “The DNC is just one among dozens of novel, highly potent, and cleverly thought-out neural learning systems that are popping up all over the place.”

Even so, this new design does take us closer to more intelligent machines that can learn how to learn and solve any number of general problems, rather than relying on brittle, so-called “weak AI” that has been pre-programmed to complete a single, specific task. Machines with a stronger artificial general intelligence (AGI) would be capable of tackling cognitive tasks as well as any human, which would mean our digital assistants, cars and collaborative robots would be much more responsive, smarter, and potentially more human-like to interact with.

Images: Google.
