Machines ‘Learn to Learn’ with New Algorithm

From the intelligent personal assistant on your smartphone to personalized search engines and music recommendation services that adapt to your preferences over time, many rudimentary forms of artificial intelligence now permeate our daily lives.
Yet the machine learning algorithms behind these applications often require large amounts of data — anywhere from tens to hundreds or even thousands of examples — to learn a new concept. Compare that with a five-year-old child, who can effortlessly learn what a “bird” or the letter “A” generally looks like after being shown only a few examples.
Machines still have a long way to go when it comes to learning as quickly and flexibly as humans. But in a major advance over current deep learning models, scientists from MIT, New York University and the University of Toronto recently unveiled a new algorithm that helps machines emulate human-like learning, allowing them to recognize, draw and even create new handwritten visual concepts after being shown only a handful of examples.
https://youtu.be/shT-dFKU2WA
Probabilistic program learning
It’s the human ability to generalize, now made possible for machines. In their paper published in the journal Science, the researchers describe their “Bayesian Program Learning” (BPL) framework, in which broad concepts are broken down into simple probabilistic programs. These simple programs become fundamental building blocks, or “primitives,” used to construct new programs capable of expressing even more complex representations, while needing much less data to do so than standard machine learning models. The paper describes it as a “generative model that can sample new types of concepts (an ‘A,’ ‘B,’ etc.) by combining parts and subparts in new ways,” one that can also parse the relationships between those parts. Essentially, the algorithm lets machines “learn to learn,” building on previous knowledge to accelerate the learning of new concepts.
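To make the compositional idea concrete, here is a minimal Python sketch of how a generative model in this spirit might sample a new character “type” from a small inventory of primitives. The primitive names, relation names and sampling choices are all hypothetical stand-ins for illustration; the actual BPL model learns its primitives and distributions from data.

```python
import random

PRIMITIVES = ["line", "arc", "hook", "loop"]  # hypothetical subpart inventory
RELATIONS = ["independent", "attach_start", "attach_end", "along"]

def sample_part():
    """Sample one part (a stroke) as a short sequence of subpart primitives."""
    n_subparts = random.choice([1, 2, 3])
    return [random.choice(PRIMITIVES) for _ in range(n_subparts)]

def sample_character_type():
    """Sample a character 'type': its parts plus the relations joining them."""
    n_parts = random.choice([1, 2, 3, 4])
    parts = [sample_part() for _ in range(n_parts)]
    # Each part after the first is placed relative to an earlier one.
    relations = [random.choice(RELATIONS) for _ in range(n_parts - 1)]
    return {"parts": parts, "relations": relations}

if __name__ == "__main__":
    new_concept = sample_character_type()
    print(new_concept)
    # e.g. {'parts': [['arc'], ['line', 'hook']], 'relations': ['attach_end']}
```

Because new concepts are built by recombining a shared inventory of parts, each one comes with built-in structure that a learner can reuse, which is what keeps the data requirements small.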
“It has been very difficult to build machines that require as little data as humans when learning a new concept,” said Ruslan Salakhutdinov, one of the study’s authors and an assistant professor in the departments of computer science and statistical sciences at the University of Toronto. “Replicating these abilities is an exciting area of research connecting machine learning, statistics, computer vision, and cognitive science.”
Written by a machine or human?
The researchers tested their model on more than 1,600 handwritten characters culled from 50 alphabets from around the world. Rather than needing dozens of examples for training, a computer running the BPL algorithm was able to reproduce relatively faithful versions of these characters after being shown only one example. Instead of presenting the visual concepts as collections of pixels or features, the data was broken down into constituent parts expressed as simpler programs, allowing the model to imitate the way humans draw these symbols, including stroke order and direction. The algorithm also allowed the computer to ‘generalize’ enough to create new examples in styles similar to the alphabets it had previously learned.
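The parsing step can be pictured as a search over candidate stroke programs, keeping whichever candidate best explains the single observed example. The toy sketch below, with an invented stroke vocabulary and scoring function, is only meant to convey that “analysis by synthesis” flavor, not the paper’s actual inference procedure.

```python
import random

PRIMITIVES = ["line", "arc", "hook", "loop"]  # hypothetical stroke vocabulary

def sample_program(max_strokes=3):
    """Propose a candidate stroke program (a hypothetical parse)."""
    n = random.randint(1, max_strokes)
    return tuple(random.choice(PRIMITIVES) for _ in range(n))

def score(program, observed_strokes):
    """Toy likelihood: how many observed strokes the program explains."""
    matched = sum(1 for p, o in zip(program, observed_strokes) if p == o)
    # Penalize length mismatch so simpler, better-fitting parses win.
    return matched - abs(len(program) - len(observed_strokes))

def parse_one_shot(observed_strokes, n_proposals=5000):
    """Search candidate programs and keep the best-scoring parse."""
    return max((sample_program() for _ in range(n_proposals)),
               key=lambda prog: score(prog, observed_strokes))

if __name__ == "__main__":
    # A single observed example, abstracted here to its stroke sequence.
    example = ("arc", "line")
    print(parse_one_shot(example))  # most likely ('arc', 'line')
```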
The results were striking: when human judges visually compared the human- and computer-generated glyphs, the algorithm’s imitations of the handwritten characters were so faithful that they were “mostly indistinguishable” from the human samples.

“Inferring motor programs from images”: Humans and machines imitate handwritten characters, with an emphasis on finding the correct stroke order.

Can you guess which ones were drawn by humans and which by machines? Humans and machines were given an image of a new handwritten character (top box) and asked to produce new examples. The nine-character grids in each pair that were made by a machine are (by row): 1, 2; 2, 1; and 1, 1.
The new algorithm differs from other machine learning algorithms in that no human programmer needs to intervene during the training process: the algorithm programs itself to reproduce the new visual concepts it sees. Unlike standard models, it uses these simple probabilistic programs to produce new outputs with each execution of the code, allowing the machine to quickly learn, recognize and recreate concepts, much like a human child.
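One way to picture this re-execution is a learned stroke program rendered with a little random “motor noise” each time it runs, so every execution yields a fresh exemplar, much as a person’s handwriting varies from one attempt to the next. The sketch below uses an invented noise model purely for illustration, not the paper’s actual motor model.

```python
import random

def execute(program, jitter=0.1):
    """Render one exemplar: each stroke gets slightly perturbed geometry."""
    exemplar = []
    for stroke in program:
        scale = 1.0 + random.uniform(-jitter, jitter)      # per-stroke size noise
        angle = random.uniform(-10 * jitter, 10 * jitter)  # rotation noise, degrees
        exemplar.append({"stroke": stroke,
                         "scale": round(scale, 3),
                         "angle_deg": round(angle, 2)})
    return exemplar

learned_program = ("arc", "line", "hook")  # hypothetical learned concept
for _ in range(3):
    print(execute(learned_program))        # a new variation on each run
```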
“Before they get to kindergarten, children learn to recognize new concepts from just a single example, and can even imagine new examples they haven’t seen,” says Joshua Tenenbaum, study author and cognitive sciences professor at MIT. “We are still far from building machines as smart as a human child, but this is the first time we have had a machine able to learn and use a large class of real-world concepts — even simple visual concepts such as handwritten characters — in ways that are hard to tell apart from humans.”
While the algorithm is currently restricted to handwritten characters, the researchers believe it could someday be applied to learning spoken words, gestures or even abstract knowledge, with the aim of developing a machine version of the “one-shot learning” that humans seem to be so adept at. That would mean presenting data to the model in a different way, departing from the prevailing practice of training neural nets on enormous amounts of data. Says study lead author Brenden Lake: “The key point is that we need to learn the right form of representation, not just learning from bigger data sets, in order to build more powerful and more human-like learning patterns.”
Read more over at the University of Toronto and Science.
Images: Danqing Wang, Brenden Lake, Ruslan Salakhutdinov, Joshua Tenenbaum