Machine Learning

AI Researchers Create Self-Replicating Neural Network

Apr 19th, 2018 11:00am

Reproduction, self-replication and the process of natural selection are most often things we associate with living organisms and how they naturally evolve. In fact, being able to reproduce is one of the defining aspects of a living organism. In evolutionary terms, natural selection involves passing desirable traits, often ones that aid in the survival of an organism, on to following generations. Some of these functions may over time become part of an array of inborn, autonomic responses that kick in from the moment of birth, such as breathing air, opening one’s eyes or eliminating waste.

But what if an artificially intelligent system could also do the same thing — replicate itself and automatically learn successful traits gleaned from previous generations — much like a living organism would? AI developed in this way would be capable of improving itself gradually and continually, without needing to be trained from scratch each time. That’s the thought-provoking premise behind an interesting set of experiments outlined in a recent study done by Oscar Chang and Hod Lipson, two researchers from Columbia University.

Quined AI

Their paper, titled “Neural Network Quines,” explores how they went about creating and training a self-replicating neural network. Inspired by the concept of a non-biological, self-replicating machine advanced by mathematician John von Neumann back in the 1940s, Chang and Lipson’s approach is based on the idea of a “quine,” a computer science term that refers to any non-empty computer program that takes no input and produces a copy of its own source code as its only output. The term was coined by cognitive science professor Douglas Hofstadter in honor of philosopher Willard Van Orman Quine, whose work focused on the mechanisms of indirect self-reference.
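For readers who haven’t met one before, here is a classic quine written in Python. It’s a standard textbook example, not something from the paper: running the two executable lines prints an exact copy of those same two lines.

```python
# A classic Python quine: the two lines below print an exact copy of themselves
# when run (these comment lines are not part of the quine itself).
s = 's = %r\nprint(s %% s)'
print(s % s)
```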

However, in this case, instead of producing source code, the neural network produces a copy of its weights, the numeric values that represent the strength of the connections between nodes in the network. But directly replicating the original parameters and weights of the neural network is not easy to do, so the team’s workaround is to set up a “vanilla quine”: a feed-forward neural network that produces its own weights as outputs, which can then be used to solve a task.
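As a rough illustration only (the architecture, the random coordinate encoding and the training loop below are simplified stand-ins, not the exact setup from the paper), a network of this kind can be asked to map a fixed identifier for each of its weights to a prediction of that weight’s current value, and then be trained to shrink the gap between its predictions and its actual weights:

```python
import torch
import torch.nn as nn

# Hypothetical sketch: a tiny network asked to predict its own weight values.
# Each weight is identified by a fixed random "coordinate" vector; the network
# maps that coordinate to a predicted value for the weight stored there.

torch.manual_seed(0)

COORD_DIM = 32          # size of the coordinate encoding (arbitrary choice here)
net = nn.Sequential(    # the network whose weights it must reproduce
    nn.Linear(COORD_DIM, 64),
    nn.ReLU(),
    nn.Linear(64, 1),   # one scalar out: the predicted weight value
)

# Assign every weight a fixed random coordinate (a stand-in for the paper's
# more careful index encoding).
n_weights = sum(p.numel() for p in net.parameters())
coords = torch.randn(n_weights, COORD_DIM)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(200):
    # Re-read the current weights each step: the target moves as the net trains.
    targets = torch.cat([p.detach().flatten() for p in net.parameters()])
    preds = net(coords).squeeze(1)
    loss = ((preds - targets) ** 2).sum()   # self-replication loss
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final self-replication loss: {loss.item():.4f}")
```

The circular twist is that the target moves: every update that improves the predictions also changes the very weights being predicted, which is part of what makes training a quine tricky.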

In this case, the researchers put the “vanilla quine” network to work at an auxiliary task: classifying images from the MNIST database, a standard test in which machines are evaluated on their ability to correctly identify a series of handwritten, single-digit numbers. However, with only 21,000 parameters, these quine networks are much smaller than more complex image recognition models, which can have several million parameters.
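Conceptually, adding the auxiliary task just means the network is trained on two objectives at once. A minimal sketch of such a combined loss might look like the following (the function name and the simple unweighted sum are illustrative assumptions, not details from the paper):

```python
import torch
import torch.nn.functional as F

def auxiliary_quine_loss(logits, labels, predicted_weights, actual_weights):
    # Standard MNIST classification objective.
    classification_loss = F.cross_entropy(logits, labels)
    # Self-replication objective: how far the predicted weights are
    # from the network's actual current weights.
    replication_loss = ((predicted_weights - actual_weights) ** 2).sum()
    # Both terms pull on the same parameters, which is where the
    # trade-off described below comes from.
    return classification_loss + replication_loss
```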

Survival Beats Reproduction

The team found that the quine network demonstrated an accuracy rate of about 90 percent in identifying images correctly, which is good but not on par with more refined models. Interestingly, they noted that a significant amount of the network’s capacity must be marshaled toward self-replication, and that as its specialized image recognition ability increases, it becomes more difficult for the network to self-replicate. This trade-off is one of the most fascinating findings from the experiment: the team surmises that it is an adaptive compromise, which suggests that when the two functions come into conflict, they work against each other in some way.

“There are parallels to be drawn between self-replication in the case of a neural network quine and biological reproduction in nature, as well as specialization at the auxiliary task and survival in nature,” explained the team’s paper. “The mechanisms for survival are usually aligned with the mechanisms for reproduction, however when they come into conflict with each other, the survival mechanism usually is prioritized at the expense of the reproduction mechanism.”

While the idea of an AI capable of replicating and potentially ‘evolving’ itself over a number of generations at the expense of performance doesn’t seem all that useful at the moment, there may be benefits down the road. For example, self-replicating software could be valuable for securing computer systems against adversarial attacks, or for repairing damaged systems. Ultimately, an AI that can reproduce itself would mean there’s one less degree of difference between it and living organisms, opening the door to other questions that aren’t so easily answered.

Feature image: Pixabay
