New Algorithm Will Help Supercomputers Simulate Whole-Brain Neural Connections

Artificial intelligence has made enormous leaps in recent years. We are seeing this technology incorporated in autonomous cars, collaborative robots and versatile deep learning systems that can master various board games on their own or reason their way around a subway map or a family tree. Yet there is still a long way to go before AI transitions from being relatively specialized to mastering a variety of tasks as easily as humans do.
One step toward developing this kind of artificial general intelligence is to simulate the functioning of the human brain on a computer, giving researchers deeper insight into the inner workings of intelligence. The problem is that the human brain is incredibly complex: even with the massive supercomputers available today, it is still impossible to simulate all the interactions between its 100 billion neurons and their trillions of synapses.
But that goal is now one step closer, thanks to an international group of researchers who have developed an algorithm that not only accelerates brain simulations on existing supercomputers but also takes a big leap toward realizing “whole-brain” simulations on future exascale supercomputers (machines capable of executing a billion billion calculations per second).
Computing for Whole-Brain Simulations
The research, published in Frontiers in Neuroinformatics, outlines the researchers’ new method for creating a neuronal network on a supercomputer. To give a sense of how colossal this task is, existing supercomputers such as the petascale K computer at the Advanced Institute for Computational Science in Kobe, Japan, can replicate the activity of only 10 percent of the brain.
That’s because such simulations are limited by the way the simulation model is set up, which in turn determines how the supercomputer’s nodes communicate with each other. A supercomputer may have more than a hundred thousand of these nodes, each with its own processors to perform calculations. In larger simulations, the model’s virtual neurons are distributed across the compute nodes to balance the processing workload. One of the challenges of these larger simulations, however, is the high connectivity of neuronal networks, which takes a massive amount of computational power to replicate.
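To make the idea of distributing neurons across nodes concrete, here is a minimal sketch in plain Python. The round-robin assignment rule, the node count and the neuron count are illustrative assumptions, not the actual scheme or scale used in the study.

```python
# Minimal sketch (plain Python): distributing virtual neurons across compute
# nodes with a round-robin rule to balance the workload. The node count,
# neuron count and assignment rule are illustrative assumptions, not the
# study's actual scheme or scale.

NUM_NODES = 4        # a real supercomputer may have over 100,000 nodes
NUM_NEURONS = 20     # a whole-brain model would have on the order of 10**11

def node_of(neuron_id: int, num_nodes: int = NUM_NODES) -> int:
    """Assign neuron i to node i mod num_nodes."""
    return neuron_id % num_nodes

# Each node instantiates in its own memory only the neurons assigned to it.
local_neurons = {
    node: [n for n in range(NUM_NEURONS) if node_of(n) == node]
    for node in range(NUM_NODES)
}

for node, neurons in local_neurons.items():
    print(f"node {node} hosts neurons {neurons}")
```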
“Before a neuronal network simulation can take place, neurons and their connections need to be created virtually, which means that they need to be instantiated in the memory of the nodes,” explained Susanne Kunkel of KTH Royal Institute of Technology in Stockholm, one of the paper’s authors. “During the simulation a neuron does not know on which of the nodes it has target neurons, therefore, its short electric pulses need to be sent to all nodes. Each node then checks which of all these electric pulses are relevant for the virtual neurons that exist on this node.”
To put it more simply, it’s like sending a whole haystack to each node and leaving every node to pick out the needles relevant to it. Needless to say, this process consumes a lot of memory, especially as the virtual neuronal network grows. Scaling up to simulate the whole human brain with current techniques would require 100 times more processing memory than is available in today’s supercomputers. The new algorithm changes the game: it lets the nodes first exchange information about which node needs to send spikes to which other nodes, so that afterward each node sends and receives only the information it actually needs, without having to pick through the whole haystack.
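The following conceptual sketch, in plain Python rather than the compiled MPI code real simulations use, contrasts the two communication schemes: broadcasting every spike to every node versus exchanging connectivity information once so that each spike is delivered only where it is needed. The tiny network, the node layout and the function names are invented for illustration and are not taken from the paper.

```python
# Conceptual sketch (plain Python, no MPI) of the two communication schemes
# described above. The tiny network, node layout and counts are invented for
# illustration; real simulations run as compiled MPI code on supercomputers.

NUM_NODES = 3

# Hypothetical connectivity: source neuron -> list of (target node, target neuron).
connections = {
    0: [(1, 10), (2, 20)],
    1: [(0, 5)],
    2: [(2, 21), (2, 22)],
}

spikes = [0, 2]  # neurons that fired in this time step

def broadcast_scheme(spiking):
    """Old scheme: every spike goes to every node (the whole haystack),
    and each node then filters out the pulses relevant to its own neurons."""
    messages = 0
    for node in range(NUM_NODES):
        for src in spiking:                       # every node receives every spike
            messages += 1
            relevant = [tgt for (n, tgt) in connections[src] if n == node]
            # the node keeps `relevant` and simply discards the rest
    return messages

# New scheme (in spirit): nodes first exchange which node needs spikes from
# which source neuron, then each spike is delivered only where it is needed.
needed_sources = {node: set() for node in range(NUM_NODES)}
for src, targets in connections.items():
    for node, _tgt in targets:
        needed_sources[node].add(src)

def directed_scheme(spiking):
    """Deliver each spike only to the nodes that actually host its targets."""
    messages = 0
    for node in range(NUM_NODES):
        for src in spiking:
            if src in needed_sources[node]:
                messages += 1
    return messages

print("messages sent, broadcast scheme:", broadcast_scheme(spikes))  # 6
print("messages sent, directed scheme: ", directed_scheme(spikes))   # 3
```

Even in this toy example the directed scheme halves the number of messages; at the scale of trillions of synapses spread over a hundred thousand nodes, the savings in communication and memory become decisive.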
“With the new technology we can exploit the increased parallelism of modern microprocessors a lot better than previously, which will become even more important in exascale computers,” said study author Jakob Jordan of the Jülich Research Center.
With the improved algorithm, the team found that a virtual network of 0.52 billion neurons connected by 5.8 trillion synapses, running on the supercomputer JUQUEEN in Jülich, could simulate one second of biological time in 5.2 minutes of computation, compared with the 28.5 minutes it had required using conventional methods.
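As a quick sanity check of these figures, the reduction from 28.5 minutes to 5.2 minutes per second of biological time works out to roughly a 5.5-fold speedup:

```python
# Back-of-the-envelope check of the reported runtimes (figures from the text).
old_runtime_min = 28.5   # conventional method, per second of biological time
new_runtime_min = 5.2    # improved algorithm, per second of biological time

print(f"speedup: {old_runtime_min / new_runtime_min:.1f}x")  # about 5.5x
```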
It’s predicted that future machines capable of exascale computing will surpass the performance of current supercomputers by 10 to 100 times. Combined with the team’s algorithm, which will be made available as an open-source tool, such machines would give researchers a far greater ability to explore how intelligence functions as a whole.
Future findings based on this tool will not only help push AI development further but also benefit a range of scientific disciplines, noted Markus Diesmann, study author and director at the Jülich Institute of Neuroscience and Medicine: “The combination of exascale hardware and appropriate software brings investigations of fundamental aspects of brain function, like plasticity and learning unfolding over minutes of biological time, within our reach.”