Machine Learning

Human Intelligence Could Arise from a Basic Brain Algorithm

4 Dec 2016 1:47am

In the process of understanding human intelligence by building artificial versions of it, we have so far created machines that can learn to learn, exhibit artificial ‘imagination,’ and are even capable of reasoning.

Yet the exact underpinnings of what makes human intelligence tick continue to elude experts, even as the ongoing development of artificial intelligence, often modeled on its human progenitor, gives us tantalizing glimpses into how human intelligence might actually function and how it is organized.

But some scientists believe these seemingly complex gymnastics of intelligence may actually have a very simple pattern underlying them. That’s exactly what a team of scientists from Augusta University, Georgia is now suggesting: that the origins of human intelligence are based on a fundamental algorithm, which they call the Theory of Connectivity.

“A relatively simple mathematical logic underlies our complex brain computations,” explained Joe Z. Tsien, a neuroscientist and professor of neurology at the university’s Medical College. Tsien’s lab collaborates with neuroscientists, computer scientists and mathematicians to understand how the human brain produces memories and acquires knowledge, and is currently working on a long-term brain activity mapping initiative called the Brain Decoding Project.

The Theory of Connectivity, as outlined in the team’s paper, “Brain Computation Is Organized via Power-of-Two-Based Permutation Logic,” published in Frontiers in Systems Neuroscience, describes how the human brain’s estimated 86 billion neurons might arrange themselves in various “neural cliques,” with each group consisting of similar neurons.

According to the scientists, neural cliques are prewired, informing how neurons and their synapses might connect and function. It’s this essential framework that helps us gain knowledge, but also permits us to generalize and see the ‘bigger picture’ of concepts and ideas — something that computers still find difficult to do.

“The brain is not a blank sheet. This complex wiring system that ends up being our brain, starts with these cliques,” said Tsien last year when he first publicly described the theory. “We think the brain has these combinatorial connections among brain cells, and through these connections, comes the knowledge and flexibility to deal with whatever comes in and goes out.”

The Theory of Connectivity postulates how the brain might be organized in these neural cliques, grouped together by function, to process cognitive and learning functions around survival essentials like food, fear and social experiences, as well as more abstract ideas and concepts. Depending upon the complexity of a concept, these cliques will cluster in what’s called functional connectivity motifs (FCM) — the more complicated an idea, the more neural cliques are enlisted into an FCM.

The power-of-two-based mathematical rule behind the Theory of Connectivity predicts how many cliques are needed for an FCM. The algorithm itself can be written as N = 2^i − 1, where N represents the number of distinct neural cliques and i indicates the number of distinct information inputs the neuronal groups receive.
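One way to read this formula, consistent with the “specific-to-general” pattern described below, is that for i distinct inputs there is one clique for every non-empty combination of those inputs — from cliques responding to a single input up to a clique responding to all of them. A minimal sketch (the input labels “food,” “fear,” and “social” are drawn from the survival essentials mentioned in the article; the subset enumeration is an illustrative interpretation, not code from the study):

```python
from itertools import combinations

def clique_count(i):
    """Number of distinct neural cliques the theory predicts for i inputs: N = 2^i - 1."""
    return 2 ** i - 1

def clique_patterns(inputs):
    """Enumerate every non-empty combination of inputs -- one clique per combination,
    from 'specific' (a single input) to 'general' (all inputs at once)."""
    return [combo
            for r in range(1, len(inputs) + 1)
            for combo in combinations(inputs, r)]

inputs = ["food", "fear", "social"]   # i = 3 example input types
patterns = clique_patterns(inputs)

print(clique_count(len(inputs)))      # 7
print(len(patterns))                  # 7 -- the enumeration matches N = 2^3 - 1
print(patterns[0], patterns[-1])      # most specific vs. most general clique
```

With three inputs the formula yields seven cliques; adding a fourth input would roughly double that to fifteen, which is why more complex concepts are said to enlist more cliques into an FCM.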

In this latest study published in Frontiers, Tsien’s team has since found evidence of this algorithm at play in seven regions of the brains of mice and hamsters. In presenting the animals with various stimuli like differing kinds of food or social interactions, the scientists monitored the animals’ neuronal activity by attaching electrodes to their brains.

The team then assessed the distribution patterns of the neural cliques in response to these stimuli and found over a dozen distinct neural cliques that demonstrated a “specific-to-general” pattern of neuronal clusters, as predicted by the mathematical logic of the Theory of Connectivity. It’s this “specific-to-general” pattern that gives the brain the ability to grasp a specific idea as well as the ‘big picture,’ resulting in a capacity to deal with uncertainty and infinite possibilities — a capacity that constitutes intelligence.

The theory’s mathematical logic seems to be a “unifying design principle” that informs the organization of the brain at a fundamental level, spanning from the most simple to the most complex neural networks, leading Tsien and his team to make analogies with other universal attributes. The findings will no doubt have an impact in how future artificial neural networks might be constructed.

“Many people have long speculated that there has to be a basic design principle from which intelligence originates and the brain evolves, like how the double helix of DNA and genetic codes are universal for every organism,” Tsien explains. “We present evidence that the brain may operate on an amazingly simple mathematical logic.”

Image: Augusta University
