Everywhere you look nowadays, it seems that artificial intelligence is making enormous leaps and bounds. It’s gotten smart enough that it can trounce humans in a growing number of tasks — winning games like chess, Go and poker, as well as engaging in creative endeavors such as writing novels and music — all once thought of as unassailable by machines. We’re also seeing an emerging trend of AI-powered automation in industries like medicine, sales, retail and hotel management — making us wonder what will happen once the machines take all the jobs.
Nevertheless, despite these recent high-profile achievements, AI still has a long way to go before it comes close to imitating, let alone surpassing, the complex mystery that is human intelligence. While there have been advances in getting machines to learn how to learn and reason like humans, current AI models are still relatively narrow in scope, and have yet to embody the full range of cognitive abilities that humans draw on daily to solve a wide variety of problems. The goal of creating what's known as artificial general intelligence (AGI), an intelligence capable of performing any intellectual task that a human being can, still eludes experts.
But according to Demis Hassabis, co-founder of AI startup DeepMind, we may come a bit closer to solving the problem by first gaining a better understanding of how human intelligence works. In a paper recently published in Neuron, Hassabis and co-authors Dharshan Kumaran, Christopher Summerfield and Matthew Botvinick make the case for forging stronger connections between neuroscience and the various fields of AI development in order to help create a true artificial general intelligence.
The authors point out that there are a number of advantages to translating lessons learned from studying biological intelligence: “Neuroscience provides a rich source of inspiration for new types of algorithms and architectures, independent of and complementary to the mathematical and logic-based methods and ideas that have largely dominated traditional approaches to AI.”
Besides that, by studying how the brain’s cognitive systems work, we can gain better insights into what nature has deemed evolutionarily relevant and what will, by extension, be relevant in developing a smarter AI.
“Neuroscience can provide validation of AI techniques that already exist,” wrote the authors. “If a known algorithm is subsequently found to be implemented in the brain, then that is strong support for its plausibility as an integral component of an overall general intelligence system.”
Lessons from Neuroscience
Finding links between neuroscience and artificial intelligence is nothing new, and the paper provides a good overview of significant milestones over the decades. Hassabis, who trained extensively as a neuroscientist before launching DeepMind, points out that early AI research in deep learning and reinforcement learning was built on prior neuropsychological studies of mammalian brains and animal behavior.
Current AI research continues that mutualistic relationship between nature and machine. For example, in developing artificial attention, researchers looked to the biological brain as a model, which generally consists of modular subsystems that govern various important functions.
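To make the idea of artificial attention concrete, here is a minimal sketch (the function names and numbers are hypothetical, not from the paper): scores between a “query” and a set of stored keys are converted into weights, so the output selectively focuses on the entries that best match, much as the brain prioritizes some inputs over others.

```python
import math

def softmax(scores):
    # Normalize raw scores into weights that sum to 1.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(query, keys, values):
    # Score each key by its dot product with the query, then return
    # the weighted average of the values: entries whose keys best
    # match the query dominate the result.
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    weights = softmax(scores)
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]
```

For example, a query of `[1.0, 0.0]` against keys `[[1.0, 0.0], [0.0, 1.0]]` pulls the output toward the first value, since that key matches the query best.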
This same approach of biomimicking what works in nature has also been applied to developing artificial versions of episodic memory (learning from experiences quickly in “one shot”), working memory (the ability to store and manipulate information within an active system) and continual learning (being able to master new tasks without forgetting previously learned skills).
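A toy sketch can illustrate the episodic-memory idea (this is an illustrative simplification with hypothetical names, not the authors' implementation): each experience is written to a store once, in “one shot,” and later recalled by similarity to the current situation rather than relearned through many repetitions.

```python
class EpisodicMemory:
    """A toy episodic store: write an experience once, recall it later
    by similarity to the current situation."""

    def __init__(self):
        self.episodes = []  # list of (state, outcome) pairs

    def write(self, state, outcome):
        # A single experience is retained immediately; no repeated
        # training over many examples is needed.
        self.episodes.append((state, outcome))

    def recall(self, state, k=1):
        # Return the outcomes of the k stored states closest to the
        # query, by squared Euclidean distance.
        def dist(episode):
            stored, _ = episode
            return sum((a - b) ** 2 for a, b in zip(stored, state))
        nearest = sorted(self.episodes, key=dist)[:k]
        return [outcome for _, outcome in nearest]
```

Having stored `(0.0, 0.0) → "safe"` and `(5.0, 5.0) → "danger"`, a query near the origin recalls `"safe"` after only one exposure to each situation.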
“Virtual Brain Analytics”
Yet, despite this ongoing interdisciplinary sharing, Hassabis and his colleagues assert that the gap in intelligence between human and machine remains quite large. This gap is due in part to our incomplete knowledge of biological brains, the underlying mechanisms of cognition and the nature of consciousness itself. It is also due to the fact that the complex computations that drive AI can be an inscrutable “black box”: it works, but we don’t really know why.
But that mysterious landscape is now gradually being illuminated, thanks to new technologies such as brain imaging and genetic bioengineering, which allow neuroscientists to peer into and tinker with neural circuitry. This empirical knowledge can then be transferred toward creating novel neural architectures capable of human-like learning, reasoning, intuition, creativity, imagination and hierarchical planning, in order to effectively tackle complex, real-world problems.
Hassabis also proposes further developing what he calls “virtual brain analytics,” or tools for opening up that figurative “black box” of AI systems. These tools to analyze and pick apart the inner workings of the “virtual brain” would be inspired by techniques already being used in neuroscience, such as tools for visualizing brain states and mapping receptive fields.
Ultimately, Hassabis and his colleagues believe that for AI to progress beyond highly specialized but generally weak systems, toward an intelligence approaching human-level complexity, AI researchers will need to actively collaborate with neuroscientists. As both fields grow and expand, it will be difficult for individuals to become experts in both disciplines, creating a need for a “common language” between the two to help identify shared observations and discoveries.
“Our view is that leveraging insights gained from neuroscience research will expedite progress in AI research,” explained the authors. “The exchange of ideas between AI and neuroscience can create a ‘virtuous circle’ advancing the objectives of both fields.”
Images: Many Wonderful Artists (Public domain).