“Animal intelligence may be the next big thing in artificial intelligence,” proclaims a video on the IEEE Spectrum site. “But first, scientists must digitize a rat brain.”
The video accompanied a 3,000-word article explaining that, currently, “there are some situations when a three-year-old can easily defeat the fanciest AI in the world.” Specifically, children excel at “one-shot learning” — the ability to recognize something after seeing it only once.
“Humans have an amazing ability to make inferences and generalize,” said Harvard University neuroscientist David Cox, who is also an expert on artificial intelligence. Or, as IEEE Spectrum puts it, AI researchers are “deeply envious of toddlers’ facility with it.”
So there’s now an ambitious $100 million project funded by the U.S. Intelligence Advanced Research Projects Agency (IARPA, an intelligence community cross-agency office) to reverse engineer this kind of smarts. Its five-year mission: to identify a brain’s “strategy” for simple feats of identification, so it can be recreated with algorithms.
The Machine Intelligence from Cortical Networks (MICRONS) program is focusing on the area where the brain processes what it sees — the visual cortex. But it’s actually attempting to map every neuron in one cubic millimeter of brain tissue: 50,000 neurons, interconnected through half a billion synapses. The ultimate big data set.
No one has ever attempted this before.
The article points out that today’s neural networks are “loosely inspired by the brain’s structure,” but human brains ultimately have 86 billion neurons — or 1.7 million times more than appear in that cubic millimeter — and trillions of connections are possible. (A trillion is 1,000,000,000,000.) But the important thing, it seems, is identifying the mechanism that underlies it all.
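The scale gap the article describes can be checked with quick arithmetic (a sketch in Python; the neuron and synapse counts are the article’s own figures):

```python
# Back-of-the-envelope check on the scale figures cited in the article.
neurons_in_sample = 50_000             # neurons in the 1 mm^3 MICRONS volume
synapses_in_sample = 500_000_000       # "half a billion synapses"
neurons_in_human_brain = 86_000_000_000

# The human brain holds ~1.7 million times more neurons than the sample.
scale_factor = neurons_in_human_brain / neurons_in_sample
print(f"{scale_factor:,.0f}x")                  # → 1,720,000x

# Average connectivity within the sampled cube:
print(synapses_in_sample // neurons_in_sample)  # → 10000 synapses per neuron
```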
“The big gap is understanding operations on a circuit level,” (now former) MICRONS program manager R. Jacob Vogelstein told IEEE Spectrum, “how thousands of neurons work together to process information.”
If they succeed, the article suggests, the payoff could be enormous: “The government’s big bet is that brainlike AI systems will be more adept than their predecessors at solving real-world problems.”
One obvious application would be identifying faces from security-camera footage — but the hope is to apply this to more than computer vision. The entire cerebral cortex has a “suspiciously similar” structure, according to Cox — which suggests there’s one fundamental circuit the brain uses to process information. Identifying it would be a major step toward human-like general intelligence.
Their research focuses on embedding state-of-the-art equipment in the brains of rats and mice. “It all starts with a rat in a cage learning to play a video game,” quipped the video that accompanies the article. The rat gets rewarded with a drop of sweet juice if it correctly identifies one of two images flashed on a small display.
Using a two-photon excitation microscope, the researchers first scan a live rat’s brain with an infrared laser to record flashes from a fluorescent tag that indicates when a neuron is active. “The microscope makes movies of neural activities,” explained the video, while the article adds poetically that “The 3D video shows patterns that resemble green fireflies winking on and off in a summer night.”
“You can watch a rat having a thought,” Cox explained.
“It’s very similar to how you’d try to reverse engineer an integrated circuit,” Vogelstein added. “You could stare at the chip in extreme detail, but you won’t really know what it’s meant to do unless you see the circuit in operation.”
“It may be much easier to engineer the brain than to understand it.” — George Church, Wyss Institute for Biologically Inspired Engineering
But that’s just the beginning. The rat’s brain tissue is then FedEx’d from Massachusetts to Illinois, where the U.S. Department of Energy’s Argonne National Laboratory performs sophisticated imaging, using a particle accelerator to produce “extremely bright” X-rays. After the tissue is imaged from different angles, the X-ray images can be combined into a single three-dimensional image.
Then the brain tissue returns to Massachusetts, where it’s sliced into 33,000 strips, each just 30 nanometers thick, captured on tape and transferred to silicon wafers. The 33,000 slices then meet “the world’s fastest scanning electron microscopes,” which run continuously, creating an image of each one at a 4-nanometer resolution.
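Those slicing figures are easy to verify: 33,000 slices at 30 nanometers apiece span almost exactly the cube’s one-millimeter dimension (a quick check in Python):

```python
# Sanity check: 33,000 slices at 30 nm each should span the ~1 mm cube.
num_slices = 33_000
slice_thickness_nm = 30

total_nm = num_slices * slice_thickness_nm
print(total_nm)   # → 990000 nm, i.e. 0.99 mm
```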
Now it’s possible to see the axons that connect the neurons — and since there are millions of them, there’s another piece of automation: software that automatically traces each axon from one tissue slice to the next, along with its thousands of connections to other neurons. Or, as IEEE Spectrum puts it, it “reconstructs all the neural wiring within the cube of brain tissue.”
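The tracing step can be pictured as following a chain of fragment matches from slice to slice. The sketch below is purely illustrative: real reconstruction pipelines match fragments by image overlap using machine-learned models, and `trace_axons`, its data, and its one-successor-per-fragment simplification are all hypothetical:

```python
# Illustrative sketch only: stitching axon fragments across adjacent tissue
# slices. Here each slice simply maps a fragment id to the fragment it
# continues into on the next slice (None if the axon ends there).

def trace_axons(slice_links):
    """Return the full front-to-back path of each axon that starts
    in the first slice, by following the fragment-to-fragment links."""
    paths = []
    for start in sorted(slice_links[0]):
        path, current = [start], start
        for layer in slice_links:
            nxt = layer.get(current)
            if nxt is None:
                break
            path.append(nxt)
            current = nxt
        paths.append(path)
    return paths

# Toy example: two axons traced through three slices.
slice_links = [
    {"a1": "a2", "b1": "b2"},   # slice 0 → slice 1
    {"a2": "a3", "b2": "b3"},   # slice 1 → slice 2
    {"a3": None, "b3": None},   # both axons terminate here
]
print(trace_axons(slice_links))  # → [['a1', 'a2', 'a3'], ['b1', 'b2', 'b3']]
```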
Ironically, the computer isn’t as good as a human scientist, but “there aren’t enough humans on earth to trace this much data,” said Cox, who the article describes as “obsessively focused on automating every step of the process.” It’s a problem that software engineers at Harvard and the Massachusetts Institute of Technology are already working on. But however the diagram gets made, researchers will then combine it with the fluorescent images of the rat brain’s activity, which according to the article should reveal the brain’s computational structures.
“It should show which neurons form a circuit that lights up when a rat sees an odd lumpy object, mentally flips it upside down, and decides that it’s a match for object A,” the article’s writer, Eliza Strickland, noted.
In an interesting twist, they also plan to train a neural network on the same image-recognition task, and compare the results.
Meanwhile, there’s also another MICRONS project taking an entirely different approach. Researchers at Harvard and Carnegie Mellon University have genetically engineered mice so there’s a unique sequence of molecules on both ends of each axon (a technique called “DNA barcoding”). Then their software can generate a map by connecting each pair sharing the same axon-identifying code. One of the team’s leaders — George Church, a professor at Harvard’s Wyss Institute for Biologically Inspired Engineering — thinks the technique could ultimately map the whole brain of a mouse. That’s all 70 million neurons and 70 billion connections.
Church ultimately even suggests that one day it may be possible to ditch the silicon altogether, and engineer brains with special circuits that speed up their cognition — in effect, to build better biological brains: “I think we’ll soon have the ability to do synthetic neurobiology, to actually build brains that are variations on natural brains.”
In the article, he shares a near-heretical thought — that successfully recreating a brain with circuits and algorithms may not ultimately provide an answer. “I think understanding is a bit of a fetish among scientists,” he tells IEEE Spectrum, in a refreshing counterpoint to the prevailing belief that everything we humans experience can be reduced to patterns of electric pulses.
“It may be much easier to engineer the brain than to understand it.”
Feature image: IEEE video.