Mind Reading Technology Uses Machine Learning to Decode Complex Thoughts

Jul 9th, 2017 4:00am

Technology to read people’s minds may seem like something straight out of science fiction or the classified files of secretive government agencies, but it may soon be a reality, and a mass-market one at that. Recent advances in non-invasively identifying thoughts in the human brain have been helped along by new neuroimaging technology, but they have typically been limited to pinpointing simpler concepts, like single images, words or numbers, or yes-or-no choices.

Now a team of researchers from Carnegie Mellon University is showing that it is possible to identify more complex thoughts, ones that might be encapsulated in a sentence — in other words, thoughts made up of a collection of semantic building “blocks” — using fMRI (functional magnetic resonance imaging) technology and machine learning to model the associated brain activation patterns.

“One of the big advances of the human brain was the ability to combine individual concepts into complex thoughts, to think not just of ‘bananas,’ but ‘I like to eat bananas in the evening with my friends,'” said project supervisor and CMU psychology professor Marcel Just, in a CMU promotional story. “We have finally developed a way to see thoughts of that complexity in the fMRI signal. The discovery of this correspondence between thoughts and brain activation patterns tells us what the thoughts are built of.”

Mapping Complex Thoughts

The team’s findings, which were published in the journal Human Brain Mapping and funded by the Intelligence Advanced Research Projects Activity (IARPA), describe how the team used fMRI technology to digitally map out the brain’s coding of 240 “events” — as represented in “stimulus sentences” like “The witness shouted during the trial,” “The journalist interviewed the judge” and “The happy child found the dime.”

These sentences were built out of an inventory of 42 “meaning components,” or as the researchers called them, “neurally plausible semantic features” (NPSFs), that draw on distinct characteristics like person, setting, size and physical action. The researchers noted that each type of information is processed in different parts and systems of the brain, and the overall neural picture was mapped when each of the study’s seven participants finished reading a sentence.

By training a regression model on the patterns of neural activation associated with each semantic characterization, the software was then able to predict the neural signature behind an entirely new sentence with new words. In addition, when shown the neural signature of a new sentence, the computer model was then also able to predict its underlying semantic content, with an accuracy rate of 87 percent.
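The paper’s own data and code aren’t reproduced here, but the two directions described above — predicting a neural signature from semantic features, and decoding the features behind an observed signature — can be sketched on synthetic data. Only the 42 NPSFs and 240 sentences are sizes taken from the study; the voxel count, noise level and candidate set below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sizes: 42 NPSFs and 240 stimulus sentences come from the study;
# the 200-voxel activation pattern is purely illustrative.
n_features, n_voxels, n_sentences = 42, 200, 240

# Each sentence is represented as a binary vector marking which
# semantic features (person, setting, physical action, ...) it contains.
X = rng.integers(0, 2, size=(n_sentences, n_features)).astype(float)

# A simulated "true" feature-to-voxel mapping, plus measurement noise,
# stands in for the real fMRI recordings.
W_true = rng.normal(size=(n_features, n_voxels))
Y = X @ W_true + 0.1 * rng.normal(size=(n_sentences, n_voxels))

# Encoding model: least-squares regression from features to activations.
W_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Direction 1: predict the neural signature of an unseen sentence.
x_new = rng.integers(0, 2, size=n_features).astype(float)
y_pred = x_new @ W_hat

# Direction 2: decode. Given an observed pattern, rank candidate
# feature vectors by how well their predicted signatures correlate.
y_obs = x_new @ W_true  # "observed" activation for the new sentence
candidates = rng.integers(0, 2, size=(50, n_features)).astype(float)
candidates[0] = x_new   # make sure the true sentence is among them
scores = [np.corrcoef(c @ W_hat, y_obs)[0, 1] for c in candidates]
best = int(np.argmax(scores))  # index of the best-matching candidate
```

On this toy data, with so little noise, the decoder picks out the true candidate; the study’s 87 percent accuracy figure, of course, reflects far harder real-fMRI conditions.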

One of the big challenges was to get around fMRI’s tendency to mix up brain signals. To tackle this, the team used an intriguing approach: they obtained “estimates” of neural signatures of individual word concepts by “averaging” the brain images of the same target concept as it appeared in different sentences (assuming that the signatures of the other words were “averaged out”). As the team explains in their paper: “The resulting estimates of the neural representations of concepts provide a robust basis for developing a mapping between semantic representations and brain activation patterns.”
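The averaging step can likewise be illustrated with synthetic vectors. The voxel count, the number of sentences and the interference model here are all assumptions for the demo, not values from the paper: if each observed image mixes the target word’s signature with roughly independent contributions from the other words, the mean across sentences converges on the signature itself.

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels, n_sentences = 100, 30  # illustrative sizes, not from the paper

# The target word's "true" neural signature.
word_sig = rng.normal(size=n_voxels)

# Each observed image mixes that signature with interference from the
# other words in the sentence, modeled here as independent noise.
images = word_sig + rng.normal(size=(n_sentences, n_voxels))

# Averaging across the sentences the word appears in cancels the
# interference, leaving an estimate of the word's own signature.
estimate = images.mean(axis=0)

err_single = np.linalg.norm(images[0] - word_sig)  # one noisy image
err_avg = np.linalg.norm(estimate - word_sig)      # averaged estimate
```

The averaged estimate lands much closer to the true signature than any single image does — the “robust basis” the paper refers to.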

“Our method overcomes the unfortunate property of fMRI to smear together the signals emanating from brain events that occur close together in time, like the reading of two successive words in a sentence,” said Just. “This advance makes it possible for the first time to decode thoughts containing several concepts. That’s what most human thoughts are composed of.”

The team now plans to move onto mapping the brain as it contemplates more complex concepts. “A next step might be to decode the general type of topic a person is thinking about, such as geology or skateboarding,” said Just. “We are on the way to making a map of all the types of knowledge in the brain.”

So while there’s still a way to go before this technology can interpret even more complex thoughts, it would no doubt be useful in many areas, like improved brain-machine interfaces (BMIs), secure authentication, law enforcement and even neuromarketing, which Facebook reportedly plans to pursue.

But many remain justifiably cautious about such technology, as any tool can be used for good or for annoyance, such as Facebook potentially farming your thoughts to sell you ads — or for even more nefarious ends, such as governments invading people’s privacy and monitoring their thoughts. That would set up a future Orwellian scenario where we might have to be careful of what we think, for fear of being punished for so-called thoughtcrimes. Of course, it would be unrealistic to try to stop new technologies from developing, but it would be a good idea to make sure a healthy ethical debate around these issues remains at the forefront too.

Images: Carnegie Mellon University.
