
How Artificial Intelligence Could Reconstruct Your Memories

Aug 3rd, 2017 11:00am

Hardly a day passes without news of some amazing new thing that artificial intelligence can do. From besting human champions at chess, Go and poker, to helping discover drugs and manage distributed micro-power grids, artificial intelligence is performing tasks that were once thought impossible.

But the near-impossible is happening, and it's happening fast. We've already heard how researchers are training AI to decode complex thoughts in the human brain. Now a team of scientists from the University of Oregon is using AI to almost literally pull someone's memory out of their brain, or at least an image of it.

The team's findings were recently published in The Journal of Neuroscience, detailing how the contents of encoded memories can be retrieved from the angular gyrus (ANG), a part of the posterior lateral parietal cortex that governs a number of functions, including language, number processing, spatial cognition, attention and memory retrieval.

Computer Vision

Here's how the multiple-part experiment was designed. In the first part, each of the 23 participants in the study had their brain activity scanned in an fMRI (functional magnetic resonance imaging) machine while they were shown a series of photographs, each depicting a head shot of a different person.

The fMRI detected changes in the participants' cerebral blood flow at the moment they saw each photo, and these slight variations were recorded and processed in real time by AI software. Characteristics such as skin tone, eye shape and other visibly noticeable facial components were broken down into what are called eigenfaces: basis vectors of facial variation used in the computations underlying computer vision and facial recognition software.

“Using an approach inspired by computer vision methods for face recognition, we applied principal component analysis to a large set of face images to generate eigenfaces,” wrote the researchers.

Each face was then scored numerically against these eigenfaces so that it could be translated into something the AI could parse as training data.
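As a rough illustration of how that eigenface scoring might look in code (a minimal sketch using scikit-learn; the image sizes, component count and variable names here are assumptions, not the authors' actual pipeline):

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical training set: 1,000 grayscale face photos, each 64x64 pixels,
# flattened into the rows of an (n_faces, n_pixels) matrix.
faces = np.random.rand(1000, 64 * 64)  # stand-in for real face images

# Principal component analysis over the face set; each principal component
# is an "eigenface", a basis image capturing one recurring axis of facial
# variation across the set.
pca = PCA(n_components=50)

# Project every photo onto the eigenfaces: each face reduces to a short
# vector of 50 numeric scores -- the "eigenface values" used as training data.
eigenface_values = pca.fit_transform(faces)  # shape: (1000, 50)
```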

“We then modeled relationships between eigenface values and patterns of fMRI activity,” explained the team. “Activity patterns evoked by individual faces were then used to generate predicted eigenface values, which could be transformed into reconstructions of individual faces.”
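Continuing the sketch, "modeling relationships between eigenface values and patterns of fMRI activity" can be pictured as a regression from voxel activity to eigenface scores (again a hedged illustration; the paper's actual encoding model may differ):

```python
from sklearn.linear_model import Ridge

# Hypothetical fMRI data: one activity pattern (say, 5,000 voxel values)
# recorded for each face a participant viewed, row-aligned with the
# eigenface_values computed above.
fmri_patterns = np.random.rand(1000, 5000)  # stand-in for real scans

# Learn a linear mapping from brain-activity patterns to eigenface values.
decoder = Ridge(alpha=1.0)
decoder.fit(fmri_patterns, eigenface_values)
```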

Reconstructed Memories

For the second (or what might be called "mind-reading") part of the experiment, the AI was tested on its ability to reconstruct a new round of face photographs using only the participants' recorded brain activity, culled via the fMRI machine. Based on the training data from the previous round, the AI was able to "translate" the test subjects' neural patterns into eigenfaces that formed the basis of the reconstructed images.

It's not incredibly accurate, yet at the same time there's something eerily uncanny about the results.
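In terms of the sketch above, that reconstruction step would amount to predicting eigenface values from brain activity the model has never seen, then inverting the PCA projection back into pixel space (hypothetical shapes and names, as before):

```python
# Activity patterns recorded while participants viewed a new round of faces.
new_patterns = np.random.rand(10, 5000)  # stand-in for held-out scans

# Predict eigenface values from brain activity alone, then map those values
# back through the eigenfaces to approximate the faces being viewed.
predicted_values = decoder.predict(new_patterns)            # (10, 50)
reconstructions = pca.inverse_transform(predicted_values)   # (10, 4096)
reconstructed_images = reconstructions.reshape(-1, 64, 64)  # viewable images
```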

In yet another test, participants were asked to recall a face from memory, drawing on memories stored in and retrieved from the brain's angular gyrus. These AI-powered reconstructions were surprisingly successful: for the most part, the AI was able to draw out distinct qualities like gender, skin color and emotional expression.

To validate their results and gain some insight into the inner workings of the brain, the team compared the reconstructions made using the memory-retrieving angular gyrus (ANG) against those made using the occipitotemporal cortex (OTC), which is sensitive to facial features.

“Strikingly, we also found that a model trained on ANG activity patterns during face perception was able to successfully reconstruct an independent set of face images that were held in memory,” wrote the researchers. “[Activity] patterns in… the angular gyrus, [support the] successful reconstruction of perceived and remembered faces, confirming a role for this region in actively representing remembered content.”
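In code terms, that comparison boils down to fitting the same decoder on voxels drawn from each region and seeing how well each supports reconstruction (a hypothetical sketch; the voxel splits and error metric here are placeholders, not the study's analysis):

```python
# Hypothetical voxel subsets standing in for the two regions of interest.
ang_voxels = fmri_patterns[:, :2500]   # "angular gyrus" voxels
otc_voxels = fmri_patterns[:, 2500:]   # "occipitotemporal cortex" voxels

for name, voxels in [("ANG", ang_voxels), ("OTC", otc_voxels)]:
    region_decoder = Ridge(alpha=1.0).fit(voxels, eigenface_values)
    # Mean squared error of the predicted eigenface values, as a crude
    # proxy for reconstruction quality from this region.
    error = np.mean((region_decoder.predict(voxels) - eigenface_values) ** 2)
    print(f"{name} mean squared eigenface error: {error:.4f}")
```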

As one can see here, the so-called mind-reading capabilities of machines aren't quite there yet. As the researchers point out and the results bear out, people still have control over how their memories take shape, and the reconstructed memories seen in this experiment aren't yet accurate enough for someone to, say, mentally identify a criminal suspect beyond the shadow of a doubt. But the technology appears to be making strides, and we may eventually get to that point someday.

Read the full paper over at The Journal of Neuroscience.

Images: University of Oregon
