Artificial intelligence has evolved at lightning speed in the last several years, whether it's cropping up in various bits of everyday technology like the intelligent personal assistants on our smartphones, or in more specialized applications such as discovering new drugs, cracking the mystery of how proteins fold three-dimensionally, or beating humans at their own games. But these myriad tasks demand a lot of computational power, often outstripping what typical silicon-based hardware can offer. That's a consequence of how such hardware is built: conventional computer architectures shuttle data back and forth between separate memory and processor units. Compare that with the architecture of the human brain, which stores and processes information within the same synapse (the junction between two neurons), resulting in relatively more effective and lower-energy operation.
We can certainly translate lessons from neuroscience to help inform how future AI systems will be built — after all, we already have neuromorphic systems that use electronic components to mimic the functioning of the human brain. Now an international team of researchers from the University of Münster, Oxford University and the University of Exeter has taken that approach one step further, recently unveiling a neurosynaptic microchip that uses photons and algorithms to learn, store, and transfer information more efficiently.
“Our system has enabled us to take an important step towards creating computer hardware which behaves similarly to neurons and synapses in the brain and which is also able to work on real-world tasks,” said University of Münster professor Wolfram Pernice, who was one of the lead authors of the study that was published in Nature. “By working with photons instead of electrons we can exploit to the full the known potential of optical technologies — not only in order to transfer data, as has been the case so far, but also in order to process and store them in one place.”
As the paper details, the team's neurosynaptic chip contains artificial neurons and "all-optical synapses" built with optical waveguides — physical structures that guide electromagnetic waves in the optical spectrum. These waveguides are made with phase-change materials, which alter their light-transmission properties depending on whether the material is in an amorphous or a crystalline state. For instance, when a light source heats the phase-change material, its atoms shift from an orderly, crystalline arrangement into a more chaotic, amorphous one, resulting in the synaptic waveguide letting more photons through.
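To make that mechanism concrete, here is a minimal toy model of such a synapse in Python. The function names, transmission values, and switching step are illustrative assumptions for the sketch, not figures from the paper: the synaptic "weight" is simply how crystalline the cell is, and a strong optical pulse partially amorphizes it, letting more light through.

```python
# Toy model of an "all-optical synapse": the crystalline fraction of a
# hypothetical phase-change cell determines how much light it transmits.
# All numbers here are illustrative assumptions, not the paper's values.

def transmission(crystalline_fraction, t_amorphous=0.9, t_crystalline=0.3):
    """Interpolate between the amorphous (clear) and crystalline (opaque) states."""
    return (crystalline_fraction * t_crystalline
            + (1.0 - crystalline_fraction) * t_amorphous)

def write_pulse(crystalline_fraction, energy, step=0.2):
    """A strong heating pulse amorphizes part of the cell (raising transmission);
    a weaker pulse lets it recrystallize (lowering transmission)."""
    if energy > 0.5:   # strong pulse: melt-quench toward the amorphous state
        return max(0.0, crystalline_fraction - step)
    return min(1.0, crystalline_fraction + step)  # weak pulse: recrystallize

state = 1.0                             # fully crystalline: low transmission
state = write_pulse(state, energy=0.8)  # strong pulse -> partly amorphous
print(transmission(state))              # transmission has increased
```

The useful property for neuromorphic hardware is that the state is non-volatile: the cell "remembers" its weight without any power, just as a biological synapse retains its strength between spikes.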
According to the researchers, this change in state allows their artificial neurosynaptic network to function much like its biological analog. During their experiments, the team constructed a microchip with four artificial neurons and a total of 60 synapses, which can be layered to scale up to more complex neurosynaptic systems. Photonic data is sent using wavelength division multiplexing (WDM), in which multiple light wavelengths (or colors) carry information over the same medium.
Now here’s where things get interesting: to test their system, the team carried out supervised learning tests, in which the system was shown a series of numbers or letters and then asked whether it matched another series. Training data was fed into the system as a series of light pulses, and the team found that their all-optical neuromorphic system could indeed solve these simple pattern recognition tasks. For situations where the desired output is not known in advance, the team used unsupervised learning tests, in which the network is set loose to learn patterns on its own, using a waveguide feedback loop and a simplified learning rule: a synapse is strengthened when its input pulse overlaps with the output pulse as the neuron fires.
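That overlap rule is Hebbian in spirit: synapses whose inputs coincided with the neuron's firing get stronger. A minimal software sketch of such a rule follows; the learning rate, decay, and threshold are hypothetical choices for illustration, not the hardware's actual parameters.

```python
# Hebbian-style sketch of the overlap rule described above: when the neuron
# fires, each synapse whose input pulse overlapped the output pulse is
# strengthened; the rest decay slightly. Toy values, not the paper's rule.

def update_weights(weights, inputs, fired, lr=0.1, decay=0.02):
    updated = []
    for w, x in zip(weights, inputs):
        if fired and x > 0:               # input and output pulses overlap
            w = min(1.0, w + lr * x)      # strengthen this synapse
        else:
            w = max(0.0, w - decay)       # otherwise let it fade
        updated.append(w)
    return updated

weights = [0.5, 0.5, 0.5]
inputs  = [1.0, 0.0, 1.0]   # a pattern presented as light pulses
fired   = sum(w * x for w, x in zip(weights, inputs)) > 0.8  # threshold neuron
weights = update_weights(weights, inputs, fired)
print(weights)
```

Repeated over many presentations, a rule like this lets the network pick out recurring pulse patterns without ever being told the right answer.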
As the team notes, since the system architecture directly processes optical data, it has the potential to perform several orders of magnitude faster than biological neural networks. Such a system could easily be scaled up to perform more complex, generalized tasks that would be required by deep learning applications.
“This integrated photonic system is an experimental milestone,” said Pernice. “The approach could be used later in many different fields for evaluating patterns in large quantities of data, for example in medical diagnoses.”
Read more over in Nature.
Images: University of Münster, Oxford University and University of Exeter