There’s a lot of machine learning wrapped up in the convenient technology we take for granted today: spam filters for our email inboxes, fraud and counterfeit detection, and data management. Of course, much of this machine learning magic happens invisibly in the background, but why these models make certain decisions over others is not always clear, even to the experts who design them, making this aspect of AI something of a mysterious “black box.”
But there are other ways to implement machine learning. In a novel approach that makes the inner workings of machine learning literally visible, researchers from UCLA have 3D printed a neural network that processes information optically, using light rather than electrons.
In their recently published paper in Science, the team describes how they used a physical mechanism — namely, printed layers of diffractive material that represent the layers of an artificial neural network — to perform machine learning tasks. Their Diffractive Deep Neural Network (D2NN) can do what a computer-based neural network might do, such as image recognition, but at the speed of light.
To prove their concept, the team first trained an artificial neural network to recognize and identify handwritten numerals from 0 to 9. Since training a neural network demands substantial computational resources, the training itself was done conventionally, on a computer. Once the network was designed and trained, the team then 3D printed the finalized machine learning model as a stack of thin polymer layers that allow light to pass through.
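The split the article describes, training conventionally in software and only then freezing the weights into a physical object, can be pictured with a toy classifier. Everything below is an illustrative stand-in, not the authors’ setup: synthetic Gaussian clusters play the role of the digit images, and plain softmax regression stands in for their network.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-in for the handwritten-digit data: 10 Gaussian clusters in 64-D
# (the real model was trained on images of the digits 0 through 9).
n_per, dim, classes = 50, 64, 10
centers = rng.normal(0, 3, (classes, dim))
X = np.vstack([centers[k] + rng.normal(0, 1, (n_per, dim))
               for k in range(classes)])
y = np.repeat(np.arange(classes), n_per)

# Plain softmax regression trained by gradient descent: this is the
# "conventional, on a computer" stage. In the D2NN workflow the learned
# parameters would then be frozen and fabricated as printed layers.
W = np.zeros((dim, classes))
for _ in range(200):
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    p[np.arange(len(y)), y] -= 1.0        # softmax cross-entropy gradient
    W -= 0.01 * (X.T @ p) / len(y)

accuracy = float(np.mean((X @ W).argmax(axis=1) == y))
```

Once trained, `W` is fixed; the key point of the UCLA design is that the analogous fixed parameters live in the physical geometry of the printed layers, not in memory.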
The pixellated texture of the layers is analogous to the artificial “neurons” that make up an artificial neural network, each of which is connected to other artificial neurons in the same or other layers. According to the team, these printed neural networks act like a physical brain with its physical neural connections — except that in this case, it’s light that connects the artificial neurons and carries information from one layer to the next.
“Each point on a given layer either transmits or reflects an incoming wave, which represents an artificial neuron that is connected to other neurons of the following layers through optical diffraction,” wrote the researchers. “By altering the phase and amplitude, each ‘neuron’ is tunable.”
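A rough way to picture what the quote describes: each pixel of a layer multiplies the incoming wave by a complex transmission coefficient (an amplitude and a phase shift), and optical diffraction then spreads the modulated wave to the next layer. The numpy sketch below uses the standard angular spectrum method of scalar diffraction; the grid size, sub-millimeter (terahertz-band) wavelength, pixel pitch, and random phases are illustrative values, not the authors’ parameters.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, distance):
    """Propagate a complex optical field through free space using the
    angular spectrum method (FFT-based scalar diffraction)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)          # spatial frequencies (1/m)
    FX, FY = np.meshgrid(fx, fx)
    # Free-space transfer function; evanescent components are clipped.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * distance) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# One "layer" of a printed network: each pixel (neuron) applies a learned
# complex transmission, i.e. an amplitude in [0, 1] and a phase shift.
rng = np.random.default_rng(0)
n = 64
amplitude = np.ones((n, n))                    # pure-phase layer
phase = rng.uniform(0, 2 * np.pi, (n, n))      # stand-in for trained values
transmission = amplitude * np.exp(1j * phase)

incoming = np.ones((n, n), dtype=complex)      # plane-wave input
modulated = incoming * transmission            # neurons modulate the wave
# Diffraction then "connects" these neurons to the next layer's pixels.
outgoing = angular_spectrum_propagate(modulated, wavelength=0.75e-3,
                                      dx=1e-3, distance=0.03)
```

Because diffraction spreads each pixel’s contribution across the whole next layer, every “neuron” is effectively connected to every neuron downstream, which is what the researchers mean by connections formed through optical diffraction.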
When presented with a handwritten number, monochromatic laser light in the terahertz band is shone through the layers, and the diffractive neural network categorizes the number by focusing light onto one of ten “detector regions” at the end of the stack.
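The readout step amounts to summing the optical power landing in each of the ten detector regions and picking the brightest. A minimal sketch, with a hypothetical region layout (the actual detector geometry is not described here):

```python
import numpy as np

def classify_by_detectors(intensity, regions):
    """Pick the class whose detector region collects the most light.

    intensity: 2D array of optical power at the output plane.
    regions: list of (row_slice, col_slice), one per class.
    """
    signals = [intensity[r, c].sum() for r, c in regions]
    return int(np.argmax(signals))

# Hypothetical layout: ten 8x8 detector regions tiled in two rows of five.
regions = [(slice(16 * (i // 5), 16 * (i // 5) + 8),
            slice(16 * (i % 5), 16 * (i % 5) + 8)) for i in range(10)]

# Toy output plane: most of the light has been focused into region 3,
# which is what a trained stack of layers would do for the digit "3".
plane = np.zeros((32, 80))
r, c = regions[3]
plane[r, c] = 1.0
predicted = classify_by_detectors(plane, regions)  # → 3
```

No electronics are needed for this decision step; in principle only the detectors themselves have to be read out.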
In addition to categorizing numbers, the team ran tests with a 3D printed neural network that classifies items of clothing. For each type of data, the researchers had to print a physical version of the trained neural network — much as a mechanical calculator is built to perform a fixed set of arithmetic operations. Their experiments showed that these relatively complex tasks could be carried out at the speed of light, but at somewhat reduced accuracy: 91.75 percent for classifying numbers, while accuracy for identifying clothing hovered in the low- to mid-eighties.
While these accuracy levels are lower than those of conventional, computer-based neural networks, there are advantages: besides processing inputs at the speed of light, the network, once trained and printed, can run without any electricity whatsoever.

It’s not yet clear how such all-optical neural networks might be integrated and used on a regular basis, but one can imagine the technology classifying objects moving at very high speeds. It could perhaps be used for facial recognition on a smartphone using visible light, without drawing power from the phone’s battery, or in medical imaging, though that would require redesigning cameras to somehow incorporate such optical neural networks. There’s also the question of how to compensate for errors introduced by the 3D printing process itself. In any case, the team now hopes to scale up its neural network to handle more complex tasks, using more layers and finding ways to improve accuracy.