
‘Photonic Accelerator’ Supercharges Optical Neural Networks

Mar 19th, 2021 10:45am
Feature image courtesy of Swinburne University of Technology

The emerging field of neuromorphic computing aims to develop ultra-energy-efficient computational systems that mimic the neuro-biological architectures found in the natural world. Such advances will be needed in the near future, especially as artificial intelligence (AI) becomes increasingly complex, and therefore computationally more demanding.

One possible solution to this potential electronic bottleneck is using neuromorphic microchips that can quickly process large amounts of data using lasers, rather than electrons. One example can be found in what’s being touted as the world’s fastest and most powerful optical neuromorphic processor to date, part of an optical neural network (ONN) recently developed by an international team of researchers from China and Canada, led by the Swinburne University of Technology in Australia.

Capable of operating at an incredible 10 trillion operations per second (TeraOPs/s), the team’s new “photonic convolutional accelerator” could revolutionize how large-scale, computationally intensive machine learning tasks are handled in real time, as is the case with autonomous cars, scanning applications in a clinical setting, or face recognition tasks.

As the researchers explained in a paper recently published in Nature, at the heart of the team’s microchip is a relatively new kind of component known as an optical micro-comb. Micro-combs work by creating a rainbow of infrared light that permits data to be transmitted with many different frequencies of light — all at the same time. Best of all, chips with integrated micro-combs can be made smaller, lighter, more energy-efficient and cheaper than other conventional optical counterparts.
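As a rough intuition for how a wavelength-parallel scheme like this can compute a convolution optically: each comb wavelength can carry one kernel weight, each wavelength’s copy of the input is delayed by one symbol period, and a photodetector sums the weighted, delayed copies, which is exactly a discrete convolution. The sketch below illustrates only that mathematical equivalence; the weights, signal values and channel count are made up for illustration and are not taken from the team’s paper.

```python
import numpy as np

weights = np.array([0.5, 1.0, -0.5])      # one kernel weight per comb line (illustrative)
signal = np.array([1.0, 2.0, 3.0, 4.0])   # input data stream (illustrative)

# Each "wavelength" contributes a weighted copy of the signal,
# delayed by k symbol periods; the detector sums all copies:
acc = np.zeros(len(signal) + len(weights) - 1)
for k, w in enumerate(weights):
    acc[k:k + len(signal)] += w * signal

# The summed, delayed copies equal a discrete convolution
assert np.allclose(acc, np.convolve(signal, weights))
```

In other words, the optics perform the shift-weight-sum pattern of a convolution in parallel across wavelengths, rather than sequentially in electronics.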

“Our optical neural network represents a major step towards realizing monolithically integrated ONNs and is enabled by our use of an integrated micro-comb chip,” explained the team. “Moreover, our accelerator scheme is stand-alone and universal — fully compatible with either electrical or optical interfaces. Hence, it can serve as a universal ultrahigh bandwidth data compressing front end for any neuromorphic hardware — either optical or electronic-based — bringing massive-data machine learning for both real-time and ultrahigh bandwidth data within reach.”

The power of the team’s integrated micro-comb microchip was put to the test by training a deep learning convolutional neural network (CNN) and gauging its accuracy in recognizing a series of handwritten numbers from 1 to 9. Widely used in AI-based applications, CNNs are designed to function in a way similar to biological visual cortex systems, ‘learning’ by abstracting input data so that they can identify similar specimens later on. While this is easy for humans to master after relatively few examples, it can be a much harder task for a machine, depending on the computational capabilities of the underlying hardware. But armed with the team’s photonic convolutional accelerator, and using a new technique that simultaneously interleaves data in time, wavelength and spatial dimensions through the integrated micro-comb, the CNN was able to perform with an accuracy rate of 88 percent.
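To make the CNN’s core operation concrete, here is a plain NumPy sketch of the 2D convolution that a convolutional layer performs, the same operation the photonic accelerator parallelizes in hardware. The image and kernel values are toy examples, not data from the team’s experiment.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution: slide the kernel over the image and
    take a weighted sum at each position (illustrative sketch only)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy 5x5 "image" (a linear ramp) and a 3x3 Laplacian edge-detection kernel
image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.array([[0, 1, 0],
                   [1, -4, 1],
                   [0, 1, 0]], dtype=float)

result = conv2d(image, kernel)  # a ramp has no edges, so this is all zeros
```

A CNN stacks many such convolutions (with learned kernels) and nonlinearities; the bottleneck the photonic chip attacks is precisely this multiply-accumulate workload.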

As the team points out, the results of their experiments, all achieved on a single microchip, are all the more impressive when compared to far more expensive hardware like Google’s state-of-the-art tensor processing unit (TPU), which is capable of operating at 100 TeraOPs/s but would require thousands of units working in parallel.

“Handwritten digit recognition, although widely employed as a benchmark test in digital hardware, is still beyond the capability of existing analog reconfigurable ONNs,” noted the team. “Digit recognition requires a large number of physical parallel paths for fully-connected networks, which poses a huge challenge for current nanofabrication techniques. Our CNN represents the first reconfigurable and integrable ONN capable not only of performing high-level complex tasks such as full handwritten digit recognition but at ultrahigh TeraFLOP speeds.”

With the creation of this photonic convolutional accelerator, it’s likely that the team’s discoveries will open the door to further development of cutting-edge neuromorphic computational tools and state-of-the-art optical neural networks. The continuing evolution of such elements will potentially help to further advance AI tools, communication technologies and more, especially when it comes to processing massive-data machine learning tasks in real-time.

Read more in the team’s paper.
