Nvidia Debuts AI-Ready Turing GPUs with Real-Time Ray Tracing

GPU maker Nvidia has introduced a new chip architecture designed specifically for artificial intelligence deep learning work. The GPUs, built on an architecture dubbed Turing, will also be able to perform real-time ray tracing, a long-sought capability for the $250 billion visual effects industry.
“This fundamentally changes how computer graphics will be done,” said Nvidia CEO Jensen Huang during a Monday keynote at the SIGGRAPH professional graphics conference in Vancouver.
Huang touted the new eighth-generation GPU architecture as the company’s most significant advance since CUDA, which was introduced in 2006. The company previewed a line of new GPU cards built on this design, the Quadro RTX 5000, 6000, and 8000, which will be available by the end of the year. It also revealed a reference architecture for the visual effects industry, the Quadro RTX Server, and released an open source Material Definition Language (MDL) software development kit for mapping physically based materials into rendering applications.
The Turing architecture is the result of 10,000 engineer-years of development, according to the company. It features a set of “RT cores” designed specifically to accelerate ray tracing, the ability to simulate the trajectories that light and sound waves take in relation to the viewer’s actual perspective.
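To make that workload concrete, here is a minimal software sketch of the per-ray intersection math that RT cores accelerate in silicon. It is only an illustration of the technique: the camera setup, the single-sphere scene, and the kernel names are hypothetical, not anything Nvidia has published, and real renderers cull most intersection tests with acceleration structures such as BVHs.

```cuda
// Plain software sketch of ray tracing's inner loop: cast one ray per
// pixel from the viewer's eye and test it against scene geometry.
// This is NOT Nvidia's RT core implementation, just the kind of work
// those cores are built to accelerate.
struct Vec3 { float x, y, z; };

__device__ Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
__device__ float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Distance along the ray to the nearest hit on a sphere, or -1.0f on a
// miss. One such test runs per ray, per object.
__device__ float hitSphere(Vec3 origin, Vec3 dir, Vec3 center, float radius) {
    Vec3 oc = sub(origin, center);
    float b = dot(oc, dir);
    float c = dot(oc, oc) - radius * radius;
    float disc = b * b - c;
    return disc < 0.0f ? -1.0f : -b - sqrtf(disc);
}

// One thread per pixel: cast a primary ray through the image plane.
__global__ void tracePixels(float *out, int width, int height) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    // Camera at the origin looking down -z; one sphere placed in front.
    Vec3 origin = {0.0f, 0.0f, 0.0f};
    Vec3 dir = {(x - width / 2.0f) / width, (y - height / 2.0f) / height, -1.0f};
    float invLen = rsqrtf(dot(dir, dir));
    dir = {dir.x * invLen, dir.y * invLen, dir.z * invLen};

    float t = hitSphere(origin, dir, {0.0f, 0.0f, -3.0f}, 1.0f);
    out[y * width + x] = t > 0.0f ? 1.0f : 0.0f;  // hit = white, miss = black
}
```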
The company estimates that the new architecture initially accelerates real-time ray-tracing operations by 25x compared to the previous generation of GPUs, and by 30x over the speed of CPUs. To demonstrate, Huang showed how a Star Wars-themed ray-tracing demo, Reflections, can now run on a single Turing GPU; an earlier showing required a $70,000 DGX Station equipped with four Volta GPUs.
Huang boasted that four 8-GPU RTX Servers should be able to do the rendering work of 240 dual-core servers while consuming one-eleventh the power. That could reduce the time it takes to build out an animated shot from five or six hours to one.
“It’s going to completely change how people do film,” he said.
Turing also includes a set of “Tensor Cores” for AI inference work. A single GPU can provide up to 500 trillion tensor operations a second. Tensors, which are fundamental to deep learning work, are data structures of related numbers that can be calculated against, much like matrices and vectors.
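The tensor operations behind that figure are fused matrix multiply-accumulates. As a rough sketch of how programmers reach Tensor Cores, the kernel below uses CUDA’s warp-level WMMA API (introduced in CUDA 9 for Volta-class GPUs); the 16x16x16 half-precision tile is one supported shape, and nothing here is drawn from the keynote itself.

```cuda
// Sketch of driving Tensor Cores through CUDA's warp-level
// matrix-multiply-accumulate (WMMA) API: each warp computes a 16x16
// tile of D = A * B + C as one hardware-accelerated tensor operation.
// Compile for a Tensor Core-capable target, e.g. nvcc -arch=sm_70.
#include <cuda_fp16.h>
#include <mma.h>
using namespace nvcuda;

__global__ void tensorTileMma(const half *a, const half *b, float *c) {
    // Fragments are per-warp register tiles of the operands.
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> aFrag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> bFrag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> accFrag;

    wmma::fill_fragment(accFrag, 0.0f);       // accumulator C starts at zero
    wmma::load_matrix_sync(aFrag, a, 16);     // load 16x16 half-precision tiles
    wmma::load_matrix_sync(bFrag, b, 16);
    wmma::mma_sync(accFrag, aFrag, bFrag, accFrag);  // the tensor op: A*B + C
    wmma::store_matrix_sync(c, accFrag, 16, wmma::mem_row_major);
}
```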
During the keynote, Huang explained how this muscle could be used to power deep learning operations for the visual effects industry: “At some point you can use AI or some heuristics to figure out what are the missing dots and how should we fill it all in, and it allows us to complete the frame a lot faster than we otherwise could.”
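As a loose illustration of the idea in that quote, a renderer might trace only a sparse subset of pixels and reconstruct the rest. The sketch below substitutes a naive neighborhood average for the learned model; in a real pipeline, the “missing dots” would be filled in by a trained AI denoiser rather than this hand-rolled heuristic, and every name here is illustrative.

```cuda
// Crude stand-in for the AI fill-in step Huang describes: only a sparse
// subset of pixels was actually ray traced (flagged in `sampled`), and the
// rest are estimated from traced neighbors. A production pipeline would use
// a trained denoising network here instead of a 3x3 average.
__global__ void fillMissingPixels(const float *sparse, const int *sampled,
                                  float *out, int width, int height) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    int idx = y * width + x;
    if (sampled[idx]) {            // this pixel was actually ray traced
        out[idx] = sparse[idx];
        return;
    }

    // Average whatever traced samples exist in the 3x3 neighborhood.
    float sum = 0.0f;
    int count = 0;
    for (int dy = -1; dy <= 1; ++dy) {
        for (int dx = -1; dx <= 1; ++dx) {
            int nx = x + dx, ny = y + dy;
            if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
            int n = ny * width + nx;
            if (sampled[n]) { sum += sparse[n]; ++count; }
        }
    }
    out[idx] = count > 0 ? sum / count : 0.0f;
}
```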
Nvidia has found support for Turing from system makers such as Dell EMC, HP Inc., Hewlett Packard Enterprise, Lenovo, Fujitsu, and Supermicro. Two dozen independent software vendors (ISVs) in the visual effects industry have also pledged support.