
Nvidia: AI Can Boost Simulation in VR, Quantum Compute

At its GTC conference, Nvidia showed continued progress on its goal of infusing AI into computing with new offerings that will enable AI in everything from robots and avatars to virtual worlds, supply chains and the emerging field of quantum computing.
Nov 9th, 2021 10:51am
Feature image: Nvidia CEO Jensen Huang, in a promo for GTC.

It was a decade ago that Nvidia CEO Jensen Huang said the company was going all-in on artificial intelligence (AI), arguing that the technology would form the underpinnings of computing and business in the future.

Nvidia officials since then have steadily put in place the pieces to create a platform that enables AI in a broad range of uses and to make it widely accessible to developers and enterprises. At the GTC conference this week, the company is continuing this steady drumbeat, showing how its offerings will enable AI in everything from robots and avatars to virtual worlds, supply chains and the emerging field of quantum computing.

“AI is a new element to computational science,” Ian Buck, vice president and general manager of Nvidia’s Tesla data center business, said during a briefing with journalists. “We can apply AI techniques and, in some cases, coupling or replacing aspects of simulation to achieve even higher levels of performance and solve problems that were not really obtainable before because of the computational complexity at the scale of what they’re trying to solve.”

Buck said AI and AI-enabled simulation are “delivering a millionfold improvement in 10 years and we’re very excited about the research community taking advantage of not just accelerated computing — of the scale of what’s possible with scale-up and scale-out data center computing — but also AI to deliver a millionfold of improvements in simulation performance.”

Large Language Models and Physics

In the area of AI software, Nvidia at the show unveiled ReOpt, an offering designed to help organizations more efficiently move products from factories to homes and businesses. ReOpt combines Nvidia’s RAPIDS suite of software libraries for data science and analytics with local search heuristics and metaheuristics such as Tabu search.
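
For readers unfamiliar with the technique, Tabu search keeps a short-term memory of recent moves so the search does not immediately undo them. The sketch below is a minimal, plain-Python illustration on a toy five-stop routing problem; the distance matrix and parameters are invented for the example, and this is not the ReOpt API.

```python
# Minimal Tabu-search sketch for a toy delivery-route problem.
# Illustrative only: plain Python, not Nvidia's ReOpt API; the distance
# matrix and parameters are made up for the example.
import itertools
import random

DIST = [  # symmetric travel costs between 5 stops
    [0, 4, 8, 9, 5],
    [4, 0, 6, 7, 3],
    [8, 6, 0, 2, 6],
    [9, 7, 2, 0, 4],
    [5, 3, 6, 4, 0],
]

def tour_cost(tour):
    """Total cost of visiting the stops in order and returning to the start."""
    return sum(DIST[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

def tabu_search(iters=200, tenure=5):
    current = list(range(len(DIST)))
    random.shuffle(current)
    best, best_cost = current[:], tour_cost(current)
    tabu = {}  # move -> iteration until which it stays forbidden

    for it in range(iters):
        # Neighborhood: every 2-swap of the current tour.
        candidates = []
        for i, j in itertools.combinations(range(len(current)), 2):
            neighbor = current[:]
            neighbor[i], neighbor[j] = neighbor[j], neighbor[i]
            candidates.append(((i, j), neighbor, tour_cost(neighbor)))

        # Take the best non-tabu move; aspiration lets a tabu move through
        # if it beats the best tour seen so far.
        candidates.sort(key=lambda c: c[2])
        for move, neighbor, cost in candidates:
            if tabu.get(move, -1) < it or cost < best_cost:
                current = neighbor
                tabu[move] = it + tenure  # forbid reversing this move for a while
                if cost < best_cost:
                    best, best_cost = neighbor[:], cost
                break

    return best, best_cost

print(tabu_search())
```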

Nvidia also rolled out Modulus, a framework for creating machine learning models for physics using digital twins — digital reproductions of systems — and a zero-trust cybersecurity platform for data centers that brings together Nvidia’s BlueField data processing units (DPUs), DOCA software development kit (SDK) and Morpheus, an AI-based cybersecurity framework. The platform leverages GPU-accelerated computing and deep learning to detect threats and isolate applications from the infrastructure.
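
The physics-informed idea behind Modulus can be sketched in a few lines of plain PyTorch. This is not the Modulus API; the equation, network and hyperparameters below are illustrative assumptions. The network is penalized both for violating a governing equation at sampled points and for missing the boundary condition.

```python
# Minimal physics-informed training loop in plain PyTorch (not the Modulus API).
# The network u(x) is trained to satisfy the toy ODE du/dx = -u with u(0) = 1.
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    x = torch.rand(128, 1, requires_grad=True)      # collocation points in [0, 1]
    u = net(x)
    du_dx, = torch.autograd.grad(u.sum(), x, create_graph=True)

    residual = du_dx + u                             # du/dx + u should equal 0
    physics_loss = (residual ** 2).mean()
    boundary_loss = (net(torch.zeros(1, 1)) - 1.0).pow(2).mean()  # u(0) = 1

    loss = physics_loss + boundary_loss
    opt.zero_grad()
    loss.backward()
    opt.step()

# The trained net should approximate u(x) = exp(-x) on [0, 1].
print(net(torch.tensor([[0.5]])).item())             # roughly 0.6065
```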

In addition, the company is giving enterprises the tools to develop large language models (LLMs) that can be used to create AI applications such as chatbots and personal assistants that understand the nuances of language, enabling them to perform tasks like translating languages, summarizing documents and writing computer programs.

Nvidia’s NeMo Megatron framework is designed to train language models with trillions of parameters. The Megatron 530B is a massive language model, with 530 billion parameters, that Nvidia officials first outlined last month. The framework also leverages Nvidia’s Triton Inference Server, which now comes with multi-GPU, multinode distributed machine learning inference capabilities and works with Nvidia’s DGX AI systems.


Enhanced Triton Inference Server

The new iteration of Nvidia’s Triton Inference Server — version 2.15 — comes with support for the Arm architecture, adding to the x86 chips and Nvidia GPUs it already supported. It also integrates with Amazon Web Services’ (AWS) SageMaker machine learning platform and includes a new model analyzer, the new Forest Inference Library and support for multi-GPU and multinode inference workloads.

“As these models are growing exponentially, particularly in new use cases, they’re often getting too big for you to run on a single CPU or even a single server,” Buck said. “Yet the demands, the opportunities for these large models want to be delivered in real-time. … The new version of Triton actually supports distributed inference. We take the model and we split it across multiple GPUs and multiple servers to deliver that to optimize the computing to deliver the fastest possible performance of these incredibly large models.”
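
To give a sense of how applications consume models hosted this way, the sketch below sends a request to a Triton server using the tritonclient Python package’s HTTP API. The server address, model name, tensor names and shapes are hypothetical placeholders, not details Nvidia disclosed.

```python
# Querying a model hosted by Triton Inference Server over HTTP.
# The address, model name and tensor names below are hypothetical placeholders;
# install the client with `pip install tritonclient[http]`.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Build the request: one batch of 128 token IDs for a hypothetical text model.
tokens = np.zeros((1, 128), dtype=np.int32)
inp = httpclient.InferInput("input_ids", list(tokens.shape), "INT32")
inp.set_data_from_numpy(tokens)

result = client.infer(model_name="my_language_model", inputs=[inp])
logits = result.as_numpy("logits")   # output tensor name defined in the model config
print(logits.shape)
```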

NeMo Megatron, Modulus and Morpheus are among 65 updated and new SDKs — which include libraries, code samples and guides — that developers and data scientists use to address a broad array of AI-fueled computing efforts.

Image: Nvidia’s Triton Inference Server.

A Focus on Quantum

Among the new SDKs is cuQuantum, which is aimed at enabling accelerated quantum simulations and is one of a number of moves Nvidia announced to leverage AI in the burgeoning quantum computing space. The work of applying the physics of the quantum space to computer algorithms is a complex problem, Nvidia’s Buck said.

Researchers are “basically trying to map traditional computer science to the wave equations in the quantum space, such that the right answer reinforces those waves and the wrong answer cancels them out,” he said. “We are seeing a doubling of the qubits [quantum bits] every year from research groups around the world [that are] building these physical systems. We project that in order to build a usable quantum computer that can solve real problems, we’re going to need on the order of a million to 10 million qubits to build a fault-tolerant quantum computer.”

That will come in the next decade, Buck said, adding that “in the meantime, Nvidia has a role to play in helping the world’s researchers figure out how to use these new quantum computers and come up with new kinds of algorithms that can map wave equations to computer science.”

Image: Nvidia’s DGX Quantum Computing appliance.

Libraries Go into Beta

The first library in cuQuantum is cuStateVec, an accelerator for the state vector simulation method that can scale to tens of qubits; it is in public beta now. The next library, cuTensorNet, accelerates the tensor network method and can handle up to thousands of qubits on near-term algorithms. It will go into beta next month.
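
The reason the two methods have such different ceilings is memory: a state vector over n qubits stores 2^n complex amplitudes, so each added qubit doubles the footprint. The NumPy sketch below (not the cuStateVec API; the register size and gate are chosen only for illustration) makes the scaling concrete.

```python
# Why state-vector simulation tops out at tens of qubits: n qubits need 2**n
# complex amplitudes. This NumPy sketch (not the cuStateVec API) applies a
# Hadamard gate to one qubit of a small register.
import numpy as np

n = 20                                   # 20 qubits -> 2**20 amplitudes (~16 MB at complex128)
state = np.zeros(2 ** n, dtype=np.complex128)
state[0] = 1.0                           # start in |00...0>

H = np.array([[1, 1], [1, -1]], dtype=np.complex128) / np.sqrt(2)

def apply_single_qubit_gate(state, gate, target, n):
    """Apply a 2x2 gate to the `target` qubit of an n-qubit state vector."""
    # Reshape so the target qubit gets its own axis, contract, then restore shape.
    psi = state.reshape([2] * n)
    psi = np.moveaxis(psi, target, 0)
    psi = np.tensordot(gate, psi, axes=([1], [0]))
    psi = np.moveaxis(psi, 0, target)
    return psi.reshape(-1)

state = apply_single_qubit_gate(state, H, target=0, n=n)
# Each added qubit doubles the memory; around 50 qubits already needs petabytes,
# which is why tensor-network methods (cuTensorNet) are used for larger circuits.
print(np.flatnonzero(state))             # nonzero amplitudes after the Hadamard
```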

Nvidia also unveiled the DGX Quantum Computing appliance, which is a DGX A100 system preloaded with cuQuantum and other frameworks to help organizations determine how to apply quantum computing to their work.

Running cuQuantum on an Nvidia DGX SuperPod — a data center solution for AI and high-performance computing (HPC) workloads — the company created a massive simulation of a quantum algorithm for solving the MaxCut problem, an optimization challenge that no current computer can solve efficiently. MaxCut algorithms are used in designing large computer networks, and in the quantum space the problem could be used to demonstrate the advantages of a quantum computer.
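
MaxCut itself is easy to state: split a graph’s vertices into two sets so that as many edges as possible cross the split. The brute-force sketch below, on a made-up six-vertex graph (illustrative only, not Nvidia’s benchmark), shows the problem and why exhaustive search stops being an option as graphs grow.

```python
# MaxCut by brute force on a toy graph: 2-color the vertices so the number of
# edges crossing the split is maximized. Exhaustive search works for 6 vertices
# but grows as 2**n, which is why heuristic and quantum approaches are studied.
import itertools

edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 4), (3, 4), (3, 5), (4, 5)]
n = 6

best_cut, best_partition = -1, None
for bits in itertools.product([0, 1], repeat=n):          # every 2-coloring
    cut = sum(1 for u, v in edges if bits[u] != bits[v])  # edges crossing the split
    if cut > best_cut:
        best_cut, best_partition = cut, bits

print(best_cut, best_partition)
```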

Using the cuTensorNet library on Nvidia’s Selene in-house supercomputer, the company leveraged GPUs to simulate 1,688 qubits — eight times more than the previous largest quantum simulation — to solve the MaxCut problem on a graph with 3,375 vertices. The result was also accurate, hitting 96% of the best-known answers, according to the company.
