
Omniverse: Nvidia’s Ambitious Platform for the Virtual World

16 Nov 2021 8:00am

Virtual space is the next world to conquer, the way Jensen Huang sees it.

Over the past 15 years, the Nvidia founder and CEO has steered the company from its roots as a graphics chip maker for consumer devices into a significant data center player, first through its CUDA-based GPU-accelerated computing efforts in HPC and the enterprise, and then, building on that work, into a leader in the fast-growing artificial intelligence (AI) space.

The company last year introduced Omniverse, a virtual world platform based on Nvidia’s broad portfolio of GPUs, software, libraries and other enabling technologies. Omniverse initially gives off a vibe similar to that of Second Life, the application developed more than a decade ago in which users could create avatars and inhabit a virtual world. That software generated a lot of buzz at first, but interest quickly tailed off.

However, Huang sees a future where Omniverse is a more vibrant and robust environment than the physical world.

“This new world will be much larger than the physical world,” the CEO said last week during his keynote address at the vendor’s virtual GTC event. “We will buy and own 3D things like we buy 2D songs and books today. We will buy, own and sell homes, furniture, cars, luxury goods and art in this world. Creators will make more things in virtual worlds than they do in the physical world.”

Enterprise Use of Omniverse

Some of the worlds in Omniverse — based on a technology from Pixar called Universal Scene Description and built on Nvidia’s RTX computer graphics technology — will be used for socializing and gaming, but more will be created by scientists, creators and companies, he said.
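Universal Scene Description (USD) is the common format that lets different tools contribute to the same Omniverse scene. USD layers can be written as human-readable `.usda` text files; the following is a minimal, hypothetical example (the prim names are illustrative, not from Nvidia) showing a scene with a single transformed sphere:

```usda
#usda 1.0
(
    defaultPrim = "World"
)

def Xform "World"
{
    def Sphere "Ball"
    {
        double radius = 2.0
        double3 xformOp:translate = (0, 2, 0)
        uniform token[] xformOpOrder = ["xformOp:translate"]
    }
}
```

Because layers like this compose non-destructively, several applications (or users) can edit different aspects of the same scene at once, which is the collaboration model Omniverse builds on.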

Ian Buck, vice president and general manager of Nvidia’s Tesla data center group, told journalists that Nvidia is seeing growing enterprise adoption in two areas, with one being in creating virtual agents for kiosks, call centers and other customer-facing technologies.

However, the other is in the area of digital twins: complex visual representations of physical systems and environments that developers, scientists, researchers and others can use for tasks such as running experiments, testing products and technologies, and proving or disproving theories without having to use physical tools or facilities. Working through a computer model rather than a physical environment that might be too expensive or simply unavailable cuts costs and speeds up production.

Omniverse has been downloaded 70,000 times by designers in 500 companies, Huang said, adding that there also are 14 connectors to the platform that have been built by other companies and developer communities, with another 15 on the way soon.

“Omniverse digital twins are where we will design, train and continuously monitor robotic buildings, factories, warehouses and cars of the future,” he said.

Rob Enderle, principal analyst with The Enderle Group, told The New Stack that seeing organizations leverage digital twins this early into the life of Omniverse makes sense.

“Digital twins are the first iteration because Omniverse is mainly used as a simulation and testing platform for factories and autonomous machines,” Enderle said. “They will serve as the foundational elements of the Metaverse as we expand that simulation to emulate both the natural and imagined worlds.”

Platform Enhancements

At GTC, Nvidia unveiled a number of enhancements and new features to Omniverse, including Showroom — an app of demos and samples showcasing the AI, graphics and other technologies in Omniverse — and Farm, for orchestrating the processing of batch jobs across multiple systems (such as bare-metal or virtualized workstations and servers). Omniverse AR can stream graphics to phones or augmented-reality glasses, and Omniverse VR is a full-frame, interactive, ray-traced virtual reality technology.

Key to the development of digital twins was Nvidia’s launch last week of Modulus, a platform for training neural networks by leveraging the laws of physics as well as other data in multi-GPU and multinode environments.
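The core idea behind physics-informed training is to penalize a model not only for mismatching observed data, but for violating the governing equations themselves. The sketch below is not the Modulus API; it is a minimal, pure-Python illustration of the same principle on a toy problem: approximating the solution of the ODE u'(t) = -u(t) with u(0) = 1 by a cubic polynomial whose coefficients are chosen to minimize the physics residual u'(t) + u(t) at a set of collocation points. Modulus applies the same idea with neural networks and PDEs at far larger scale.

```python
# Physics-informed fitting sketch (illustrative only, NOT the Modulus API).
# Model: u(t) = 1 + a1*t + a2*t^2 + a3*t^3, which satisfies u(0) = 1 exactly.
# We minimize the squared physics residual r(t) = u'(t) + u(t) over
# collocation points in [0, 1]; the exact solution is exp(-t).

def phi(t):
    """Contribution of each coefficient a_k to the residual u' + u."""
    return [1 + t,           # d(u' + u)/da1
            2*t + t*t,       # d(u' + u)/da2
            3*t*t + t**3]    # d(u' + u)/da3

ts = [i / 20 for i in range(21)]   # 21 collocation points in [0, 1]

# Normal equations M a = v for least-squares minimization of the residual.
# The residual's constant term (from u(0)=1) is 1, hence the -1 on the right.
M = [[sum(phi(t)[j] * phi(t)[k] for t in ts) for k in range(3)]
     for j in range(3)]
v = [-sum(phi(t)[j] for t in ts) for j in range(3)]

def det3(m):
    """Determinant of a 3x3 matrix."""
    return (m[0][0] * (m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1] * (m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2] * (m[1][0]*m[2][1] - m[1][1]*m[2][0]))

d = det3(M)
a = []
for k in range(3):                 # Cramer's rule: swap column k for v
    Mk = [row[:] for row in M]
    for j in range(3):
        Mk[j][k] = v[j]
    a.append(det3(Mk) / d)

def u(t):
    """The fitted approximate solution."""
    return 1 + a[0]*t + a[1]*t*t + a[2]*t**3

print(u(1.0))   # close to exp(-1) ~= 0.3679
```

No measured data is used here at all: the model is trained purely against the differential equation, which is what makes the approach attractive for digital twins of systems where sensor data is sparse but the physics is known.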

Digital Twins at Work

Nvidia pointed to a number of companies that are using Omniverse and Modulus to build digital twins. Siemens Energy created a digital twin platform to accurately predict corrosion in heat recovery steam generators, which will help reduce unplanned downtime by 70% and save the industry almost $2 billion a year. Automaker BMW is building digital twins of four factories totaling 10 million square meters to teach robots new skills.

Ericsson is building a digital twin of an entire city to configure, operate and continuously enhance a fleet of 5G antennas and radios. Through the digital twin, the telecommunications company can determine how best to place and configure each site for the best coverage and network performance. The model captures detail as fine as building materials, vegetation and foliage, after which wireless network components are placed in precise locations.

Planning for Earth-2

At the show, Huang introduced an ambitious plan to create a digital twin of the Earth over the next two years. Nvidia will fund the Earth-2 project, leveraging Omniverse and Modulus. The goal is to use the Earth’s digital twin to help scientists and researchers address the negative impacts of climate change.

“All the technologies we’ve invented up to this moment are needed to make Earth-2 possible,” he said.

During a phone meeting with journalists, Huang said he wasn’t ready to talk about the architecture or the location of the system that will be used to create Earth-2. Nvidia is no stranger to supercomputers — it has such systems as Selene and Cambridge-1 — but the CEO said that the technologies used “will allow us to create the most energy-efficient supercomputer ever created. But it will be incredibly powerful, and it will be a supercomputer that’s designed for Omniverse because if you imagine Earth is a physical thing, this will be the engine of alternate worlds.”

Huang also envisions a time when other organizations will use the Nvidia system and Omniverse to create their own digital twins of Earth, adding that it would create a “virtuous cycle” of work to improve the planet.

“If we put [data about the Earth] to good use, then more satellite data could be demanded,” he said. “It would be good for the industry. The virtuous cycle is going to start very soon because of Omniverse and because of the reconstruction of the Earth digital twin. There’ll be thousands of these virtual twins. Hopefully, there’ll be millions of these digital twins and each one of the Earth digital twins will study and monitor one aspect of all of the Earth — one layer, one dynamic of the Earth.”

Enderle said Earth-2 will be a significant endeavor for Nvidia and crucial to addressing climate change. He noted that current climate models are “massively incomplete,” leaving them inadequate, much like a doctor trying to diagnose a patient over the phone and offering remedies based on symptoms that may not be reported accurately.

“Building Earth-2 is an endeavor in line with going to the moon in that it will require massive resources and time to create,” the analyst said, adding that “unlike the moon effort, which was primarily limited to NASA and its suppliers, those working on Earth-2 will be a far larger international group and include ever more capable AIs doing a lot of the repetitive, labor-intensive roles.”

What comes out of the effort “should be a far more accurate model, which not only will be far more accurate predictively but likely highlight a remedial path that is both more effective and far less expensive than otherwise would be the case,” he said. “I doubt it will be possible to address climate change timely without this tool.”

Feature image courtesy of Nvidia.