Earlier this month, an Uber self-driving car struck and killed a woman crossing a road in Tempe, Arizona. The crash was reportedly the first pedestrian death caused by an autonomous vehicle, reinforcing doubts in the minds of many about whether self-driving cars are anywhere close to road-worthy.
Still, auto manufacturers and technology companies such as Google and Uber have invested millions in machine-learning-driven autonomy in the hope that it will ultimately prove safer than human drivers. Autonomous navigation is fueled by machine learning, which requires untold hours of driving data. But how can these companies acquire that data without putting more people at risk?
To this end, GPU manufacturer Nvidia has unveiled a cloud-based system to generate large numbers of photorealistic simulations that could be used to bank many hours of driving experience. The company unveiled the technology, called the Drive Constellation Simulation System, at its GPU Technology Conference (GTC), being held this week in San Jose, California.
“We’re able to build virtual worlds in a data center, and drive billions of miles to test autonomous vehicle algorithms,” said Danny Shapiro, Nvidia vice president and general manager of autonomous machines, in a GTC press conference. “Essentially, we are running the complete hardware-software solution that would normally be in the vehicle, but we’ve moved it to the data center.”
The key advantage Constellation promises is scale. Roughly 770 accidents occur for every billion miles humans drive, yet most self-driving research projects don’t come close to logging that many miles. A fleet of 20 test vehicles can cover about a million miles per year, Nvidia CEO Jensen Huang noted during his GTC keynote.
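Those figures make the scale gap concrete. A quick back-of-the-envelope calculation using the numbers above:

```python
# Back-of-the-envelope arithmetic from the figures cited above.
accidents_per_billion_miles = 770
fleet_miles_per_year = 1_000_000  # a 20-vehicle fleet, per Huang's estimate

# Years a 20-vehicle fleet would need to log one billion real-world miles,
# the distance over which those ~770 accidents occur.
years_to_billion = 1_000_000_000 / fleet_miles_per_year
print(years_to_billion)  # 1000.0
```

In other words, a physical test fleet would need on the order of a thousand years to accumulate the mileage over which accident statistics become meaningful, which is the gap simulation aims to close.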
Nvidia’s software could allow researchers to log millions of additional test miles without putting vehicles on public roadways. It could continually test against edge cases, such as driving into a blinding sunset or through snow, that occur only infrequently in the real world. The simulation software can recreate a wide variety of conditions, including rainstorms, snowstorms, and blinding light.
Constellation is based on Nvidia’s platform for powering self-driving vehicles, called Drive. Drive can ingest multiple inputs from vehicle sensors, such as cameras, ultrasonic sensors, and lidar and radar units, to build a representation of the vehicle’s surroundings. Another component ingests this data and, using deep neural networks to detect and classify objects, plots a safe path forward.
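The flow described above, sensor frames in, detections and a driving decision out, can be sketched in a few lines. All names below are illustrative placeholders, not Nvidia APIs:

```python
# Illustrative sketch of a Drive-style perception pipeline:
# fused sensor data -> object detection -> path decision.
# Every name here is hypothetical; Nvidia's actual APIs differ.
from dataclasses import dataclass


@dataclass
class SensorFrame:
    camera: list       # image pixels (placeholder)
    ultrasonic: list   # short-range distance readings
    lidar: list        # point cloud
    radar: list        # range returns, in meters


@dataclass
class Detection:
    label: str         # e.g. "pedestrian", "vehicle"
    distance_m: float


def detect_objects(frame: SensorFrame) -> list:
    """Stand-in for the deep-neural-network detector and classifier."""
    return [Detection("vehicle", d) for d in frame.radar]


def plan_path(detections: list) -> str:
    """Stand-in for the planner that provides a safe path forward."""
    if any(d.distance_m < 10.0 for d in detections):
        return "brake"
    return "proceed"


frame = SensorFrame(camera=[], ultrasonic=[], lidar=[], radar=[8.5, 40.0])
print(plan_path(detect_objects(frame)))  # brake
```

The point of the sketch is the division of labor: one stage fuses raw sensor streams into a world representation, and a separate stage reasons over that representation to choose an action.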
Constellation also includes Nvidia’s Drive Sim software, which generates the sensory data that physical sensors would normally provide. The system forms a closed feedback loop between two components: the Drive platform on one server processes data from Drive Sim running on another server, which simulates the sensor data on a set of Nvidia GPUs. The two servers trade data back and forth roughly 30 times a second.
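A closed loop like this can be sketched as two stand-in components exchanging data at a fixed rate. This is a minimal illustration of the architecture, not Nvidia code; all function names are invented:

```python
# Minimal sketch of a closed simulation loop in the spirit of
# Constellation's two-server design. All names are hypothetical.

def simulate_sensors(state):
    """Stand-in for Drive Sim: render sensor data from the world state."""
    return {"camera": state["position"], "radar": state["position"]}


def compute_controls(sensor_data):
    """Stand-in for the Drive platform: turn sensor data into commands."""
    return {"steering": 0.0, "throttle": 0.1}


def step_world(state, controls, dt):
    """Advance the simulated world by one time step."""
    return {"position": state["position"] + controls["throttle"] * dt}


HZ = 30              # the two sides exchange data ~30 times a second
dt = 1.0 / HZ
state = {"position": 0.0}

for _ in range(HZ):  # one simulated second of driving
    sensors = simulate_sensors(state)      # Sim server -> Drive server
    controls = compute_controls(sensors)   # Drive server -> Sim server
    state = step_world(state, controls, dt)
```

Because the loop is closed, the driving stack’s decisions feed back into the simulated world, so the next batch of sensor data reflects what the vehicle actually did, just as it would on a real road.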
Waymo, the Alphabet subsidiary spun off from the Google self-driving car project, has pursued a similar idea of strengthening algorithms through virtual testing with its Carcraft project. That project has logged 8 million self-driving vehicle miles through virtual environments, drawing on data from the company’s 25,000 vehicles.
Nvidia, whose GPUs are a natural fit for the matrix operations that deep learning requires, has become a big proponent of advancing artificial-intelligence applications. This year’s user conference — which the company anticipates will draw at least 8,000 attendees — will feature over 400 hours of AI session content. The company estimates that an Nvidia GPU running its CUDA library can offer a 20x performance improvement on vector operations compared with a standard CPU.
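The speedup comes from data parallelism: applying one operation to many elements at once rather than looping over them one at a time. Actual CUDA code requires an Nvidia GPU, but the same idea can be illustrated on a CPU with NumPy’s vectorized operations; this is an analogy to the CUDA model, not a benchmark:

```python
# Data-parallel vector addition, the kind of operation GPUs accelerate.
# NumPy on a CPU stands in for the GPU here; in CUDA, each element
# would typically be computed by its own thread.
import numpy as np

n = 1_000_000
a = np.arange(n, dtype=np.float32)
b = 2.0 * a

# Scalar style, one element at a time (what naive CPU code does):
#   for i in range(n): c[i] = a[i] + b[i]
# Data-parallel style, whole arrays at once:
c = a + b

print(c[:3])  # [0. 3. 6.]
```

The vectorized form expresses the computation without an explicit loop, which is what lets hardware (SIMD units on a CPU, thousands of threads on a GPU) execute the element-wise work in parallel.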
Drive Constellation will be fully released in the latter half of this year.