Nvidia, VMware Push AI Out to the Enterprise
Nvidia added another piece to its widening artificial intelligence computing portfolio Aug. 24 with the general availability release of AI Enterprise, a suite of software tools and frameworks designed to bring AI and machine learning capabilities within reach of a wider range of enterprises.
The company, which for several years has made AI a central part of its growth strategy and advanced that effort through a deep and multilayered partnership with VMware announced in 2020, first unveiled AI Enterprise in March, making it available to select customers via an early access program.
With the release of AI Enterprise 1.0, Nvidia is putting AI training and inferencing within reach of enterprises that want to use the technologies in their data centers but, amid a widening skills gap, have had to contend with the high cost and complexity of assembling the many specialized tools and components themselves.
“AI is difficult to implement, and the reason is because, on the one hand, it’s an end-to-end problem, from the acquisition of data and training to produce models and then deploy the models in production,” Manuvir Das, head of enterprise computing at Nvidia, said during a press briefing about AI Enterprise 1.0. “But it’s also a top-to-bottom full-stack effort, because you have to think about the hardware, the software, the frameworks to make it easy to deploy AI as well as an ecosystem of ISVs [independent software vendors] who can take AI and put it in the hands of the customer.”
Putting the Pieces in Place
Nvidia over the past several years has been putting the various pieces of its AI portfolio into place, including its own GPU compute accelerators and, more recently, its BlueField data processing unit (DPU), an accelerator that provides Ethernet and InfiniBand connectivity for traditional applications and modern workloads, offloading much of the data processing chores from the CPU. The company’s hardware lineup also includes the DGX, HGX and EGX systems for running AI and high-performance computing (HPC) workloads.
Nvidia also has a range of vertical-specific AI frameworks for such industries as health care, smart cities, videoconferencing, cybersecurity and conversational AI, and a software tier that includes Base Command, a platform for AI development, and Fleet Command, for deploying, managing and scaling AI workloads. AI Enterprise now has a place in that software layer.
The tools and frameworks within AI Enterprise work in systems running VMware’s vSphere cloud computing virtualization platform that the company recently integrated with Tanzu, its suite of products and solutions that enable organizations to build and run Kubernetes-managed container and virtual machine (VM) workloads. All of this runs atop Nvidia-certified servers from the likes of Dell Technologies, Hewlett Packard Enterprise, Lenovo, Inspur and Supermicro.
“Enterprise customers run their own business applications today, but when the enterprise runs the line-of-business applications, whether it’s SAP … or CRM, VMware is there,” Das said. “It is the de facto operating system of the enterprise data center, and this is why we got into a partnership with VMware over a year ago about bringing AI to … mainstream servers with VMware vSphere so that every enterprise customer could take their own internal private cloud, where they have deployed line-of-business applications, and accelerate that environment and add AI functionality to all of those things.”
An AI Partnership
The partnership between Nvidia and VMware came at a time when both were responding to the rapid changes occurring in the IT world. Nvidia about a decade ago broached the idea of using GPUs, then commonplace in video games, as accelerators in data centers, working alongside CPUs from Intel and AMD to improve the performance and efficiency of servers. The company fueled the growth of the current accelerated computing push in both HPC environments and mainstream enterprises. Nvidia got on the AI track a few years ago and has been expanding its capabilities since.
VMware, which had built its name by changing the dynamics within the data center with its server virtualization and virtual machine technologies, in recent years has been adapting to the rise of the cloud and edge computing by morphing into a significant hybrid cloud player, from expanding vSphere and partnering with top cloud providers like Amazon Web Services (AWS), Microsoft Azure and Google Cloud to extending its reach beyond VMs and into Kubernetes, containers and microservices with Tanzu.
The partnership builds on the ambitions of both companies, giving them greater reach into an enterprise space eager to leverage AI and machine learning by using familiar tools, software and platforms, and running them on mainstream, and certified, hardware.
“What this means for an enterprise customer [that] for the first time is doing AI on-premises, is having an expectation that if they pick a particular version of VMware and a particular version of Nvidia Enterprise AI, then they have the backing from both Nvidia and VMware that these systems are tested together and certified to run in production in any environment that the customer chooses to run them,” Das said.
Nvidia’s 2.5 Million Developers
Nvidia brings a lot to the table after years of building out its AI capabilities, he said. That includes 2.5 million developers using its technology; more than 24 million downloads of CUDA, Nvidia’s platform for parallel computing and general-purpose programming on GPUs; 2,000 GPU-accelerated applications; and 8,000 AI startups leveraging the portfolio. These numbers reinforce Nvidia executives’ belief that the company is uniquely positioned in the AI space “because of the hardware we’ve created, the software that we’ve developed and designed, the ecosystem that we built with millions of developers, and our own in-house experience doing AI, from robotics, self-driving cars, etc., for many, many years,” Das said.
Its aggressive push into AI is not a shot in the dark. Worldwide revenues for AI are expected to hit $341.8 billion this year, a 15.2 percent year-over-year jump, according to IDC, and to grow another 18.8 percent in 2022. The market also remains on track to blow past the $500 billion mark in 2024, the analysts said in a report this month.
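As a rough sanity check of those IDC figures, the projections hang together if the 18.8 percent growth rate quoted for 2022 is sustained; the assumption of a constant rate is ours, since the report as cited gives only the numbers above.

```python
# Project IDC's worldwide AI revenue figure forward, assuming (our
# assumption, not IDC's stated methodology) that the 18.8 percent
# growth rate cited for 2022 holds through 2024.
revenue = 341.8  # 2021 worldwide AI revenue, in billions of dollars
growth = 1.188   # 18.8 percent annual growth

for year in (2022, 2023, 2024):
    revenue *= growth
    print(f"{year}: ${revenue:,.1f}B")
```

At that sustained rate, revenue reaches roughly $406 billion in 2022 and $482 billion in 2023, crossing $500 billion only in 2024, which is consistent with the forecast in the article.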
“Disruption is unsettling, but it can also serve as a catalyst for innovation and transformation,” Ritu Jyoti, group vice president for AI and automation research at IDC, said in a statement. “2020 was the year that accelerated digital transformation and strengthened the value of enterprise AI. We have now entered the domain of AI-augmented work and decision across all the functional areas of a business.”
Nvidia further expanded its AI capabilities by announcing a partnership with Domino Data Lab, whose Domino Enterprise MLOps platform is being validated with Nvidia AI Enterprise. Domino Data Lab’s platform is built to support the entire life cycle of AI, from the acquisition of data to the production of the end product. It also addresses a key enterprise concern about consistency in the process.
“Domino Data Lab has an entrenched system that solves that problem of reproducibility of the same training against the same data set,” Das said. “It has various capabilities of governance so that you can understand where this came from, what large hardware that it’s used in, a variety of challenging situations.”
To use AI Enterprise, organizations must be running vSphere 7 Update 2; Nvidia’s suite is deployed as software on top of that platform. In addition, Nvidia licenses the software per CPU rather than per GPU, matching VMware’s licensing model, Das said.