VMware Extends Tanzu Support for Nvidia AI Enterprise
VMware is providing more support for DevOps teams with artificial intelligence (AI) and machine learning (ML) ambitions: vSphere with Tanzu has been upgraded for developers of AI applications, the company announced jointly with its AI and GPU partner Nvidia today on the first day of VMworld.
After previously announcing the availability of Nvidia AI Enterprise and vSphere with Tanzu for container access for AI developers, “now we’re announcing the ability to have the Kubernetes orchestrator — that’s an important distinction,” said Lee Caswell, VMware’s vice president for product and technical marketing, during a press conference ahead of the event.
“The ability to have Tanzu Kubernetes support for container-based developers is an important next step in how we’re simplifying the deployment of AI,” Caswell said.
Support to Help Fill a Skills Gap
AI support for Kubernetes is increasingly deemed essential for DevOps teams that are required to deploy or manage AI and ML applications but lack the resources to do so. In a recent poll by Enterprise Management Associates (EMA) of 275 developers and DevOps engineers, for example, 49% of developers reported lacking the skill set to effectively use machine learning models.
“While Kubernetes provides the crucial scale-out capabilities needed to include AI-driven predictions into the development and DevOps lifecycle, software developers and DevOps engineers are struggling to benefit from this big advantage of container-based development,” Torsten Volk, an analyst at EMA, told The New Stack.
“This new level of integration between Nvidia AI Enterprise and VMware vSphere with Tanzu provides a set of APIs to simplify both the development and operations of AI-driven application capabilities on Kubernetes. This could be a big step toward mainstreaming the use of AI across the entire enterprise.”
Automating Containerized Workload Delivery
The improved integration of Tanzu and Nvidia AI Enterprise is important for developers and DevOps teams that need a simpler way to add AI capabilities to their development and release pipelines, VMware said.
In March, VMware and Nvidia announced what VMware then touted as an “end-to-end” AI-ready enterprise platform that it said was easier for operations teams to deploy and operate. This joint platform included the Nvidia AI Enterprise software suite that was certified, optimized and supported for VMware vSphere.
On Tuesday, VMware is also announcing that Tanzu Kubernetes Grid Service, included in VMware vSphere with Tanzu, is integrated with Nvidia AI Enterprise, enabling customers to automate the delivery of containerized workloads and proactively manage apps in production.
“Nvidia AI Enterprise including Tanzu Kubernetes Grid for simplified MLOps is a significant announcement as DevOps and operations teams alike have been waiting for an enterprise-grade platform to develop, run and operate their AI projects on,” Volk said. “This could be that platform.”
Seeking One Unified Application Platform
An important distinction to make is the difference between using vSphere or Tanzu separately with Nvidia AI Enterprise. As indicated previously, the ability to use Nvidia AI Enterprise with Tanzu reflects VMware’s overall Kubernetes initiative.
“What you get with Tanzu is the ability to have Kubernetes orchestration, and for those watching that development space, Kubernetes has quickly evolved into the de facto standard for how to orchestrate the use of containers,” Caswell said.
“Now, containers are actually pretty interesting — containers are spun up more often, they’re deleted more frequently and so they’re actually quite volatile. So, the ability to now go and manage containers and orchestrate them with Kubernetes [with Tanzu] gives you an added level of security and management that we didn’t have in the past.”
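The GPU-backed, container-based workloads described above can be sketched concretely. Below is a minimal illustration, as a plain Python dict, of the kind of Kubernetes Deployment manifest a Tanzu-managed cluster would orchestrate: it requests the `nvidia.com/gpu` extended resource exposed by the NVIDIA device plugin, so the scheduler places replicas only on GPU-capable nodes and recreates them as pods are deleted — the volatility Caswell describes. The function name, image, and labels are hypothetical, not part of either product.

```python
def gpu_deployment(name: str, image: str, replicas: int = 1, gpus: int = 1) -> dict:
    """Build a Kubernetes Deployment manifest requesting `gpus` GPUs per replica.

    Illustrative sketch only; the image and labels are placeholders.
    """
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            # Kubernetes keeps this many replicas running, restarting
            # containers as they die -- the orchestration layer Tanzu adds.
            "replicas": replicas,
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "containers": [{
                        "name": name,
                        "image": image,
                        # The GPU request steers scheduling onto GPU nodes.
                        "resources": {"limits": {"nvidia.com/gpu": gpus}},
                    }]
                },
            },
        },
    }

# Hypothetical usage: two replicas of a GPU inference container.
manifest = gpu_deployment("ai-inference", "example.io/inference:1.0", replicas=2)
```

Serialized to YAML and applied with `kubectl apply -f`, a manifest like this is what the Kubernetes control plane reconciles continuously, rather than an operator managing each container by hand.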
The increased Kubernetes support is crucial in VMware’s quest toward providing one unified application platform that can stretch across the data center and public clouds, Volk said.
“The ‘code once, deploy anywhere’ paradigm becomes even more critical when it comes to the development and lifecycle management of AI models,” Volk said. “Offering one single set of infrastructure-independent APIs constitutes a critical differentiator for VMware against popular, but proprietary, public cloud offerings such as AWS SageMaker and Azure ML.”