What VMware’s AI Vision Means for Your Job
Amid the substantial noise and hype surrounding AI, developers and operations folks are understandably wondering how it will affect their current and future job roles. AI-driven disruptions are gradually becoming more apparent.
Following last week’s VMware Explore 2023, the company communicated clear possibilities for developers, IT teams and organizations to leverage and depend on generative AI while developing applications, especially for multicloud environments. This was demonstrated through a series of AI-related announcements, including the formation of a “Private AI Foundation” with AI GPU giant Nvidia, which illustrated how the concept can be implemented.
Additionally, the elephant in the room was how Broadcom will fit into this equation following its planned acquisition of VMware. That question was never addressed directly, although there are many indications that VMware will remain autonomous as Broadcom expands its reach through the VMware brand, as Broadcom has done after past acquisitions.
In a recorded keynote, Broadcom President and CEO Hock Tan did note Broadcom’s plans to invest an additional $2 billion annually in R&D and that VMware and now Broadcom would jointly continue on a multicloud path. “Broadcom and VMware… will advance the vision of running workloads in a multicloud environment,” Tan said.
It’s @nvidia‘s CEO Jensen Huang and @VMware CEO Raghu Raghuram discussing live at the #VMwareExplore Keynote. Private AI Foundation will integrate VMware’s Private AI architecture, built on VMware Cloud Foundation, with NVIDIA AI Enterprise accelerated computing. @VMwareExplore pic.twitter.com/C4I4wrQw4M
— BC Gain (@bcamerongain) August 22, 2023
There was no shortage of generative AI announcements that VMware made during VMware Explore 2023. A significant milestone was VMware’s introduction of VMware Private AI Foundation with Nvidia to, among other things, allow organizations to apply Nvidia GPU-assisted AI applications across VMware’s multicloud infrastructure. During a keynote, Jensen Huang, founder and CEO of Nvidia, described how VMware has worked for many years “on this vision that we’re talking about today.”
“A quarter of a century ago, VMware reinvented enterprise computing and defines it to this day. VMware is the operating system of how the world’s companies are run. Today we’re reinventing enterprise computing after a quarter of a century. In order to transition to the future, to accelerated computing and to generative AI,” Huang said. “Now, our teams consisting of hundreds of engineers worked on this for several years. This is groundbreaking computer science: Instead of virtualizing applications to run on CPUs, we virtualized the GPU.”
It is now possible for VMware to offer “bare-metal performance, runtimes and security across multiple GPUs and multiple nodes,” Huang said. “For the very first time, enterprises around the world will be able to do ‘private AI’ at scale, deployed into your company, and to know it is fully secure and multicloud.”
Amanda Blevins, @VMware vp and CTO, Americas, says Intelligent Assist offers a direct-to-user benefit when using VMware’s enterprise AI. #developer #GenAI @VMwareExplore #vmwareexplore2023 https://t.co/UcnhAdcxej pic.twitter.com/Td1fqDXl5F
— BC Gain (@bcamerongain) August 24, 2023
Other AI-related announcements included:
- VMware Private AI Reference Architecture for Open Source to help customers achieve their desired AI outcomes by supporting open source software (OSS).
- A new VMware AI Ready program, which will connect ISVs with tools and resources needed to validate and certify their products on VMware Private AI Reference Architecture.
- Intelligent Assist, a family of generative AI-based solutions trained on VMware’s proprietary data for building, managing and securing multicloud infrastructure.
Other VMware Explore 2023 announcements included:
- VMware NSX+: a cloud-managed service offering of NSX for multicloud environments that offers networking and security capabilities for VMware Cloud.
- NSX+ virtual private clouds (VPCs): provides isolation of networking, security and services to multiple tenants on a shared VMware Cloud infrastructure managed by a single NSX interface.
What Does This All Mean?
At VMware Explore 2023, VMware introduced a context in which developers and operations teams can readily embrace the advantages of generative AI in ways that could potentially reshape their work significantly.
VMware’s vision provides insight into what developers might experience in the near and possibly long term. This is not to suggest that it is the exclusive or definitive solution, but VMware has laid the groundwork for a compelling case. Many developers, some of whom may question or not fully grasp the implications of generative AI, can take solace in a simple definition: generative AI analyzes information fed into a machine learning model and generates new data from those inputs.
Generative AI streamlines many of the mundane tasks and supplementary activities of app development, leading to increased motivation and productivity, Torsten Volk, an analyst for Enterprise Management Associates (EMA), told The New Stack during a post-conference interview.
Tasks such as configuring development, staging and production environments, creating and managing documentation, maintaining current test cases, overseeing version control, mastering diverse cloud platforms and end-user devices, optimizing code for performance, resolving scalability issues, handling CI/CD pipelines, offering customer support and adapting the app for different regions are just a few of the administrative responsibilities that generative AI can greatly reduce, he said.
“While this prospect is already intriguing, I envision generative AI as a programming companion that developers can brainstorm ideas with, warn them about potentially limiting choices at an early stage, automatically notify them when they are ‘reinventing the wheel,’ and provide access to existing code components,” Volk said. “Furthermore, its capacity to ingest and process vast amounts of information on an ongoing basis positions it to function as a developer’s productivity ally.”
@VMware‘s @cswolfChris on using #ML for the #developers: “We didn’t just try to throw a large language model at a finite problem.” The model was tuned specifically for software development. “That’s what was the whole premise of the model itself.” @VMwareExplore pic.twitter.com/AtT8gHjIVj
— BC Gain (@bcamerongain) August 23, 2023
For the AI pilot projects VMware put into place, best practices are already beginning to emerge. Plugging code and inputs into ChatGPT-4 was a no-go from the outset. “The challenge with AI is that if you look at the solution in isolation, you don’t have the breadth to ask what the other implications are. The reason we were saying no to ChatGPT-4 integrations was because they could in turn violate the privacy and compliance mandates that our customers have. Therefore, we cannot do these things in products in the way that some folks would desire,” Chris Wolf, vice president of VMware AI Labs, said during the VMware Explore panel “Responsible AI: What Role Should Humans Play?” “Since then, we’ve formed a company council to form some guidelines.”
Rather than employing VMware’s entire source code to train the AI model for a pilot project, Wolf said top-performing VMware software engineers were identified and their code used as the foundation for the model’s dataset. This “meticulous” strategy allowed the team to construct a high-quality code model, he said.
“Another significant aspect here is that we didn’t simply apply a large language model to a specific problem — we intentionally sought out a model tailored specifically for software development,” Wolf said. “This approach formed the core premise of the model itself, contributing to the delivery of highly accurate results.”
“The outcome was remarkable: The automation seamlessly integrated our commenting style and displayed an impressive contextual awareness that significantly bolstered our software development efforts. Consequently, around 80 software engineers… resulting in a remarkable acceptance rate of over 90% among the engineers, akin to the significant portion of water that remains in the pool. This achievement left a lasting impression on me,” Wolf said.
Analyst Torsten Volk of EMA offered a depiction of how both developers and operations teams can use VMware’s Tanzu platform. Following VMware Explore, Volk detailed in a blog post how it is currently possible to harness AI and the compute capabilities furnished by Tanzu for multicloud searches using GraphQL. This approach allows queries across multiple clouds through a single API, rather than navigating each cloud or on-premises data center database one by one.
As Volk described, GraphQL makes it feasible to leverage the AI functionality that is now available. Users can simply write a query seeking the best time- and cost-wise conditions to execute GPU-intensive video editing tasks, drawing on data from a pool of allocated resources throughout the entire multicloud environment. Questions then arise about what code should be employed to initiate these commands using Terraform or other infrastructure-as-code practices.
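A minimal sketch of the idea Volk describes: one query spans the whole multicloud estate instead of hitting each provider’s API in turn. The GraphQL schema, field names and sample data below are hypothetical illustrations, not VMware’s or Tanzu’s actual API.

```python
# Hypothetical GraphQL-style query that a single multicloud API could answer,
# asking for GPU capacity, off-peak windows and hourly cost across all clouds.
QUERY = """
{
  gpuCapacity(minGpus: 4) {
    cloud
    region
    offPeakWindow
    hourlyCostUsd
  }
}
"""

# Mock response, as if one endpoint aggregated every cloud and data center.
mock_response = [
    {"cloud": "aws", "region": "us-east-1", "offPeakWindow": "02:00-06:00", "hourlyCostUsd": 3.20},
    {"cloud": "azure", "region": "westus2", "offPeakWindow": "01:00-05:00", "hourlyCostUsd": 2.75},
    {"cloud": "on-prem", "region": "dc1", "offPeakWindow": "22:00-04:00", "hourlyCostUsd": 1.90},
]

def cheapest_gpu_window(offers):
    """Pick the lowest-cost slot for a GPU-intensive job across all clouds."""
    return min(offers, key=lambda o: o["hourlyCostUsd"])

best = cheapest_gpu_window(mock_response)
print(best["cloud"], best["offPeakWindow"])  # on-prem 22:00-04:00
```

In this sketch the cost comparison is a simple `min()`; the point is only that a single query result, rather than per-cloud API calls, feeds the decision.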
This is where AI takes over and provides an output, determining the best time and code structure to perform the task. Some might mistakenly label this as low code or no code utilization for executing operational tasks within a Tanzu Kubernetes environment.
It’s important to note that users must still possess enough knowledge and skill to verify not only the AI-generated code itself but also, for instance, use an observability platform to verify the results. This is essential to ensure that, as humans, we keep the AI accountable and honest.
When utilizing our generative AI-based programming companions, developers must remain vigilant to thoroughly comprehend how the AI-generated code functions, Volk said. This is particularly crucial in the context of today’s data-driven applications; even a minor error in a query parameter could result in a substantially different overall outcome, he said.
“Such inaccuracies might prompt your organization to make decisions based on flawed results. This becomes particularly problematic when a central microservice, responsible for supplying critical input data to numerous dependents, is compromised. Consequently, even a minor issue can trigger exponential repercussions,” Volk said. “In summary, developer expertise is not becoming obsolete; rather, it is growing in significance. Highly skilled developers are increasingly becoming the most valuable players within organizations.”
The Cantellus Group CEO Karen Silverman: “How will AI help in your daily life, whether that’s your personal or your corporate daily life, and what are the repetitive properties you’d like to do away with faster and better?” @VMware @VMwareExplore: pic.twitter.com/VJZf55LdE8
— BC Gain (@bcamerongain) August 23, 2023
As Karen Silverman, CEO and founder of advisory firm The Cantellus Group, described during the VMware Explore panel “Responsible AI: What Role Should Humans Play?”, part of the role and definition of AI covers how AI models mostly produce outputs, and those outputs are predictions. What we do with those outputs is what can be called outcomes.
“Humans need to be thinking very much about whether my outputs are different than outcomes here. If I’m recommending what movie to watch or what shoes to buy or something like that, I probably don’t need a lot of distance between the outputs and the outcomes. But if I’m diagnosing a medical condition for a particular patient or underwriting loans for a particular person, then maybe I do need more geometry between my outputs and my outcomes, and it’s a very human function to decide that in a form that understands the impact of it,” Silverman said. “So I would put humans at the beginning and the end, and I would argue all along the way; our role is going to be very integrated, and to help.”