Red Hat Ansible Lightspeed Uses AI to Automate Infrastructure Management
AI and machine learning are poised to alleviate many of the mundane, tedious, time-consuming and resource-draining aspects of software provisioning and management, among other tasks.
This is good news for organizations facing staffing and skills shortages. Through 2024, shortcomings in critical skills development and training efforts by IT industry leaders will prevent 65% of businesses from achieving full value from cloud, data and automation investments, according to IDC.
If Red Hat Ansible Lightspeed does what Red Hat claims it does — and there is no reason to doubt that it does — the general availability of its enterprise version could significantly reduce the burden of numerous tasks associated with provisioning and managing software with Ansible as infrastructure as code.
IT automation will play a pivotal role here, thanks to Large Language Models (LLMs) and other resources provided by IBM’s watsonx, Red Hat’s implementation and Ansible’s long-standing role as a leading infrastructure as code enabler. This general availability follows IBM’s announced purchase of Red Hat in 2018; Red Hat bought Ansible in 2015.
Notably, Red Hat does not promote the new offering as an Infrastructure as Code (IaC) product per se in its marketing texts, referring to it instead as an “IT automation tool.” Beyond automating certain processes without human input, it can also initiate and orchestrate actions derived from an Ansible playbook, referred to as a “runbook.”
As described in Red Hat’s documentation, Ansible Lightspeed with watsonx Code Assistant serves as an AI experience for Ansible content creation. The system pipes prompts into automation-specific IBM watsonx foundation models to translate natural-language text into Ansible content snippets. The generated content adheres to accepted Ansible best practices, and when combined with the Ansible code bot feature, teams can build more confidence in their automation code base, Red Hat says.
The service consists of three components:
- The developer interface: This interface is built natively into VS Code via the Ansible extension. It allows Ansible content creators to use natural-language prompts in Ansible Playbooks or task files to generate Ansible Lightspeed single-task and multitask suggestions.
- The integrated service: This acts as the glue or broker between the developer interface and the watsonx.ai service. It brings the power of AI to Ansible Automation Platform and enhances the responses from the AI with its post-processing capabilities.
- The AI: IBM watsonx Code Assistant provides access to an Ansible-specific watsonx.ai foundation model that generates Ansible content recommendations. This is the “AI guts” of the solution.
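To picture how the developer interface behaves in practice, here is a hypothetical playbook sketch: the author types a task name as a natural-language prompt, and Lightspeed suggests the task body beneath it. The module choices and parameters shown are illustrative, not actual Lightspeed output.

```yaml
---
# Hypothetical playbook; the task *name* acts as the prompt.
- name: Deploy a web server
  hosts: all
  become: true
  tasks:
    # Prompt typed by the author:
    - name: Install and start nginx
      # Body suggested by Lightspeed (illustrative):
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Ensure nginx is running and enabled
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

The author accepts, edits or rejects each suggestion in the editor; nothing is applied to infrastructure until the playbook is actually run.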
Red Hat Ansible Lightspeed, as mentioned above, is the culmination of development with the IBM groups building automation-specific watsonx models out of IBM’s famous Watson project, and it leverages watsonx for training. “We really leverage the watsonx training and model-serving components, but from our perspective on the Red Hat Ansible side, we’re the ones collecting the components, making sure it’s trained correctly,” Matthew Jones, chief architect of Ansible Automation for Red Hat, told The New Stack during an interview.
Unlike other general-purpose AI systems, Red Hat Ansible Lightspeed’s development was intended to produce “something very targeted and practical,” Jones said. “It might sound like marketing, but it’s precisely what we’re doing. We’re primarily concerned with producing hands-on content,” Jones said. “You’re not going to write a book report or something similar using our product. We are the definitive experts in Ansible content and Ansible authoring tools, and we’re well-equipped to curate this content.”
The goal is to “make experts out of anyone who wants to write code, as they leverage our expertise,” Jones said. “While you can certainly obtain Ansible content from Copilot or similar products, we understand the best practices and features that make for good Ansible code, and we are capable of producing it,” Jones said.
The results from the Red Hat Ansible Lightspeed language model are extensively referenced. When an inference is generated, such as “You asked me to manage this Azure resource group; here’s the inference and the underlying code,” the source repositories from which the recommendation was drawn are communicated.
This allows the user to review the recommendation’s origin on GitHub and understand how the Playbook information was compiled. With it, the user can verify the recommendation’s context, such as recommended Azure resource group names and permissions, by examining the provenance, Jones said.
“In the Lightspeed module, we not only offer inferences and ask you not to trust us blindly but also provide documentation details. We’ll guide you to the relevant documentation section, highlighting the required fields and further details,” Jones said. “This approach empowers you to cross-check and understand the context thoroughly.”
The IaC Component
Infrastructure as Code (IaC) represents a “crucial” component in the way Playbooks allow users to “create automation,” Jones said.
A Playbook is always tailored to a specific objective. For instance, it may be designed to deploy an application on a virtual machine or deploy pods on Kubernetes. Infrastructure as code, on the other hand, emphasizes the composability of components. A customer may have 5,000 applications to deploy and manage, all of which need to be deployed on a specific database, such as SQL Server, Postgres or MySQL, as mandated by the organization.
It is possible to instruct the Playbook to deploy to the different databases as indicated, while the automation to deploy the database may have already been created. For instance, an admin named “Fred” might have already used Lightspeed to create a role that installs and configures PostgreSQL, which Lightspeed then takes into account, Jones said.
“In the scenario of a large enterprise with 2,500 automation developers, all working with Ansible, they don’t need to rewrite tasks to install and configure the database. These tasks have already been addressed. Fred’s role, for example, takes care of PostgreSQL installation and configuration,” Jones said. “Therefore, when a developer is working on a playbook and needs to interact with the database, the language model should be capable of suggesting, ‘here’s the role you should import to handle this.’”
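The role-reuse scenario Jones describes could look something like the following hypothetical playbook. The role name `fred.postgresql` and its variable are invented for illustration; the point is that the suggested content imports an existing role rather than regenerating database tasks.

```yaml
---
# Hypothetical playbook reusing an existing role instead of
# rewriting database installation tasks from scratch.
- name: Deploy the application with its database
  hosts: app_servers
  become: true
  roles:
    # Prompt: "Install and configure PostgreSQL" -- the model can
    # suggest importing the role Fred already wrote (illustrative):
    - role: fred.postgresql
      vars:
        postgresql_version: "15"
  tasks:
    - name: Deploy the application package
      ansible.builtin.package:
        name: myapp
        state: present
```

This composability is what makes the IaC angle work at scale: thousands of playbooks can depend on one vetted role rather than thousands of slightly different copies of the same tasks.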
What About The Human?
With LLM solutions and comprehensive IT automation, the perennial question always arises: to what extent can you trust the machine and when and how should humans assume control? This consideration is particularly relevant to Red Hat Ansible Lightspeed. How can humans be certain that every option and component has been selected correctly and that all the required configurations, especially in terms of security and policy, have been properly set up when provisioning and managing infrastructure, especially at scale?
The response, as Jones stated, is that “you should never blindly trust what comes out of language models or AI suggestions.”
“When we designed the system, we didn’t merely plug the user into the language model. That’s not sufficient. We’ve spent the last two or three years building a developer tools system, in which Lightspeed is a key component,” Jones continued. “It includes tools within an Ansible development environment, with VS Code as the primary starting point for development. However, you’re not restricted to using VS Code; you can choose your preferred environment. This toolset encompasses more than just Ansible; it also includes our testing infrastructure using Molecule and Ansible Test. We provide you with this infrastructure, which allows you to test and validate that the output of the language model aligns with your expertise and intended results.”
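Since Jones points to Molecule as the testing infrastructure for validating generated content, a minimal Molecule scenario file gives a sense of what that validation loop looks like. This is a generic sketch, not Red Hat's shipped configuration; the container image and driver choice are assumptions.

```yaml
---
# Hypothetical molecule.yml scenario for a role that contains
# Lightspeed-generated tasks; image and driver are illustrative.
dependency:
  name: galaxy        # pull any role/collection dependencies
driver:
  name: podman        # run the test instance in a container
platforms:
  - name: instance
    image: registry.access.redhat.com/ubi9/ubi-init
provisioner:
  name: ansible       # converge the role against the instance
verifier:
  name: ansible       # assert on the resulting state
```

Running `molecule test` in the role directory would then create the instance, apply the role, run the verifier's assertions and tear everything down, giving the human the evidence Jones describes before any generated content reaches production.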