Take DevOps Automation to Hyperspeed with Hypermodal AI

Generative AI powered by large language models (LLMs) is in a significant hype cycle — and for good reason. However, this has led to talk of providers simply “AI-washing” their solutions to boost sales, so it’s important to distinguish between hype and true value. While many organizations are just starting to explore the range of potential LLM use cases, DevOps teams are already discovering some highly effective ways of harnessing it to drive value in software delivery.
One of the most exciting possibilities of generative AI is its ability to automate online research, allowing developers to find code snippets or guidance on how to remediate a problem. LLMs can either retrieve that information from the public historical data used to train the model, as in the case of ChatGPT, or reach out to websites and summarize the information to provide a more up-to-date answer, as in the approach used by Bing Chat.
It’s All in the Prompt
If they want to scale up their use of LLMs in software development, DevOps teams need to prompt for answers by providing specific context about their environment. Without this rigor in prompting the AI, its outputs will be vague and generic, resulting in trivial and unhelpful suggestions like “if your CPU is high, buy faster hardware.”
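As a minimal sketch of what this rigor looks like in practice, the snippet below prepends structured environment facts to a question before it reaches the model. The service names, metric values and the `build_prompt` helper are all hypothetical illustrations, not part of any particular product or API.

```python
# Sketch: enriching an LLM prompt with specific environment context
# so the answer can be concrete rather than "buy faster hardware".
# All names and values below are hypothetical.

def build_prompt(question: str, context: dict) -> str:
    """Prepend structured environment facts to the user's question."""
    context_lines = "\n".join(f"- {key}: {value}" for key, value in context.items())
    return (
        "You are assisting a DevOps engineer. Use ONLY the facts below.\n"
        f"Environment facts:\n{context_lines}\n\n"
        f"Question: {question}"
    )

prompt = build_prompt(
    "Why is checkout-service CPU high, and what should we change?",
    {
        "service": "checkout-service (Kubernetes, 3 replicas)",
        "cpu_p95": "92% over the last 30 minutes",
        "recent_change": "deployment v2.4.1 rolled out 40 minutes ago",
        "upstream_dependency": "payments-api latency up 3x in the same window",
    },
)
print(prompt)
```

The same question without the facts block would force the model to fall back on generic advice; with it, the response can reference the recent deployment and the upstream latency spike directly.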
As LLMs are probabilistic in nature, they can’t provide analytical precision and context about the state of systems and the root cause of problems. DevOps teams therefore need other complementary technologies — such as causal and predictive AI — which enable LLMs to include precise answers in a response. Only then can teams drive DevOps automation with confidence, based on responses that contain real-time insights about live systems and accurate forecasting of future scenarios.
Potential Rewards of Generative AI
Generative AI has the potential to become invaluable in increasing DevOps teams’ productivity. It can accelerate many data access, configuration, workflow definition and automation code development tasks in response to prompts that provide enough context. For example, LLMs can generate code snippets based on training data drawn from sites such as GitHub, or scour the web for solutions to common problems that have been answered on developer community portals such as Stack Overflow. Many DevOps teams are already familiar with GitHub Copilot, which generates code as developers create prompts in the form of comments.
These applications of generative AI allow DevOps teams to focus on strategic and high-level tasks such as improving their software architecture and planning new features, rather than reinventing common tasks. If they create a prompt that contains enough detail and context about the real-time state of their IT environment, alongside the relationships and dependencies between its constituent parts, generative AI also enables DevOps teams to remediate problems more quickly when they are discovered.
Clearing the Hurdles of Using Generative AI
There are new challenges that come with integrating LLMs into an organization’s software development toolchain, however. The first is the difficulty of achieving meaningful responses, which requires teams to create prompts that contain detailed context and precision. Once this technical hurdle has been overcome, there is the issue of understanding the implications of intellectual property and licensing restrictions such as GPL (General Public License) code, as LLMs may have been trained on data from open source libraries. This creates a risk that teams could accidentally repurpose proprietary code in ways that contravene these restrictions.
DevOps teams might also prompt LLMs with non-public data, which could inadvertently expose proprietary IP or violate privacy and security regulations, such as GDPR or the EU Artificial Intelligence Act. It’s therefore essential that they use an LLM that has been purposely built to comply with security and privacy standards.
There’s also the well-known risk of an LLM hallucinating, in which it produces statements that are inaccurate, inconsistent or even fictional. This is because LLMs cannot distinguish fact from fiction. That challenge becomes especially pronounced when users create a prompt that is vague or falls outside the data the LLM has been trained on. The AI will then generate a response that is coherent but detached from reality. In a development context, an LLM could end up inventing syntax that doesn’t follow the rules of a programming language, resulting in broken code.
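One narrow class of this failure, invented syntax, can be caught automatically before generated code ever reaches a pipeline. The sketch below uses Python’s standard-library `ast` module as a first gate; note that a snippet can parse cleanly and still be logically wrong, so this complements rather than replaces human review and tests.

```python
# Sketch: catching one class of hallucination (invalid syntax) in
# LLM-generated Python. ast.parse checks syntax only; passing this
# gate does not mean the code is correct.
import ast

def is_valid_python(snippet: str) -> bool:
    """Return True if the snippet parses as Python source."""
    try:
        ast.parse(snippet)
        return True
    except SyntaxError:
        return False

good = "def add(a, b):\n    return a + b\n"
bad = "def add(a, b) ->\n    return a + b +\n"  # hallucinated, broken syntax

print(is_valid_python(good))  # True
print(is_valid_python(bad))   # False
```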
Hypermodal AI Holds the Key to Success
These shortcomings mean humans will always need to verify insights from generative AI, cross-referencing the LLM’s output against source materials to separate fact from fabrication. Given the sheer volumes of data and tasks an organization may put before an LLM, manual intervention for every workflow is unrealistic, and it would make it impossible to realize the true potential of generative AI for DevOps automation. However, by combining generative AI with fact-based causal and predictive AI to create a hypermodal AI, DevOps teams can greatly expedite the resolution of these issues.
Causal AI observes the relationships between components in a system and explains their dependencies and the reasons for their behavior. Predictive AI can enhance this further by analyzing patterns in historical data, including workload trends, seasonal user behavior, system capacity and application health, to pre-empt future problems and suggest ways to prevent them.
DevOps teams can then combine this insight with their prompt to get recommendations on how to remediate issues and even generate a new workflow that serves as a template for automated remediation. In large enterprise IT environments, the context from all this data must be extracted from millions of heterogeneous data points per second. These data modalities include dependencies, user sessions, metrics, traces, logs, code insights, deployment information and many others. Crafting context from all this data is impossible to scale manually, so the process of feeding this into prompts must be automated.
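To illustrate what automating that context-feeding step might look like, the sketch below flattens findings from several data modalities into a compact context block a remediation prompt can carry. The modality names, findings and the `summarize_for_prompt` helper are hypothetical; in a real deployment these inputs would come from a causal and predictive AI engine, not hand-written dictionaries.

```python
# Sketch: automatically assembling prompt context from heterogeneous
# observability modalities (dependencies, metrics, deployments, forecasts).
# All data below is a hypothetical illustration.

def summarize_for_prompt(modalities: dict) -> str:
    """Flatten per-modality findings into a compact text block
    that can be injected into a remediation prompt."""
    sections = []
    for modality, findings in modalities.items():
        bullet_lines = "\n".join(f"  - {finding}" for finding in findings)
        sections.append(f"{modality}:\n{bullet_lines}")
    return "\n".join(sections)

context = summarize_for_prompt({
    "dependencies": ["checkout-service -> payments-api (degraded)"],
    "metrics": ["payments-api p99 latency 2.8s (baseline 0.4s)"],
    "deployments": ["payments-api v5.2 deployed 12 min before the alert"],
    "prediction": ["capacity exhaustion forecast in ~2h at current load"],
})
print(context)
```

The point of the sketch is the pipeline shape: causal and predictive findings are serialized once, automatically, instead of an engineer assembling them by hand for every incident.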
The marriage of LLMs and causal AI can free up DevOps teams to focus on higher-level challenges, such as creating new features, while dramatically reducing development and testing time. Although organizations are already seeing the potential of LLMs to supercharge software innovation on their own, the precision of a hypermodal AI, built on causal, predictive and generative AI, could be the real game changer.