On this episode of The New Stack Analysts podcast, TNS founder Alex Williams is joined by Janakiram MSV, a principal analyst with Janakiram & Associates and a regular contributor to The New Stack, and Steve Burton, vice president of marketing at Harness.io. They discuss not only the effects containers and Kubernetes have had on realizing our DevOps dreams, but also how machine learning is taking DevOps to the next level with the evolution of AIOps.
“In the last five years, DevOps has actually matured. So, we started with VMs, and DevOps was all about provisioning and configuration management, and then eventually CI/CD came in and Jenkins became the front and center of build management and release management. But that entire game was taken to the next level when containers became mainstream,” said Janakiram. “We have evolved. Basically, the current phase is driven predominantly by container orchestration managers like Kubernetes that make it extremely easy to spin up a staging environment or a test environment. And then we have Docker images as the unit of deployment. That fundamentally changes the game.”
While containers may make everything easier to automate and faster to deploy, thereby helping to shorten the DevOps cycle, Burton looks to machine learning for the real efficiency gains.
“If you’re doing continuous delivery, you want your pipeline to run in minutes. So, how do you do that? Machine learning is a way you can condense your deployment pipeline. You can automate it pretty much end to end,” said Burton. “From my perspective, it’s about condensing the delivery pipeline and using machine learning to automate more of the manual tasks that developers and DevOps and SREs typically do themselves.”
This is where AIOps comes into play. AIOps is the application of machine learning to the traditional DevOps pipeline, automating even more of the process with machine learning techniques. For example, in addition to automating application deployment, AIOps can analyze a release before it hits production and help determine whether the build will succeed, or whether a particular microservice will increase latency or behave as expected.
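To make that idea concrete, here is a minimal sketch of the kind of automated release verification described above: comparing a canary deployment's latency metrics against a baseline and flagging a statistical anomaly before the release is promoted. This is a hypothetical illustration using only the Python standard library, not Harness's actual implementation, and the `verify_canary` function and its threshold are assumptions for the example.

```python
import statistics

def verify_canary(baseline_latencies, canary_latencies, threshold=3.0):
    """Flag the canary as anomalous if its mean latency deviates from
    the baseline mean by more than `threshold` standard deviations."""
    base_mean = statistics.mean(baseline_latencies)
    base_stdev = statistics.stdev(baseline_latencies)
    canary_mean = statistics.mean(canary_latencies)
    # Z-score of the canary's mean latency relative to the baseline.
    z = abs(canary_mean - base_mean) / base_stdev if base_stdev else float("inf")
    return ("fail", z) if z > threshold else ("pass", z)

# Response times (ms) from the currently deployed version...
baseline = [102, 98, 105, 99, 101, 97, 103, 100]
# ...and from two candidate releases under canary traffic.
healthy = [104, 99, 101, 102]
regressed = [180, 175, 190, 185]

print(verify_canary(baseline, healthy)[0])    # pass
print(verify_canary(baseline, regressed)[0])  # fail
```

A real AIOps system would apply far richer models to billions of metrics and log events, but the shape of the decision is the same: learn what "normal" looks like, then gate the pipeline automatically when a release deviates from it.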
Burton sees machine learning as key to not simply increasing velocity, but rather providing the necessary insight that might be otherwise impossible to deliver in time.
“DevOps is not about how many releases you can do a week. I think there’s a bit of a misinterpretation there,” said Burton. “Log files are not going to tell you the revenue impact of the increase in performance, because the time it takes for developers to get that information is hours. What they need to know in minutes is ‘What was the performance of my app?’ The instant feedback loop is something machine learning can enable. We’re talking billions of metrics, billions of events. Even the smartest DevOps engineers on the planet don’t have enough time to pull it all together. Machine learning can give you those insights very quickly.”
In this Edition:
1:48: Janakiram’s perspective on DevOps, CI/CD, and containers.
7:24: Exploring the concept of AIOps.
14:11: How are you offering this service through Harness?
21:31: Are we really that far from making ML and software delivery work together?
23:23: How long does it take for everything to catch up?
30:34: The key difference between software delivery pipelines and ML model management pipelines.
Feature image via Pixabay.
Harness is a sponsor of The New Stack.
The New Stack is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Docker.