Artificial intelligence and machine learning are expected to have a profound effect on DevOps, harnessing the equivalent brain power of hundreds or even thousands of humans in a single system across the development and deployment pipeline. But computer and data scientists are only just beginning to take advantage of the power of AI/ML, which remains largely in the experimental phase.
As AI/ML matures and begins to play a role in commercial software development at scale, Kubernetes and microservices will almost certainly form the underlying architecture as machines, for lack of a better phrase, “take over” many roles within DevOps teams.
Monitoring and observability will also play a major role in this brave new AI/ML landscape built on Kubernetes and microservices. That was the main theme of a podcast hosted by Alex Williams, founder and editor-in-chief of The New Stack, with Janakiram MSV, The New Stack correspondent and principal of Janakiram & Associates, as co-host.
Irshad Raihan, director of product marketing at Red Hat, was the guest. He spoke about the role of data and observability in AI/ML, as well as how DevOps is changing for AI/ML, especially with the increasing availability of direct data and data streaming.
Raihan described how AI/ML is evolving not as an abstraction ecosystem sitting on top of Kubernetes, but as something completely embedded and integrated into the Kubernetes layers themselves. This, of course, will have a major impact on monitoring.
“In the future, across the Kubernetes infrastructure, AI intelligence will be so embedded in every piece of the Kubernetes ecosystem, the AI functionality will be indistinguishable from Kubernetes in the logic,” Raihan said. “We will not talk about AI apps as a separate module that sits on top, but it will be an assumed feature of what Kubernetes has to offer.”
Monitoring and observability will thus play a major role as AI/ML, fully embedded in Kubernetes, reaches enterprise-scale maturity in applications and deployments.
“Using all of these individual AI models together into a system that has thousands of moving parts, requires experienced data scientists. Typically, there are hundreds and thousands of smaller logs, from the control plane all the way into the workloads,” Raihan said. “Those that are sitting on the top are actually not just modern workloads but are also traditional workloads as well. [The merging] of these two worlds is huge from a logging perspective.”
At present, much of the AI/ML work is being done at a pure R&D level. The concern of “siloing” remains moot for the moment, ahead of commercial applications and deployments, as the technology remains in the development and, in many cases, pure research phase. But when the time comes to incorporate AI/ML into working commercial production environments, AI/ML will, of course, become integrated with DevOps. Kubernetes, as mentioned above, should also play an integral role.
“You have on one end AI and ML engineers doing really cool stuff as they build modules, algorithms, recommendation engines and things like that, and on the other hand, you have AI/ML technologies themselves sort of infiltrating the very bones of Kubernetes,” Raihan said, noting that with something as new as Kubernetes and containers, developers and engineers tend to object when he uses the term “fad.”
In this Edition:
1:10: Monitoring technologies and tools, and exploring data and observability in this context.
10:53: Why does Kubernetes matter so much?
13:30: Where does that take you in the context of machine intelligence?
17:52: Discussing the trend of AI Ops.
24:39: How code can help solve problems, and why transparency is important.
28:40: How do you see the abstractions sitting on top of Kubernetes, and how do you think that will get us to the point where the AI/ML ecosystem benefits from Kubernetes?
Feature image via Pixabay.