How AI Observability Cuts Down Kubernetes Complexity
The Kubernetes era has made scaled-out applications across multiple cloud environments a reality. But it has also introduced a tremendous amount of complexity into IT departments.
My guest on this episode of The New Stack Makers podcast is Andreas Grabner from software intelligence platform Dynatrace, who recently noted that “in the enterprise Kubernetes environments I’ve seen, there are billions of interdependencies to account for.” Yes, billions.
Grabner, who describes himself as a “DevOps Activist,” argues that AI technology can tame this otherwise overwhelming Kubernetes complexity. As he put it in a contributed post, “AI-powered observability provides enterprises with a host of new capabilities to better deploy and manage their Kubernetes environments.”
During the podcast, we dig into how AI — and automation in general — is impacting observability in Kubernetes environments. To kick the show off, I asked Grabner to clarify what he means by “AI observability.”
“We call it a deterministic AI,” he replied, “and what that really means is, at the core, it’s about capturing a lot of data [from] a lot of different data silos, and then you need to figure out how can I put [that] data on dashboards and make sense out of it. What we mean by ‘AI observability,’ or maybe let’s better call it ‘deterministic AI observability,’ is how we can connect the data with contextual information.”
Grabner pointed out that data in a Kubernetes environment can come from a lot of places — the host, pods, containers, applications and other services. The challenge is to identify how all of this data works together.
So how does Grabner’s definition of AI observability compare to the standard definition of observability — the three pillars of metrics, logs and distributed traces?
After first noting that Dynatrace covers the three pillars as well, Grabner explained that AI observability adds a contextual layer on top of them.
“What’s missing and what we are adding is context information about the full stack, meaning which service runs in which pod [and] on which particular host. So we not only have what I call the horizontal dependency, with distributed tracing, we also have the vertical dependency.”
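The distinction Grabner draws can be sketched as a simple topology model. This is an illustrative example only, not Dynatrace's actual data model: the entity names and the `Topology` class are hypothetical, assuming vertical edges mean "runs on" (service → pod → node) and horizontal edges mean "calls" (service → service, as captured by distributed tracing).

```python
from collections import defaultdict

class Topology:
    """Hypothetical sketch of a topology map combining both dependency kinds."""

    def __init__(self):
        self.runs_on = {}              # vertical: entity -> the entity it runs on
        self.calls = defaultdict(set)  # horizontal: caller service -> callee services

    def add_vertical(self, child, parent):
        self.runs_on[child] = parent

    def add_call(self, caller, callee):
        self.calls[caller].add(callee)

    def stack(self, entity):
        """Walk the vertical chain, e.g. service -> pod -> node."""
        chain = [entity]
        while chain[-1] in self.runs_on:
            chain.append(self.runs_on[chain[-1]])
        return chain

# Example entities (made up for illustration):
topo = Topology()
topo.add_vertical("checkout-svc", "pod/checkout-7d4")
topo.add_vertical("pod/checkout-7d4", "node/worker-2")
topo.add_call("frontend-svc", "checkout-svc")

print(topo.stack("checkout-svc"))
# ['checkout-svc', 'pod/checkout-7d4', 'node/worker-2']
```

With both edge types in one structure, a slow trace span on `frontend-svc` can be followed horizontally to `checkout-svc` and then vertically down to the pod and host it runs on, which is the contextual linkage Grabner describes.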
Also in the podcast, we discuss AI observability use cases for operators and developers, what value telemetry data brings to operations teams managing Kubernetes, and the cultural changes in development teams during the Kubernetes era (a topic Grabner is particularly passionate about, as a DevOps Activist).