
How AI and Full-Stack Observability Can Overcome Today’s Kubernetes Challenges

How AI could help monitor Kubernetes.
Feb 17th, 2020 10:34am by Andreas Grabner

Dynatrace sponsored this post.

Kubernetes brings real, tangible improvements to the end-user experiences that organizations aim to deliver, which in turn translates into stronger business outcomes. But getting the most value out of Kubernetes requires the ability to cut through the escalating complexity of today’s IT environments. AI-powered full-stack observability is a must-have for any modern enterprise IT team that wants to understand how Kubernetes utilizes resources, how deployed containers behave as workloads change and how to optimize configuration and performance accordingly.

Driving Full-Stack Observability Across Kubernetes with AI

Andreas Grabner
Andreas is a DevOps Activist at Dynatrace. He has over 20 years of experience as a software developer, tester and architect, and is an advocate for high-performing cloud operations. As a champion of DevOps initiatives, Andreas is dedicated to helping developers, testers and operations teams become more efficient in their jobs with Dynatrace’s software intelligence platform.

Much has been written about the critical role that AI has to play in enterprise IT. Kubernetes is no exception, particularly as adoption of containers continues to soar. According to a recent survey of CIOs, 68% of organizations are already using containers, and a total of 86% expect to deploy them within the next 12 months.

As Kubernetes and containers become more widespread, IT teams need to arm themselves with the capabilities to ensure these dynamic environments are truly working for them. That means being able to extract precise insights into performance degradations across their technology stack. It also means leveraging those insights to quickly identify and remediate the root causes behind these issues before they can affect end users. The more quickly those issues can be resolved, the better for the user experience and any related business outcomes.

AI can provide these sorely missing capabilities, specifically in the form of an AI engine that ingests Kubernetes events and metrics, such as workload changes, state changes and other critical signals. That engine is key to helping IT better understand the dependencies and relationships that exist across the Kubernetes stack: between clusters, containers, service meshes and the workloads running inside them. Implemented correctly, high-fidelity full-stack observability can help organizations eliminate their complexity problems and use Kubernetes to drive digital transformation and improved user experiences.
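
To make that concrete, here is a minimal sketch of the kind of data such an engine ingests, using the open source Kubernetes Python client to stream cluster events (workload changes, state changes, scheduling failures and so on). It is a generic illustration rather than any vendor’s actual pipeline, and it assumes kubeconfig access to a running cluster.

```python
# Minimal sketch: stream Kubernetes events with the official Python client.
# An AI-backed observability pipeline would forward these events, together
# with metrics and topology data, to its analysis engine instead of printing.
from kubernetes import client, config, watch

config.load_kube_config()  # use config.load_incluster_config() when running inside a pod
core = client.CoreV1Api()

for event in watch.Watch().stream(core.list_event_for_all_namespaces, timeout_seconds=60):
    obj = event["object"]  # a core Event: reason, type, message and the involved object
    print(
        obj.type,    # Normal or Warning
        obj.reason,  # e.g. Scheduled, BackOff, FailedScheduling
        f"{obj.involved_object.kind}/{obj.involved_object.name}",
        obj.message,
    )
```

Fed with this event stream plus metrics, an AI engine can baseline normal behavior and flag the anomalies that are actually worth acting on.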

Observability Starts with Building Monitoring into Kubernetes Environments

When we talk about full-stack observability around Kubernetes, though, we must talk about monitoring — specifically, providing monitoring as a built-in capability for Kubernetes environments.

Consider the diversity of technologies that exist in these environments. Different development teams that deploy containers in their Kubernetes cluster may bring their own tooling into the mix. And the larger that cluster gets, the more you’ll see applications running on a slew of different technologies; think Java, .NET, Node.js, Go and so on. With this diversity of technologies, you also get teams bringing in their own preferred monitoring tools.

AI alone can’t crack the full-stack observability issue, because AI algorithms are only as good as the data they ingest. Consequently, IT teams that each use their own tooling for monitoring will be feeding inconsistent data of varying quality into the AI.

Inconsistent data makes AI less effective; this is the “garbage in, garbage out” conundrum. And it makes it all the more important that a single monitoring tool is built into Kubernetes clusters as a self-service capability, one that both supports the diverse technologies that developers deploy and also provides a baseline level of quality data for the AI to ingest — data that carries the level of detail developers need and the level of system and event monitoring that IT administrators need.
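
In practice, that built-in, self-service monitoring usually takes the form of an agent running on every node, most often deployed as a DaemonSet, so each container is observed consistently whether it runs Java, .NET, Node.js or Go. The sketch below creates such a DaemonSet with the Kubernetes Python client; the namespace, agent name and image are placeholders, not any specific vendor’s deployment.

```python
# Sketch only: deploy a node-level monitoring agent as a DaemonSet so every
# node's containers are covered by one consistent data source.
# "monitoring", "node-agent" and "example.com/agent:1.0" are placeholder values.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

labels = {"app": "node-agent"}
daemonset = client.V1DaemonSet(
    metadata=client.V1ObjectMeta(name="node-agent", namespace="monitoring"),
    spec=client.V1DaemonSetSpec(
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="agent",
                        image="example.com/agent:1.0",
                        resources=client.V1ResourceRequirements(
                            requests={"cpu": "100m", "memory": "128Mi"},
                        ),
                    )
                ],
            ),
        ),
    ),
)
apps.create_namespaced_daemon_set(namespace="monitoring", body=daemonset)
```

Because Kubernetes schedules one DaemonSet pod per node, nodes added later are covered automatically, which is what makes the capability self-service rather than something each team has to wire up on its own.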

A Checklist for Managing Kubernetes Environments

AI-powered observability provides enterprises with a host of new capabilities to better deploy and manage their Kubernetes environments, and right-scale their Kubernetes applications and microservices in production. These include:

  • Accessing logs from the Kubernetes control plane as well as all deployed containers.
  • Analyzing container resource usage and drilling down into container runtimes currently at work.
  • Discovering, instrumenting and mapping container technologies running inside Kubernetes environments.
  • Empowering application owners to more quickly identify and correct performance degradations and scalability bottlenecks.
  • Optimizing resource management and allocation.

Consider this your Kubernetes checklist. If you can’t implement or support these capabilities, then you’re not getting the full value out of what Kubernetes can provide. More than that, you’re likely facing increased complexity and confusion in your environment, which has knock-on effects that impact everything from the user experience to your ability to correct the issues impairing that user experience.

All the items on this checklist, though, can be added to your Kubernetes toolbox by incorporating a deterministic AI model as part of your technology stack. An AI engine capable of ingesting Kubernetes metrics and events can drive new levels of observability, and the actions to take based on those insights, beyond what traditional metrics, dashboards or manual IT support can achieve. Modern enterprise clouds and containerized environments have simply grown beyond what these legacy solutions are capable of delivering.
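
As a rough, generic illustration of the container-log and resource-usage items on the checklist above, the sketch below again uses the Kubernetes Python client. The "demo" namespace is a placeholder, and the usage figures assume metrics-server is installed to serve the metrics.k8s.io API.

```python
# Sketch: pull recent container logs and current resource usage for one namespace.
# Assumes kubeconfig access and a metrics-server serving the metrics.k8s.io API;
# the "demo" namespace is a placeholder.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()
custom = client.CustomObjectsApi()

# Checklist item: access logs from deployed containers.
for pod in core.list_namespaced_pod("demo").items:
    for container in pod.spec.containers:
        logs = core.read_namespaced_pod_log(
            name=pod.metadata.name,
            namespace="demo",
            container=container.name,
            tail_lines=20,
        )
        print(f"--- {pod.metadata.name}/{container.name} ---\n{logs}")

# Checklist item: analyze container resource usage (CPU/memory from metrics-server).
pod_metrics = custom.list_cluster_custom_object("metrics.k8s.io", "v1beta1", "pods")
for item in pod_metrics["items"]:
    for container in item["containers"]:
        usage = container["usage"]
        print(item["metadata"]["namespace"], item["metadata"]["name"],
              container["name"], usage["cpu"], usage["memory"])
```

Control-plane logs, automatic discovery and root-cause analysis go well beyond what a script like this can do, which is exactly where a full observability platform and its AI engine come in.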

AI Lets You Have Your Cake and Eat It Too with Kubernetes

Modern IT organizations often find themselves stuck between a rock and a hard place. On the one hand, Kubernetes offers these organizations a way forward in instrumenting and orchestrating workloads in the more dynamic, cloud-based environments that enterprises operate in today. On the other hand, embracing Kubernetes and this more agile, dynamic way of managing IT workloads also means suddenly introducing new levels of complexity into your environment. That complexity makes it that much harder to understand what’s actually happening in your technology stack, from the interdependencies between applications and microservices to diagnosing and resolving the root causes behind performance degradations for end-user experiences.

AI can empower IT teams to thread the needle between these two extremes, building full-stack observability into their Kubernetes platforms, containers and workloads, and helping organizations derive smarter, more precise answers from their environments at scale. For today’s enterprise IT, digital transformation means multi- and hybrid-cloud architectures; it means becoming more agile; it means replacing old monolithic structures with more dynamic microservices and containerized applications. But it doesn’t have to mean more complexity, too. Combining Kubernetes with AI to turn Kubernetes events and metrics into comprehensive full-stack observability provides the best of both worlds.

Feature image via Pixabay.
