How to Gain Visibility into Kubernetes Cost Allocation

Why murky Kubernetes is dangerous, the best practices for cost control and why you need to control Kubernetes costs before they control you.
Oct 20th, 2022 9:30am

In the pursuit of faster innovation, organizations are turning to Kubernetes. The exciting possibilities of open source motivate companies to build in-house Kubernetes environments to make the most of containerization and microservices architectures. However, they are increasingly finding themselves lost in Kubernetes operational details.

Stitching together open source tools is essential to unlocking Kubernetes benefits such as greater flexibility and scalability, but it’s also still very challenging, and there are many tools to consider. Take cost visibility, for example. Without the right tools for visibility into your Kubernetes clusters, you won’t have the foundation needed for deeper conversations about more advanced K8s issues. Moreover, a lack of observability often leads directly to excessive cloud spend and surprise bills.

In this article, we’ll review some of the key cost-overrun issues companies often encounter as they build their Kubernetes environments. We’ll talk about why murky Kubernetes is dangerous, the best practices for cost control and why you need to control Kubernetes costs before they control you.

Dangers of Zero Kubernetes Visibility

Many teams don’t have consolidated visibility into their Kubernetes clusters. Their clusters are flying through a storm without instruments, leaving the pilots in a weaker position to control costs. Visibility across the fleet provides the means to make informed decisions; with limited information, cost management fails. Cost-overrun risks become much more significant without the ability to make baseline comparisons.

What Causes Murky Kubernetes

How do these dangers happen? We’ve identified three contributing factors that block Kubernetes visibility in organizations today:

  1. Ad hoc management: Embarking on the Kubernetes journey as a new experiment, some companies begin without setting firm boundaries for their explorations. The Kubernetes world is big, exciting and powerful, so some organizations start treating Kubernetes as a project with uncertain parameters. In the end, unrestricted Kubernetes turns into unrestricted spending. Unfortunately, cost overruns are a likely outcome of any undefined experiment.
  2. Poor processes: Often the culprit is a poor process (or no process at all) for managing cloud costs. A poor cost-management process may beat having no cost limits whatsoever, but it still lacks the detail needed to manage resource usage appropriately.
  3. Gaps and silos: Skills gaps and siloed communications cut companies off from productive conversations about cost management and efficient resource allocation. Today, talent limitations make it difficult to build and maintain Kubernetes environments.

Once a cloud initiative goes off the rails, it’s hard to recover, but there are best practices for allocating resources that can help organizations stay on track.

Best Practices for Kubernetes Cost Allocation

One of the best defenses against Kubernetes cost overrun is making best practices the core of your setup and maintenance.

Who’s the Owner?

In some organizations, ownership of specific costs or resources is unclear. Ownership clarifies responsibility and turns passive usage back into active management. For every process and for every cost, “Who’s the owner?” needs a clear answer.
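As an illustrative sketch (the inventory format, resource names and the `owner` label key are assumptions, not a real cluster API), a small script can flag every resource that no one owns:

```python
# Hypothetical sketch: flag resources whose labels lack an "owner" entry,
# so every cost line can be traced back to a responsible team.

def find_unowned(resources, owner_label="owner"):
    """Return the names of resources that have no owner label."""
    return [r["name"] for r in resources if owner_label not in r.get("labels", {})]

# Made-up inventory, e.g. exported from a cluster audit.
inventory = [
    {"name": "payments-api", "labels": {"owner": "payments-team"}},
    {"name": "batch-job-42", "labels": {}},
    {"name": "legacy-cache"},
]

print(find_unowned(inventory))  # → ['batch-job-42', 'legacy-cache']
```

Running a check like this on a schedule surfaces unowned spend before it accumulates.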

Use a Different Model: Consider Alternatives to Shared or Dedicated Clusters

All-you-can-eat self-service without a process robs engineering teams of a framework for making resource decisions. In the end, not having a process encourages usage without thoughtful limitations. As noted earlier, you need a cost allocation process that your teams know and follow.

Shared and dedicated cluster models each have their own tendencies toward sprawl and overspend. In most instances, organizations should evaluate requests from application teams and provide dedicated clusters only when necessary. By default, most teams should use shared clusters.

That said, without allocating shared costs, you lose transparency and accountability. Shared cluster resources should be allocated appropriately so users and applications have the access they need. Otherwise, some applications may starve without resources while others overspend.

If you stay with a shared model, you can still divide costs evenly, proportionally to usage or by a fixed proportion each month. You can even apply different models to different costs, splitting support charges one way and resource charges another, for instance.
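The three splitting approaches above can be sketched in a few lines of Python; the team names, usage figures and dollar amounts are purely illustrative:

```python
# Illustrative sketch of three shared-cost allocation models.

def split_evenly(total_cost, teams):
    """Divide a shared cost equally among teams."""
    share = total_cost / len(teams)
    return {team: share for team in teams}

def split_by_usage(total_cost, usage_by_team):
    """Divide a shared cost proportionally to each team's measured usage."""
    total_usage = sum(usage_by_team.values())
    return {team: total_cost * used / total_usage
            for team, used in usage_by_team.items()}

def split_fixed(total_cost, proportions):
    """Divide a shared cost by fixed proportions agreed up front."""
    return {team: total_cost * p for team, p in proportions.items()}

usage = {"payments": 600, "search": 300, "internal-tools": 100}  # e.g. CPU-hours
print(split_evenly(9000, list(usage)))
print(split_by_usage(9000, usage))  # payments pays the most: it used the most
print(split_fixed(9000, {"payments": 0.5, "search": 0.3, "internal-tools": 0.2}))
```

Mixing models is then just a matter of applying a different function to each cost category.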

One advantage of dedicated clusters is their isolation from one another. This makes it possible to give different teams their own master nodes and resources to use. Although you gain visibility with dedicated clusters, you may also see dramatic cost increases for the same resource usage compared with a shared model.


Consider a Chargeback Model

Cost accountability is the big advantage of chargeback. A chargeback model treats IT as an internal supplier and charges department or project budgets for the resources they use. Make each cost center responsible for its own cost overruns, and you will build additional accountability into the system.

Economic theory teaches us that moral hazard, otherwise known as the “freeloading” tendency, is an ever-present risk in business and in organizations. Whenever there is a finite, shared resource such as Kubernetes spending, there is a risk of cost overrun. The chargeback model has a successful track record of reducing costs and could be worth considering for your organization.
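A minimal chargeback sketch might look like the following; the internal rates, cost-center names and usage figures are invented for illustration:

```python
# Hypothetical chargeback sketch: bill each cost center for the resources
# it consumed at agreed internal rates (made-up numbers).

RATES = {"cpu_hours": 0.04, "gb_ram_hours": 0.005, "gb_storage_months": 0.10}

def chargeback(usage_by_center, rates=RATES):
    """Return the internal monthly bill for each cost center."""
    return {
        center: round(sum(rates[item] * qty for item, qty in usage.items()), 2)
        for center, usage in usage_by_center.items()
    }

monthly_usage = {
    "checkout-team": {"cpu_hours": 50_000, "gb_ram_hours": 200_000,
                      "gb_storage_months": 500},
    "analytics-team": {"cpu_hours": 120_000, "gb_ram_hours": 400_000,
                       "gb_storage_months": 4_000},
}
print(chargeback(monthly_usage))
```

Because each team sees a bill tied to its own consumption, the freeloading incentive largely disappears.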

Embrace FinOps Best Practices and Principles

Practicing FinOps is invaluable because it empowers teams to make the right trade-offs and avoid needlessly shortchanging quality, product delivery or other metrics. By providing real-time visibility to all stakeholders, including application teams, organizations can plan and act more effectively, and administrators are better positioned to respond. Actionable data through FinOps practices allows cross-functional teams to manage cloud spend more holistically.

Conclusion: The Path Forward through FinOps

Since cost overrun happens when best practices aren’t followed, organizations must overcome visibility problems early. FinOps principles provide a path toward sensible Kubernetes cost allocation and bring resource management in line with business objectives.

Effective cost allocation also requires having granular data. Without usage data, you can’t make usage decisions, nor can you adjust practices and processes.

TNS owner Insight Partners is an investor in: Pragma.