Cloud Native / Kubernetes / Contributed

KubeCost: Monitor Kubernetes Costs with kubectl

21 May 2021 9:20am, by

Michael Dresser
Michael Dresser is a full-stack engineer at Stackwatch, creator of Kubecost. Dresser was previously at Google where he contributed to the Kubernetes project.

You already know you can take full control of all resources in a Kubernetes cluster using the kubectl client. Now, a new open source kubectl plugin brings cost monitoring to kubectl as well. The cost plugin, working in tandem with the open source Kubecost application, lets any engineering team quickly determine the cost and efficiency of any Kubernetes workload.

Modern cloud infrastructure is increasingly complex, challenging teams well beyond operations and engineering. Financial controllers are under extreme pressure to allocate costs in order to monitor and improve the financial performance of teams, and they turn to engineering for answers. Collaboration between finance, operations and engineering enhances visibility into modern cloud workflows. As technology evolves, so does corporate culture. The cost plugin for kubectl is an answer to the challenges of modern enterprise infrastructure.

Shining Light on Infrastructure Costs

Kubernetes clusters are often shared across teams, microservices, applications, and even departments, making infrastructure simpler to manage. With a shared platform, teams often use labels and/or namespaces to organize deployments. A Kubernetes namespace is a logical separation inside a Kubernetes cluster which could be assigned to a particular team, application, or even a business unit.

Most organizations map a namespace to a specific workload type or purpose. For example, the fictitious e-commerce company VeryCoolStore runs a cluster with one namespace for monitoring and one namespace for logging for use by their DevOps teams who maintain the cluster. The customer-facing web frontend application, the search and the product suggestion applications are hosted in that same cluster in different namespaces.

Creating these logical divisions inside a cluster is convenient but doesn’t solve all problems. First, it still doesn’t allow accurate measurement of resource usage and allocation of costs to each tenant based on detailed billing data. More importantly, it doesn’t expose inefficiencies or wasted resources.

Waste is a huge problem whose effects pile up all the way to the final consumer, often with a large impact on the price of units sold. No wonder reducing waste is a corporate mandate for many managers.

To discover waste and improve efficiency, you need the appropriate tools. A Kubecost user leading an SRE team kept seeing a service fail and noticed it was being evicted from the cluster due to a lack of resources. With Kubecost reports and kubectl cost, the team discovered the root cause: an application was requesting 30 pods that went largely unused.

The resources allocated to this overprovisioned application often forced the scheduler to evict other applications, causing them to fail. Reviewing cost and efficiency reports highlighted that those 30 pods were unnecessary. Once the application was reconfigured to request one pod instead of 30, things improved immediately, and the engineering team was happy to confirm that one pod was indeed enough.

How to Use Kubectl Cost

The cost plugin can be installed in minutes. If you use Krew, just type the following:

kubectl krew install cost

Alternatively, check the installation instructions on GitHub for different options.

There are a number of supported subcommands, including the following:

  • namespace
  • deployment
  • controller
  • label
  • pod
  • tui (to be covered in a future post!)

Each subcommand displays, by default, the projected monthly cost based on activity during the window. There is also a non-rate display mode (--historical) that shows the total cost accrued over the duration of the window.
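For example, assuming Kubecost is already running in the cluster, the two display modes of the namespace subcommand might look like this (the five-day window is an arbitrary choice for illustration):

```shell
# Projected monthly cost per namespace, extrapolated from the last 5 days
kubectl cost namespace --window 5d

# Total cost actually accrued during those 5 days (non-rate mode)
kubectl cost namespace --window 5d --historical
```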

How to Get the Most out of Kubectl Cost

In any complex environment, it's important to understand what drives costs and potential increases. Finance controllers have developed sophisticated models to assign costs in different scenarios: from factory production lines to hospital wards, finance and operations teams work together with the shared objective of discovering better ways to use their resources. With Kubecost, it's possible to find inefficiencies and improve team performance, in both financial and operational terms.

Most engineering teams organize Kubernetes costs by namespace or label. For example, the fictitious VeryCoolStore company from earlier might check the cost of its web frontend by querying that namespace.
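A sketch of such a query; the namespace subcommand breaks the cluster's cost down per namespace, so the frontend's namespace (whatever it is named) appears as one row of the output:

```shell
# Show projected monthly cost broken down by namespace;
# the web frontend's row gives the cost of the customer-facing app
kubectl cost namespace
```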

And to monitor the cost of each application in the cluster, as denoted by the “app” label:

kubectl cost label -l app

Measuring Spend Efficiency and Why It Matters

Spend efficiency in Kubecost is defined as the percentage of requested CPU and memory dollars actually utilized over the measured time window. Values range from 0% to above 100% (usage above the request pushes efficiency past 100%). For example, consider the table below, representing a namespace with two pods. Pod #1 runs on an on-demand node with a more expensive CPU; pod #2 runs on a spot node.

|        | CPU Request | CPU Monthly Cost | CPUs Used | Utilization | Cost of Used | Cost-Weighted Efficiency |
|--------|-------------|------------------|-----------|-------------|--------------|--------------------------|
| Pod #1 | 1.0         | $20              | 0.20      | 20.0%       | $4.00        | 20.0%                    |
| Pod #2 | 1.0         | $2               | 0.80      | 80.0%       | $1.60        | 80.0%                    |
| Total  | 2.0         | $22              | 1.00      | 50.0%       | $5.60        | 25.5%                    |

The resulting CPU cost efficiency is a mere 25.5%, even though CPU utilization is 50%. This measurement is important because it clearly shows where it's worth focusing to improve spending.
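The cost-weighted figure can be reproduced with a little shell arithmetic; this is just a sketch of the math, not how Kubecost itself computes it:

```shell
# Cost of used dollars = sum of (monthly cost x utilization) per pod:
# pod #1: $20 x 0.20 = $4.00, pod #2: $2 x 0.80 = $1.60
awk 'BEGIN {
  used  = 20 * 0.20 + 2 * 0.80    # $5.60 of the spend did real work
  total = 20 + 2                   # $22 total CPU spend
  printf "Cost-weighted efficiency: %.1f%%\n", 100 * used / total
}'
# prints "Cost-weighted efficiency: 25.5%"
```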

To investigate further, let's continue with the namespace example started above. Imagine one namespace is costing much more than expected and/or has very low efficiency. We can drill into that namespace with the controller subcommand, which shows the cost and efficiency of each workload: every deployment, replicaset, job, and so on:

kubectl cost controller -n kube-system

Finally, we can drill into the cost details with the -A flag. This shows all the components of cost, to fully understand what is driving total spend:

kubectl cost controller -n kube-system -A

Each resource type can now be tuned for your business. Most of our customers aim for utilization in the following ranges:

  • CPU: 50%-65%
  • Memory: 45%-60%
  • Storage: 65%-80%

Target figures are highly dependent on the predictability and distribution of your resource usage (e.g. P99 vs median), the impact of high utilization on your core product/business metrics, and more. Finding the ranges that work for you is a matter of balancing some trade-offs: too low resource utilization is wasteful; too high utilization can lead to latency increases, reliability issues, and other negative behavior. Looking at historical data can help strike the right balance.
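As a sketch of that balancing act, a tiny shell helper (hypothetical, not part of kubectl cost) can flag resources whose measured utilization falls outside a chosen target band:

```shell
# check <resource> <observed%> <low%> <high%>: compare utilization to a target band
check() {
  if [ "$2" -lt "$3" ]; then
    echo "$1: $2% - underutilized (target $3-$4%)"
  elif [ "$2" -gt "$4" ]; then
    echo "$1: $2% - overutilized (target $3-$4%)"
  else
    echo "$1: $2% - within target"
  fi
}

check CPU 25 50 65       # prints "CPU: 25% - underutilized (target 50-65%)"
check Memory 52 45 60    # prints "Memory: 52% - within target"
check Storage 90 65 80   # prints "Storage: 90% - overutilized (target 65-80%)"
```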

Kubernetes keeps expanding its reach, and as it grows, it poses new challenges to finance and engineering teams. The collaboration between them is in its infancy, but it already shows a clear path forward, with open source leading the way.

Feature image by Gianni Crestani from Pixabay.
