
Leveraging Namespaces for Cost Optimization with Kubernetes

You can use Kubernetes namespaces to set resource requests and limits to ensure that your clusters have the correct resources for optimal performance.
Dec 23rd, 2022 6:55am by
Feature image via Pixabay

Kubernetes is a powerful container orchestration system with features that make it attractive to organizations, including the ability to automatically scale containerized workloads and automate deployments. However, the ease of deploying and scaling cloud applications can lead to skyrocketing expenses if they aren't managed correctly, so cost optimization is an important consideration when running a Kubernetes cluster.

You can manage the costs associated with a Kubernetes cluster in several ways, for example, by using lower-cost hardware for nodes, cheaper storage options or a lower-cost networking solution. However, these cost-saving measures inevitably affect the performance of the Kubernetes cluster. So before downgrading your infrastructure, it's worth exploring alternatives. Leveraging Kubernetes namespaces to organize and manage your resources is one option that can help your organization save costs.

In this article, you’ll learn about the following:

  • Kubernetes namespaces and their role from a cost optimization perspective.
  • Identifying resource usage in namespaces.
  • Resource quotas and limit ranges.
  • Setting up resource quotas and limit ranges in Kubernetes.
  • Benefits of x-as-a-service (XaaS) solutions with built-in cost optimization features.

Kubernetes Namespaces: What They Are and Why They Are Useful for Cost Optimization

You can think of namespaces as a way to divide a Kubernetes cluster into multiple virtual clusters, each with its own set of resources. This allows you to use the same cluster for multiple teams, such as development, testing, quality assurance or staging.

Kubernetes namespaces are first-class API objects. When you create a namespace, you give it a name that identifies it, and namespaced resources such as pods, services and deployments then record in their metadata which namespace they belong to.

You can use namespaces to control access to the cluster. For example, you can allow developers to access the development namespace but not the production namespace. This can be done by creating a role that has access to the development namespace and adding the developers to that role.
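As a sketch, a namespace-scoped Role and RoleBinding along these lines would grant a developers group access to a development namespace while leaving production untouched (the namespace, role and group names here are illustrative):

```yaml
# Role granting read/write access to common resources, scoped to "development"
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: development
  name: dev-access
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "services", "deployments"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
---
# RoleBinding attaching the "developers" group to that Role; because both
# objects are scoped to "development", the production namespace is unaffected
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: development
  name: dev-access-binding
subjects:
  - kind: Group
    name: developers
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: dev-access
  apiGroup: rbac.authorization.k8s.io
```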

You can also use namespaces to control the resources available to the applications that run in them. This is done through resource quotas and limit ranges, two objects discussed later in this article. Setting such resource limits is invaluable in terms of cost optimization because it prevents resource waste and thus saves money. Moreover, with proper monitoring, inactive or underused namespaces can be detected and shut down if necessary to save even more resources.

In short, you can use Kubernetes namespaces to set resource requests and limits to ensure that your Kubernetes clusters have enough resources for optimal performance. This will minimize over-provisioning or under-provisioning of your applications.

Identifying Namespace Resource Usage

Before you can right-size your applications, you must first identify namespace resource usage.

In this section, you’ll learn how to inspect Kubernetes namespaces using the kubectl command line tool. Before proceeding, you’ll need the following:

  • kubectl installed and configured on your local machine.
  • Access to a Kubernetes cluster with Metrics Server installed. The Kubernetes Metrics Server is indispensable for collecting metrics and using the kubectl top command.
  • This repository cloned to a suitable location on your local machine.
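If you're not sure whether Metrics Server is available, a quick sanity check is to query node metrics; the command returns an error rather than usage figures when the Metrics API is missing:

```shell
# Returns per-node CPU and memory usage if Metrics Server is installed
kubectl top nodes
```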

Inspecting Namespace Resources Using kubectl

Start by creating a namespace called ns1:
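Creating the namespace takes a single command:

```shell
kubectl create namespace ns1
```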

Next, navigate to the root directory of the repository you just cloned and deploy the app1 application in the ns1 namespace, as shown below:
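Assuming the manifest in the repository is named app1.yaml (the file name is an assumption; use whatever the repository provides), the deployment step looks like:

```shell
# Apply the app1 manifest into the ns1 namespace
kubectl apply -f app1.yaml -n ns1
```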

app1 is a simple php-apache server based on the image:
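The exact manifest lives in the cloned repository; a sketch consistent with the description (five replicas, port 80, a service called app1, and a small memory request with no limits) might look like the following. The image is an assumption, using the php-apache demo image commonly seen in Kubernetes examples:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1
  namespace: ns1
spec:
  replicas: 5
  selector:
    matchLabels:
      app: app1
  template:
    metadata:
      labels:
        app: app1
    spec:
      containers:
        - name: php-apache
          image: registry.k8s.io/hpa-example  # assumed image; the repo may pin another
          ports:
            - containerPort: 80
          resources:
            requests:
              cpu: 100m
              memory: 8Mi   # note: requests only, no limits set
---
apiVersion: v1
kind: Service
metadata:
  name: app1
  namespace: ns1
spec:
  selector:
    app: app1
  ports:
    - port: 80
```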

As you can see, it deploys five replicas of the application, which listens on port 80 through a service called app1.

Now, deploy the app2 application in the ns1 namespace:
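As with app1, the file name here is an assumption:

```shell
kubectl apply -f app2.yaml -n ns1
```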

app2 is a dummy app that launches a BusyBox-based application that waits forever:
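A minimal manifest matching that description could look like this (names are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app2
  namespace: ns1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app2
  template:
    metadata:
      labels:
        app: app2
    spec:
      containers:
        - name: busybox
          image: busybox
          # Keep the container alive doing nothing, forever
          command: ["sh", "-c", "while true; do sleep 3600; done"]
```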

You can now use the command kubectl get all to check all the resources that the ns1 namespace uses, as shown below:
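For example:

```shell
kubectl get all -n ns1    # list the workloads, services and replica sets in ns1
kubectl top pods -n ns1   # per-pod CPU and memory usage (requires Metrics Server)
```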

As you can see, the kubectl command line tool lets you take a quick look at the activity within the namespace, list the resources in use, and get an idea of the pods' CPU and memory consumption. Additionally, you can use the command kubectl api-resources --verbs=list --namespaced -o name | xargs -n 1 kubectl get --show-kind --ignore-not-found -n <namespace> to get an idea of how often the resources in the namespace are used:

This command lists the resources in use as well as the activity time of each. It can also help detect some status messages like Back-off restarting failed container, which could indicate problems that need to be addressed. Checking the endpoint activity messages is also useful for inferring when a namespace or workload has been idle for a long time, thus identifying resources or namespaces that are no longer in use and that you can delete.

That said, other situations can also lead to wasted resources. Let’s go back to the output of kubectl top pods -n ns1:

Imagine if app2 was a new feature test that someone forgot to remove. This might not seem like much of a problem, as its CPU and memory consumption are negligible; however, left unattended, pods like this could start stacking up uncontrollably and hurt the control-plane scheduling performance. The same issue applies to app1; it consumes almost no CPU, but since it has no set memory limits, it could quickly consume resources if it starts scaling.

Fortunately, you can implement resource quotas and limit ranges in your namespaces to prevent these and other potentially costly situations.

Resource Quotas and Limit Ranges

This section explains how to use two Kubernetes objects, ResourceQuota and LimitRange, to minimize the negative effects mentioned previously: pods with low resource utilization that can nevertheless fill your clusters with requests and objects the namespace doesn't actually use.

According to the documentation, the ResourceQuota object “provides constraints that limit aggregate resource consumption per namespace,” while the LimitRange object provides “a policy to constrain the resource allocations (limits and requests) that you can specify for each applicable object kind (such as pod or PersistentVolumeClaim) in a namespace.”

In other words, using these two objects, you can restrict resources both at the namespace level and at the pod and container level. To elaborate:

  • ResourceQuota allows you to limit the total resource consumption of a namespace. For example, you can create a namespace dedicated to testing and set CPU and memory limits to ensure that users don’t overspend resources. Furthermore, ResourceQuota also allows you to set limits on storage resources and limits on the total number of certain objects, such as ConfigMaps, cron jobs, secrets, services and PersistentVolumeClaims.
  • LimitRange allows you to set constraints at the pod and container level instead of at the namespace level. This ensures that an application does not consume all the resources allocated via ResourceQuota.
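As a sketch, a ResourceQuota that caps a namespace's aggregate compute usage and object counts might look like this (the name and all values are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-quota
  namespace: ns1
spec:
  hard:
    # Aggregate compute limits across all pods in the namespace
    requests.cpu: "2"
    requests.memory: 2Gi
    limits.cpu: "4"
    limits.memory: 4Gi
    # Caps on the number of certain objects
    persistentvolumeclaims: "5"
    configmaps: "10"
    secrets: "10"
    services: "5"
```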

The best way to understand these concepts is to put them into practice.

Because both ResourceQuota and LimitRange only affect pods created after they’re deployed, first delete the applications to clean up the cluster:
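For example:

```shell
# Remove the two test deployments and app1's service from ns1
kubectl delete deployment app1 app2 -n ns1
kubectl delete service app1 -n ns1
```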

Next, create the restrictive-resource-limits policy by deploying a LimitRange resource:
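The file name here is an assumption:

```shell
kubectl apply -f limit-range.yaml -n ns1
```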

The command above uses the following code:
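A LimitRange consistent with the constraints described below might look like this; the maximum values are assumptions, while the 10 Mi memory minimum matches the behavior observed later:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: restrictive-resource-limits
  namespace: ns1
spec:
  limits:
    - type: Container
      max:
        cpu: 500m
        memory: 100Mi
      min:
        cpu: 10m
        memory: 10Mi   # app1's 8Mi request falls below this minimum
```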

As you can see, limits are set at the container level for the maximum and minimum CPU and memory usage. You can use kubectl describe to review this policy in the console:
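For example:

```shell
# Show the min/max constraints enforced by the policy
kubectl describe limitrange restrictive-resource-limits -n ns1
```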

Now try to deploy app1 again:
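Using the same manifest as before (file name is an assumption):

```shell
kubectl apply -f app1.yaml -n ns1
```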

Then, check deployments in the ns1 namespace:
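For example:

```shell
kubectl get deployments -n ns1   # app1 shows 0/5 pods ready
kubectl get events -n ns1        # events explain why pod creation was rejected
```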

The policy implemented by restrictive-resource-limits prevented the pods from being created. This is because the policy requires a minimum of 10 mebibytes (Mi) of memory per container, but app1 only requests 8 Mi. Although this is just an example, it shows how you can avoid cluttering up a namespace with tiny pods and containers.

Let’s review how limit ranges and resource quotas can complement each other to achieve resource management at different levels. Before continui