CAST AI sponsored this post.
In theory, containerization should make your workloads more cost-effective by default, but Kubernetes is riddled with cost traps that can push you over budget. Fortunately, you have a few tactics for keeping cloud costs at bay, and autoscaling is one of them. Kubernetes comes with three built-in autoscaling mechanisms to help you do that, and the tighter they're configured, the lower the cost of running your application.
Keep on reading to learn how these autoscaling mechanisms help to reduce your AWS bill for Kubernetes.
1. Horizontal Pod Autoscaler (HPA)
Many applications experience fluctuating usage, which means that adding or removing pod replicas is in your best interest. This is where Horizontal Pod Autoscaler (HPA) helps by doing that automatically.
When to Use HPA?
It works great for scaling stateless applications, but it's also a good match for StatefulSets. To get the highest cost savings for workloads where demand changes regularly, use HPA together with cluster autoscaling. This will reduce the number of active nodes when the number of pods decreases.
How Does HPA Work?
HPA monitors pods to understand whether the number of pod replicas needs to change. To determine this, it takes the mean of a per-pod metric value and checks whether removing or adding replicas would bring that value closer to the target.
For example, if your deployment’s target CPU utilization is 50% and the five pods currently running average 75%, the HPA controller will add three replicas to bring the mean closer to the target.
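Under the hood, the HPA controller follows a documented formula: desiredReplicas = ceil(currentReplicas × currentMetricValue / desiredMetricValue). Here is a minimal Python sketch of that calculation (ignoring the controller's tolerance window and stabilization behavior):

```python
import math

def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float) -> int:
    """Approximate the HPA scaling formula:
    desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric)
    """
    return math.ceil(current_replicas * current_metric / target_metric)

# Five pods averaging 75% CPU utilization against a 50% target:
print(desired_replicas(5, 75.0, 50.0))  # -> 8, i.e., three new replicas
```

Running it on the example above, the controller scales from five to eight pods, which pulls the average utilization back down toward the 50% target.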
HPA Best Practices
- Provide HPA with a source of per-pod resource metrics: You need to install metrics-server in your Kubernetes cluster.
- Configure resource requests for every container: HPA makes scaling decisions based on the observed CPU utilization of pods, calculated as a percentage of the CPU requests of individual containers. If some containers lack request values, the calculations will be inaccurate and lead to poor scaling decisions. So configure requests for every single container in every pod that is part of the workload scaled by HPA.
- Use custom metrics: Another source of HPA’s scaling decisions is custom metrics. HPA supports two types of custom metrics: pod metrics and object metrics. Make sure to use the right target type. You can also use external metrics from third-party monitoring systems. (Note that securing an external metrics API might be more challenging.)
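To make these practices concrete, here is a minimal HPA manifest; the Deployment name `web` and the replica bounds are hypothetical placeholders, and the `autoscaling/v2` API assumes metrics-server is installed in the cluster:

```yaml
# Hypothetical example: scale a Deployment named "web" between 2 and 10
# replicas, targeting 50% average CPU utilization across its pods.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50
```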
2. Vertical Pod Autoscaler (VPA)
This autoscaling mechanism increases and reduces the CPU and memory resource requests of pod containers to align the allocated cluster resources with actual usage. Like HPA, VPA needs access to the Kubernetes metrics server, and it acts only on pods that are managed by a replication controller.
Tip: Use VPA and HPA at the same time if your HPA configuration doesn’t use CPU or memory to set its scaling targets.
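As a sketch of that tip, an HPA metrics stanza can target a custom pod metric instead of CPU or memory; `requests_per_second` below is a hypothetical metric name that would have to be exposed through a custom metrics adapter (for example, a Prometheus adapter) in your cluster:

```yaml
# Fragment of an autoscaling/v2 HPA spec using a custom pod metric,
# leaving CPU and memory free for VPA to manage.
metrics:
  - type: Pods
    pods:
      metric:
        name: requests_per_second
      target:
        type: AverageValue
        averageValue: "100"
```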
When to Use VPA?
A workload might experience high utilization at one point or another, but permanently increasing its resource requests and limits is a bad idea: you risk wasting CPU or memory and limiting which nodes can run the workload. Spreading a workload across multiple application instances is tricky too; this is where Vertical Pod Autoscaler helps.
How Does VPA Work?
VPA deployment consists of three components:
- Recommender: Monitors resource utilization and calculates target values
- Updater: Checks whether pods’ resource requests require updating
- Admission Controller: Overwrites the resource requests of pods when they’re created
Since Kubernetes doesn’t allow changing the resource requests of a running pod, VPA first terminates pods using outdated values and then injects the updated values into the new pods’ specifications.
VPA Best Practices
- Avoid using VPA with Kubernetes versions older than 1.11; if you have to, use VPA version 0.3.
- Run VPA with updateMode: “Off” to understand the resource usage of the pods you’re looking to autoscale. This will give you the recommended CPU and memory requests, which are a great foundation for later adjustments.
- If a workload experiences regular spikes of high and low usage, VPA might be too aggressive because it might keep on replacing pods over and over again. HPA works better in such scenarios.
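Following the recommendation-only advice above, a minimal VPA object might look like the sketch below. It assumes the VPA custom resource definitions are installed in the cluster, and the Deployment name `web` is a placeholder:

```yaml
# Hypothetical example: observe a Deployment named "web" in
# recommendation-only mode. With updateMode: "Off", VPA computes CPU and
# memory recommendations without evicting any pods; inspect them with
# `kubectl describe vpa web-vpa`.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: web-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  updatePolicy:
    updateMode: "Off"
```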
3. Cluster Autoscaler
Cluster Autoscaler alters the number of nodes in a cluster on supported platforms. Since the autoscaler controller works at the infrastructure level, it needs permissions to add and delete virtual machines, and you should manage these credentials securely (for example, by following the principle of least privilege).
When to Use Cluster Autoscaler?
This autoscaling mechanism works well if you’re looking to optimize costs by dynamically scaling the number of nodes to fit the current cluster utilization. It’s a great tool for workloads designed to scale and meet dynamic demand.
How Does Cluster Autoscaler Work?
It checks for unschedulable pods and then calculates whether it’s possible to consolidate all of the pods deployed currently to run them on a smaller number of nodes. If Cluster Autoscaler identifies a node with pods that can be rescheduled to other nodes in the cluster, it evicts them and removes the spare node.
Cluster Autoscaler Best Practices
- When deploying Cluster Autoscaler, use it with the recommended Kubernetes version. (Here’s a handy compatibility list).
- Check whether the nodes in a node group have the same CPU and memory capacity: Cluster Autoscaler won’t work correctly otherwise, because it assumes that every node in the group has the same capacity.
- Make sure that all the pods scheduled to run in a node or instance group for autoscaling have specified resource requests.
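As a sketch of that last point, here is a pod spec with explicit resource requests; the names, image, and values are placeholders. Cluster Autoscaler uses requests like these to decide whether a pending pod fits on an existing node or a new node must be provisioned:

```yaml
# Hypothetical example: a container with explicit resource requests,
# which Cluster Autoscaler needs for its scheduling simulations.
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: app
      image: nginx:1.25   # placeholder image
      resources:
        requests:
          cpu: "250m"
          memory: "256Mi"
        limits:
          cpu: "500m"
          memory: "512Mi"
```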
Why Automating Kubernetes Scaling Is a Good Idea
These native autoscaling mechanisms are incredibly valuable for keeping cloud costs at bay, but they require significant manual configuration:
- Preventing HPA and VPA clashes: You need to check whether your HPA and VPA policies end up clashing. Keep a close eye on costs to prevent them from getting out of hand.
- Diversified allocation and spot instances together: Adopting a diversified allocation strategy and using spot instances are two powerful cost-saving activities that are hard to coordinate manually.
- Balancing all three mechanisms: You need a balanced combination of all three mechanisms to ensure that workloads support peak load and keep costs to the minimum during times of lower demand.
You probably see why automating this aspect of running Kubernetes clusters is a smart move. Just to give you an example, tools such as CAST AI can add new nodes automatically for the duration of the increased demand and then scale down immediately to reduce waste.
Here’s an example of what an automated autoscaling flow looks like:
- When the application experiences a surge of traffic, Horizontal Pod Autoscaler creates new pods. But there’s no capacity left to run them: the cluster needs 15.5 additional CPU cores.
- CAST AI automatically adds a new 16-core node within two minutes.
- But look what happens at 15:45: Even more traffic hits the application.
- To make it work, CAST AI adds an extra eight-core node within one minute.
- Once the traffic is gone, the platform instantly retires two nodes to help avoid resource waste. CAST AI chose spot instances for the additional work at a 70% discount to drive costs down further.
Amazon Cloud is a sponsor of The New Stack.
Featured image via Pixabay.