How to Optimize Java Apps on Kubernetes
Java is an object-oriented language that’s been around since the 1990s. Originally designed to make application development easier, consistent upgrades through the years have kept Java up to date and increased its cross-platform capabilities. Consequently, Java is still a favorite for many developers, even in the modern world of containers and Kubernetes.
While Java is a favorite for many reasons, it also poses significant resource management challenges, chief among them resource contention. Kubernetes solves part of this problem by isolating workloads and limiting how much they can contend for shared resources.
Before containers, we would deploy processes onto a machine and give them access to all its memory and CPU processing time. This meant that each process viewed the unused CPU time and memory as its own. It’s like having a bunch of children who all believe they own the playground. Conflicts frequently occurred when multiple processes claimed the same resources at the same time.
Fast forward to today.
Kubernetes allows us to deploy containers to a machine and restrict the resources that the containers can see and use. You can think of it as subdividing that playground with walls to ensure each child has access only to their own allocated play area. This gives Kubernetes greater control over resource use within a cluster. But key challenges remain in running Java apps efficiently on Kubernetes.
This article will delve into those issues and cover best practices for optimizing Java apps on Kubernetes.
Why Optimizing a Java Application Is Painful
While you can tune a Java application for cost or performance, optimizing is about providing the best tradeoff between these objectives given your specific business goals. In other words, providing the right level of performance to meet service-level agreements (SLAs) at the lowest possible cost. There isn’t a one-size-fits-all solution. For instance, a transaction-focused app would value throughput while a computationally intensive app might put more value on completion time. Therefore, it’s essential to understand the priorities for your application to create the proper blend of cost with performance.
Another factor to consider is that a Java application runs on a Java virtual machine (JVM), whose heap size is configured separately from the memory the deployment allocates to the container. We’ll see how this works later in this article.
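To make that interaction concrete, here is a minimal sketch of a Deployment that sizes the heap relative to the container’s memory limit. The names and image are illustrative, not from the demo project; `JAVA_TOOL_OPTIONS` and `-XX:MaxRAMPercentage` are standard JVM mechanisms.

```yaml
# Hypothetical Deployment excerpt: the container gets a 1 GiB memory
# limit, and the JVM is told to use at most 75% of that for its heap.
# If the heap were allowed to exceed the container limit, the pod
# would be OOM-killed.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: java-app            # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: java-app
  template:
    metadata:
      labels:
        app: java-app
    spec:
      containers:
        - name: java-app
          image: example/java-app:latest   # placeholder image
          env:
            - name: JAVA_TOOL_OPTIONS
              value: "-XX:MaxRAMPercentage=75.0"  # heap capped at ~768 MiB
          resources:
            limits:
              memory: 1Gi
```

Sizing the heap as a percentage rather than a fixed `-Xmx` value keeps the two settings in step when the container’s memory limit changes.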
When deploying a Java application as a microservice without specifying requests and limits, Kubernetes determines the resources to be allocated. The challenge here is that it’s usually very generous with the assigned resources, making our cluster expensive to operate. Suppose we decide to allocate the resources manually. Then we must choose one of these three options:
- Over-provision resources to ensure we aren’t frequently facing out-of-memory or CPU throttling scenarios
- Scale back resources and accept the risks of out-of-memory errors and CPU throttling
- Spend significant time and effort determining the right balance of resources through manual trial and error
What about the point where the resources aren’t too high or too low and the performance is just right? Yes, there is such a point. However, manually getting to that “sweet spot” is like searching for a needle in a haystack. It’s a long, near-impossible task. We would need to configure CPU limits/requests, memory limits/requests, replicas, the JVM heap, garbage collection and many other parameters. And a change in any of these parameters will affect overall resource use and application performance. It’s a balancing act that can quickly get out of hand.
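To make the size of that search space concrete, here is a hedged sketch of the knobs involved for a single container. The specific values are illustrative, not recommendations; the JVM flags are standard options.

```yaml
# Each of these values interacts with the others: raising the memory
# limit without raising the heap wastes money; raising the heap without
# the limit risks OOM kills; changing the garbage collector shifts CPU
# needs. Multiply by the number of replicas and services, and manual
# tuning quickly becomes intractable.
resources:
  requests:
    cpu: 500m        # CPU guaranteed to the pod
    memory: 768Mi    # memory guaranteed to the pod
  limits:
    cpu: "1"         # threshold above which the pod is throttled
    memory: 1Gi      # exceeding this gets the pod OOM-killed
env:
  - name: JAVA_TOOL_OPTIONS
    value: "-Xmx640m -XX:+UseG1GC"  # heap ceiling and GC algorithm
# Plus: replica count, further GC tuning flags, thread pools, ...
```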
But we don’t have to take risks or guess. Machine learning-based optimization solutions are available that allow us to run experiments on our Kubernetes-based Java applications and recommend configurations with the best performance at the lowest possible cost.
For the remainder of this article, we’ll walk step by step through the process of optimizing an example Java app using one such solution, StormForge Optimize Pro. Note that this process will be different if you’re using a different optimization solution.
Java Microservice Optimization Demo
This demo explains how to optimize a Java application running in Kubernetes to get the best performance at the lowest possible resource cost. We do this in a non-prod environment using a process of experimentation.
For example, let’s create an experiment that runs 80 trials with different parameters, measures the cost and performance of each, and then recommends optimal configurations.
Before we start, here’s what we need:
- A Kubernetes cluster; in our case, we’ll use minikube 1.26 to run a local Kubernetes 1.21 cluster
- kubectl version 1.21 installed
- Git CLI tools installed
- The StormForge CLI installed
- A StormForge Optimize Pro account
Creating the Cluster and Configuring the Necessary Tools
To create a Kubernetes cluster using version 1.21, run this command:
$ minikube start --kubernetes-version v1.21.0
Set up your kubeconfig by running these three commands separately:
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
Check the configuration:
$ kubectl get nodes
Download and install the StormForge CLI from the install documentation.
Log in to your StormForge account:
$ stormforge login --url
Verify the API connection:
$ stormforge ping
Initialize StormForge by running this command:
$ stormforge init
Verify that you configured the StormForge Controller properly:
$ stormforge check controller
Creating and Running Our Experiment
In this section, we will download the demo project, create a deployment, and create and run an experiment that consists of 80 trials. After the experiment is complete, we’ll view the recommended configurations and apply one of them.
Before starting our experiment, let’s do one last check.
Jump to your StormForge app and confirm that you added a cluster. It should appear on the cluster page like this:
If the cluster is there, you are ready for takeoff! If you can’t see it, refresh the web page.
Clone the example experiments repo:
$ git clone https://github.com/thestormforge/examples.git
$ cd examples
Create and launch the Java-tuning experiment:
$ kubectl apply -f jvm/experiment.yaml
Check the current status of the experiment by running this command:
$ kubectl get trials -o wide -w
You should now see the experiment status similar to the screenshot below.
When complete, go back to your StormForge app and navigate to the experiment by first clicking on the jvm-reactors application.
Click the renaissance-associated scenario.
Click the jvm.reactors experiment run.
From the experiment results screen, choose the highlighted recommended trial. It should look like this:
Run this command to get the pod name of the trial:
$ kubectl get pods
It should look like this:
After getting the pod name, run this command to view the pod configurations:
$ kubectl describe pod <pod name>
The blue box in the screenshot below shows us the recommended configurations:
Here is a table summarizing the changes that will give the best performance at the lowest possible cost.
Click Export the configuration to download the YAML file containing the optimal configurations. That’s it. You now have an ML-discovered configuration for your Java microservice, optimized for duration and cost by tuning memory, CPU, heap size and garbage collection.
The Java programming language has been around for many years and has been used to build powerful apps. However, running a Java application on Kubernetes presents a unique challenge because of the multitude of configuration settings that need to be considered for each unique application. If not correctly configured, the application can fail to run as expected and drastically increase costs.
StormForge uses machine learning to help you find answers to questions such as “What configurations would result in the best performance at the lowest possible price?” Beyond Java, StormForge can optimize any workload on Kubernetes. Use it to maximize the value of your Kubernetes cluster.
Stay tuned for the final article in the Kubernetes resource management and optimization series covering database optimization.