Kubernetes Isn’t Always the Right Choice
These days, you can encapsulate virtually any application in a container. Containers solve a lot of problems, but they introduce a new one: orchestration. Driven by the huge number of teams building cloud native applications, Kubernetes has gained significant popularity as a powerful tool for exactly that challenge.
Building in a well-managed Kubernetes environment offers numerous benefits such as autoscaling, self-healing, service discovery and load balancing. However, embracing the world of Kubernetes often implies more than just adopting container orchestration technology. Teams need to strategically consider, “Is Kubernetes the right choice for my solution?” And they must do so by evaluating several components of this broader question.
Is My Team Composition a Fit for Kubernetes?
There’s no shortage of articles praising the capabilities of Kubernetes (K8s), and that’s not what we aim to dispute. K8s is the right choice in many cases. That said, direct interaction with and maintenance of K8s isn’t appropriate for all teams and projects.
- Small startups with cloud native applications: These teams will find direct management of Kubernetes to be a complex, time-consuming distraction from their goal of releasing and scaling a product. Given their size, the teams will not have the bandwidth to manage Kubernetes clusters while also developing their application.
- Enterprise teams with a variety of application types: For larger teams with specialist skills, Kubernetes is an excellent choice. However, fully managed container runtimes or Kubernetes-as-a-service offerings should still be considered. These services allow limited DevOps resources to focus on team productivity, developer self-service, cost management and other critical items.
- Midsize companies with a DevOps culture: While these teams are more prepared for a move to Kubernetes, it’s a major project that will disrupt existing workflows. Again, managed offerings unlock many benefits of Kubernetes without significant investment.
- Software consultancies: While these teams are adaptable, relying on Kubernetes can limit their ability to serve clients with different needs, as it pushes the consultancy toward recommending it even when it’s not the best fit.
How Complex Is My Project? Is K8s Overkill?
Rather than determining whether K8s meets some of your requirements, consider identifying specific characteristics and requirements that do not align well with the capabilities of Kubernetes, or that introduce unnecessary complexity.
- Minimal scalability needs: If the project has consistently low traffic or predictable and steady resource demands without significant scaling requirements, Kubernetes will introduce unnecessary overhead. In these cases, managed container runtimes or virtual private server (VPS) solutions typically represent better value.
- Simple monolithic applications: If the project is a monolithic application with limited dependencies and doesn’t require independently scalable services or extremely high instance counts, Kubernetes is too complex for its needs.
- Static or limited infrastructure: If the project has small or static infrastructure without much variation in resource usage, then simpler deployment options such as managed services or VPS will suffice.
- Limited DevOps resources: Kubernetes requires expertise in container orchestration, which may not be feasible to acquire for projects with limited DevOps resources, or for teams unwilling to invest in learning Kubernetes. The benefits of containers can still be achieved without this additional investment.
- Prototyping and short-term projects: For projects with short development life cycles or limited production durations, the Kubernetes overhead cannot be justified.
- Project cost constraints: If the project has stringent budget constraints, the additional cost of setting up and maintaining a Kubernetes cluster will not be feasible. This is particularly true when considering the cost of the highly skilled team members required to do this work.
- Infrastructure requirements: Kubernetes can be resource-intensive, requiring robust infrastructure to run effectively. If your projects are small or medium-sized with modest resource requirements, using managed services or serverless is far more appropriate.
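For several of the scenarios above, the entire deployment story can fit in a single Compose file on a VPS. As a minimal sketch (the `myapp` image name, ports and credentials are placeholders, not a recommendation):

```yaml
# docker-compose.yml — a minimal sketch of a monolith plus database on one VPS.
# "myapp:latest" and all credentials/ports below are placeholder values.
services:
  app:
    image: myapp:latest            # your application image
    ports:
      - "80:8080"                  # expose the app on the host
    environment:
      DATABASE_URL: postgres://app:app@db:5432/app
    depends_on:
      - db
    restart: unless-stopped        # crude self-healing without an orchestrator
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: app
      POSTGRES_DB: app
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
```

The `restart: unless-stopped` policy covers basic recovery; anything beyond it, such as autoscaling or rolling updates across multiple machines, is where orchestration starts to earn its complexity.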
The complexity of your requirements alone won’t determine whether Kubernetes is a perfect fit or excessive for your team; however, it can help you lean one way or the other. Using Kubernetes directly won’t inherently elevate your product. Instead, its strength lies in providing a resilient platform on which your product can thrive.
The consequence is that the more you commit to building and maintaining that platform layer yourself, the further your development effort shifts away from the product that is the foundation of your business.
This unearths the real question: Are we building a platform or are we trying to expedite our time to market with more immediate return on investment for our core business objectives?
Do We Have the Necessary Skill Set?
Kubernetes is often recognized for its steep learning curve. What contributes to this complexity? To offer clarity, I’ve curated a list of topics, each rated by the effort required to get up to speed.
| Level | Description |
| --- | --- |
| Basic | Fundamental, easier concepts |
| Intermediate | Concepts needing some pre-existing knowledge |
| Advanced | Complex concepts requiring extensive knowledge |
Note: These complexity levels will vary based on individual background and prior experience.
| Topic | Description | Complexity |
| --- | --- | --- |
| Containerization | Understanding of containers and tools like Docker. | Basic |
| Kubernetes architecture | Knowledge of pods, services, deployments, ReplicaSets, nodes and clusters. | Intermediate |
| Kubernetes API and objects | Understanding the declarative approach of Kubernetes, using APIs and YAML. | Intermediate |
| Networking | Understanding of inter-pod communication, services, ingress, network policies and service mesh. | Advanced |
| Storage | Knowledge of volumes, persistent volumes (PV), persistent volume claims (PVC) and storage classes. | Advanced |
| Security | Understanding of Kubernetes security, including RBAC, security contexts, network policies and pod security policies. | Advanced |
| Observability | Familiarity with monitoring, logging and tracing tools like Prometheus, Grafana, Fluentd and Jaeger. | Intermediate |
| CI/CD in Kubernetes | Integration of Kubernetes with CI/CD tools such as Jenkins or GitLab, and use of Helm charts for deployment. | Intermediate |
| Kubernetes best practices | Familiarity with best practices and common pitfalls in the use of Kubernetes. | Intermediate to Advanced |
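To make the “declarative approach” row concrete, here is a minimal sketch of the kind of YAML a team would write and maintain; the `myapp` name and image tag are placeholders:

```yaml
# deployment.yaml — a minimal Deployment declaring desired state:
# three replicas of a container image, which Kubernetes works to maintain.
# "myapp" and "myapp:1.0.0" are placeholder names.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3                  # desired instance count; a ReplicaSet enforces it
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:1.0.0
          ports:
            - containerPort: 8080
```

Applied with `kubectl apply -f deployment.yaml`, this touches only the architecture and API rows above; networking, storage, security and observability each bring additional objects and YAML of their own.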
For teams that lack the necessary expertise or the time to learn, the overall development and deployment process can become overwhelming and slow, which is unhealthy for projects with tight timelines or small teams.
What Are the Cost Implications?
While Kubernetes itself is open source and free, running it is not. You’ll need to account for the expenses associated with the infrastructure, including the cost of servers, storage and networking as well as hidden costs.
The first hidden cost lies in its management and maintenance — the time and resources spent on training your team, troubleshooting, maintaining the system, maintaining internal workflows and self-service infrastructure.
Many teams overlook the salaries of the highly skilled employees required for this work when calculating the cost of a full-blown Kubernetes environment. Be wary of the many flawed comparisons of fully managed or serverless offerings against self-managed Kubernetes: they often fail to account for the cost of staff and the opportunity cost of time lost to Kubernetes.
The second hidden cost is tied to the Kubernetes ecosystem. Embracing the world of Kubernetes often implies more than just adopting a container orchestration platform. It’s like setting foot on a vast continent: rich in features, and surrounded by a whole universe of ancillary tools, services and products from various vendors, all of which introduce costs of their own.
A good tool is not defined by its hype or popularity but by how well it solves your problems and fits into your ecosystem. In the landscape of cloud native applications, Kubernetes has understandably taken an outsized share of the conversation. However, I encourage teams to consider the trade-offs of different approaches made viable by solutions like OpenShift or Docker Swarm, or by serverless and managed services orchestrated through frameworks like Nitric.
In a follow-up post, I’ll explore an approach to creating cloud native apps without direct reliance on Kubernetes. I’ll dig into the process of building and deploying robust, scalable and resilient cloud native applications using infrastructure provisioned through managed services such as AWS Lambda, Google Cloud Run and Azure Container Apps.
This approach to developing applications for the cloud was the inspiration for Nitric, the cloud framework we are building that focuses on improving the experience for both developers and operations.
Nitric is an open source multilanguage framework for cloud native development designed to simplify the process of creating, deploying and managing applications in the cloud. It provides a consistent developer experience across multiple cloud platforms while abstracting and automating the complexities involved in configuring the underlying infrastructure.
For teams and projects that find direct interaction and management of Kubernetes unsuitable, whether due to budget constraints, limited resources or skill set, Nitric provides an avenue to harness the same advantages. Dive deeper into Nitric’s approach and share your feedback with us on GitHub.