How Developers Can Make It Past Kubernetes’ Infrastructure Hump
One can assume most developers would prefer to spend their talent and creativity designing great code. The rest of the work involved in the deployment and post-deployment stages can be … less fulfilling. When it comes to Kubernetes — its power and magic aside — developers moving their production pipeline to a Kubernetes platform will, unfortunately, spend a great deal of time on the not-so-fun stuff. This holds especially true in the beginning stages of Kubernetes adoption, when adapting to its stateless and containerized infrastructure. However, there are ways to make the switch less painful.
DevOps or Die
With the exception of container experts and those who have tested Kubernetes on, for example, their laptops before their organization makes the shift, developers face a learning curve when learning to work with the new infrastructure of a Kubernetes production pipeline. This is where a well-functioning DevOps practice is especially critical, coordinated with Git as the repository and the main means of shared access to orchestrate constant collaboration among all stakeholders. In other words, developers, operations and any other DevOps team members cannot go it alone without the constant input of all involved.
“For most teams, they need to go in with eyes wide open that a shift to Kubernetes represents a major re-platforming that needs to be embraced throughout the organization,” Joe Duffy, CEO and co-founder of infrastructure-as-code platform supplier Pulumi, said. “For this re-platforming to succeed, teams need to invest in automation that enables developers and infrastructure operators to work better together.”
Developers’ increasing use of Kubernetes is also part of a long-term trend that has been taking place over the past 15 years “in how we think about both development and infrastructure,” which has accounted for the widespread adoption of Git, Ashish Kuthiala, director of marketing at GitLab, said. “Parallel to these developments, Git emerged and established itself as the de facto SCM system for the entire industry, and the workflow tools eventually caught up to satiate developers,” Kuthiala said.
“Kubernetes ‘done right’ can eliminate a big part of developer grievances, but on the flipside, ‘Kubernetes done wrong’ will come back to haunt you with a vengeance” — Ashish Kuthiala
With the newfound ability to spin up resources on-demand, those in the operations space have begun to define their infrastructure as code, allowing for changes to be tracked in the same way a developer would track a code change, Kuthiala said. “However, with the infrastructure community, fractured as it is, it is almost impossible to reach a consensus on what best practices are in that space, resulting in a myriad of choices and philosophies to consider,” Kuthiala said. “Kubernetes is meant to be the unifying factor among all of this confusion: an infrastructure abstraction layer which allows for maximum portability with minimal change, that is, as long as you have already adopted the microservices approach.”
The Tool Chest
Making the right choice of toolsets, of course, is a major challenge. The right choices will also vary, according to existing infrastructure, in-house expertise and other factors. When migrating an existing monolith application, for example, the monolith needs to be “sliced into multiple microservices first,” Nico Meisenzahl, a senior consultant at panagenda who also writes for GitLab, said. Meisenzahl recommends applying the now-famous 12-Factor App method: any supporting functions, such as log management, proxies or init scripts, should not be part of your application container image, he said. Instead, he recommends Kubernetes-specific patterns, such as sidecar and init containers, which run these processes in the same pod. Tools like Telepresence, Tilt and Skaffold are also worthwhile to consider to “develop directly on Kubernetes,” Meisenzahl said.
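As a rough sketch of the sidecar and init-container patterns Meisenzahl describes, a pod spec can pair a lean application container with a log-shipping sidecar and an init container that prepares configuration before the app starts. All names, images and paths below are illustrative, not from the article:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-app                 # hypothetical application name
spec:
  initContainers:
    - name: init-config          # runs to completion before the app starts,
      image: busybox:1.36        # e.g. to seed configuration files
      command: ["sh", "-c", "cp /seed/* /config/"]
      volumeMounts:
        - name: config
          mountPath: /config
  containers:
    - name: app                  # the application image stays lean: no proxies,
      image: example/web-app:1.0 # log shippers or init scripts baked in
      volumeMounts:
        - name: config
          mountPath: /etc/app
    - name: log-shipper          # sidecar handles log management in the same pod
      image: fluent/fluent-bit:2.2
  volumes:
    - name: config
      emptyDir: {}
```

Because all three containers share the pod's volumes and network namespace, the supporting processes travel with the application without being built into its image.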
Kubernetes’ statelessness also makes deployments even trickier, calling for certain tools to help. Rajashree Mandaogane, a software engineer at Rancher Labs, for example, discovered and now relies on Helm’s “Chart Development Tips and Tricks” to help “solve this issue.” “Since pods are ephemeral, you want to launch a deployment that manages the pod’s lifecycle,” Mandaogane said. The pods defined in a deployment should be updated only when the deployment’s podSpec changes, Mandaogane said. “I was once working on a project that required me to insert config files in a pod using a config map,” Mandaogane said. “After some time I had to update the config map. But that didn’t affect the deployment’s podSpec in any way, so the pods didn’t get the updated values.”
Using the Helm-recommended process “ensures the deployment’s pods get updated on updating the config map or secret being referenced and is pretty helpful,” Mandaogane said.
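The trick from that Helm guide is to hash the config map's rendered template into a podSpec annotation: when the config map changes, the checksum changes, the podSpec changes, and the deployment rolls its pods. A minimal sketch from a deployment template (the template path is illustrative):

```yaml
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        # Re-rendering the configmap template changes this hash,
        # which changes the podSpec and triggers a rollout.
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
```

Without the annotation, editing only the config map leaves the podSpec byte-for-byte identical, which is exactly why Mandaogane's pods never picked up the new values.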
Container Structures Are Better, But…
In a move to remove some of the more mundane complexities, Docker developers have recently made some huge improvements to, among other things, runtime and container repositories. But while the Docker runtime and container repositories have made it easier to create new development and test environments and to generally package new code, as mentioned above, there is still a need “to worry about testing this code under production conditions and ultimately supporting a production environment that comes with different runtime parameters compared to dev and test,” Torsten said.
But while the Kubernetes scheduler certainly offers an excellent deployment target for Docker containers and has all of the levers and capabilities to overcome these differences between application environments, the Kubernetes platform is also much less forgiving when developers and operators cut corners to “quickly get the code pushed out on time,” Torsten said.
“The key lesson here is that Kubernetes ‘done right’ can eliminate a big part of developer grievances, but on the flipside, ‘Kubernetes done wrong’ will come back to haunt you with a vengeance, as Kubernetes environments are so much more dynamic compared to VMs that finding the needle in the haystack is exponentially more difficult than in a hypervisor environment. As a developer, this means that code is only complete when it includes a complete set of automation code for everything from the direct runtime requirements to monitoring critical KPIs, scaling and upgrading the app or service, and defining key parameters to optimally match the application with Kubernetes clusters and pods,” Torsten said. “In short, Kubernetes is a construction kit, not a ready solution, but if you put in the work and avoid quick manual workarounds for ad hoc problems, you are on your way to unlocking a lot of this lost 50% of productivity in your day.”
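A “complete set of automation code” in this sense starts with telling Kubernetes how to health-check and size the workload. The fragment below is a hedged illustration of the kind of container-level parameters being described; the image, paths, ports and thresholds are all hypothetical:

```yaml
containers:
  - name: app
    image: example/web-app:1.0   # hypothetical image
    resources:
      requests:                  # tells the scheduler how to place the pod
        cpu: 250m
        memory: 256Mi
      limits:                    # caps the container so it matches its cluster
        cpu: "1"
        memory: 512Mi
    livenessProbe:               # restart the container if it hangs
      httpGet:
        path: /healthz
        port: 8080
      periodSeconds: 10
    readinessProbe:              # only route traffic once the app is ready
      httpGet:
        path: /ready
        port: 8080
      initialDelaySeconds: 5
```

Omitting these is exactly the kind of “quick manual workaround” that works in dev and comes back to haunt a production cluster.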
Ultimately, developers, of course, will continue to think of Kubernetes as a means to an end on which code is developed, while hopefully, the infrastructure challenges developers face when getting started will eventually become a lot easier to overcome. “Whenever you have an application that makes some sort of income or does something positive for your bottom line, you don’t really care about what the underlying platform is. And on an application developer level, you don’t really care what that process is, either — you just care that your application can do whatever it needs to be successful,” Bryan Liles, a senior staff engineer at VMware, said. “So, what that means is that when developers are using Kubernetes, their mindset isn’t for solving all these operational problems — not that they’re unimportant — it’s more of a distraction.”
GitLab and VMware are sponsors of The New Stack.
Feature image via Pixabay.