This post, the first in a two-part series, explores the developer experience of Kubernetes. Check back next Monday for the second part.
The mantra has been drilled into developers: Kubernetes can offer many production teams unprecedented opportunities to scale, to collaborate and to speed up software deployment. But even as the platform continues to find its way into mainstream adoption, many organizations have yet to even adopt containers (K8s is, after all, a way to orchestrate deployments on containers).
But for those organizations positioned to take advantage of the tremendous benefits Kubernetes offers (beyond its pod-based conceptual magic, its underlying structure is so simple that many scratch their heads wondering why no one thought of it before), there are caveats developers should heed. Unfortunately, Kubernetes can become a deployment nightmare for the very development teams it should otherwise fit perfectly.
In this post, we look at some of the common pitfalls development teams can avoid when shifting to Kubernetes.
Plan Your Toolset in Advance
The shift to Kubernetes is very much a DevOps-centric project, as the move involves the operations folks at least as much as the developers. For both developers and operations, a key issue is choosing among the literally hundreds of open source toolsets on offer for Kubernetes platforms. One common thread in developer requirements is avoiding manual steps in production pipelines, as well as avoiding having to take on operations-related tasks.
Code is only complete for the developer, for example, when it includes a complete set of automation code for everything from the direct runtime requirements to monitoring critical KPIs, scaling and upgrading the app or service, and defining key parameters to optimally match the application with Kubernetes clusters and pods, said Torsten Volk, an analyst for Enterprise Management Associates (EMA). “In short, Kubernetes is a construction kit, not a ready solution, but if you put in the work and avoid quick manual workarounds for ad hoc problems, you are on your way to unlocking a lot of that lost 50% of productivity in your day.”
Indeed, most developers obviously “really just want to get containerized apps up and running,” Joe Duffy, CEO and co-founder of infrastructure-as-code platform supplier Pulumi, said. In fact, helping organizations set up container infrastructures was a main motivation for starting Pulumi. “What’s not obvious at the outset is that although Kubernetes helps to accomplish this, it is a major commitment and is often far more difficult than it first appears.”
As Liles noted, at the end of the day, developers care about whether they have a binary in a container image and want to run anywhere from one to over 100 copies of it in a cluster or clusters. And that’s largely it. But once again, “getting started as a developer in this space is not easy,” Liles said.
The underlying frameworks on offer, such as Docker and Helm, have certainly made it easier to create images. VMware, for its part, has been developing tool kits to do a better job of getting developers set up. “What we’re doing now is we’re looking at ways to create a better Kubernetes tool kit,” Liles said. “And what that kit should do is that when I’m starting Kubernetes, I have a certain set of tools that allow me to be productive instantly” as a developer.
The kubectl Connection
Decoupling and API-based access to microservices are quintessential aspects of Kubernetes, and the kubectl command-line interface is one tool you will have to master before deploying on Kubernetes. While a solid understanding of developing platforms on and for containers, usually by way of Docker, is essential, prepare to know and use kubectl. Indeed, it is hard to learn precisely because it is so powerful, and so practical.
Before beginning to rely on the kubectl interface, Rajashree Mandaogane, a software engineer at Rancher Labs, parlayed her engineering-level understanding of and experience with containers to learn how only the kube-apiserver communicates with the cluster’s data store, etcd. “This makes sure that all components get consistent data by querying etcd through the kube-apiserver. I studied how Kubernetes follows the spec vs. status approach, then started writing my own custom resource definitions (CRDs) and custom controllers around them,” Mandaogane said.
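The spec vs. status split Mandaogane describes is visible in any custom resource definition: users write the desired state into `spec`, and the controller records observed state into `status`. A minimal sketch of such a CRD follows; the `widgets.example.com` group and the `replicas`/`readyReplicas` fields are illustrative assumptions, not from the article:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  names:
    kind: Widget
    plural: widgets
    singular: widget
  scope: Namespaced
  versions:
  - name: v1
    served: true
    storage: true
    subresources:
      status: {}          # lets the controller update status separately from spec
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:            # desired state, written by users
            type: object
            properties:
              replicas:
                type: integer
          status:          # observed state, written by the custom controller
            type: object
            properties:
              readyReplicas:
                type: integer
```

A custom controller then watches `Widget` objects through the kube-apiserver and works to reconcile `status` toward `spec`.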
However, while kubectl can be used to retrieve specific objects by name, or a list of one kind of object belonging to a namespace or across all namespaces, “you may sometimes want to look for objects having a certain value in a field,” Mandaogane said. The problem is that “you can’t execute field-based queries.”
As a solution, Mandaogane offered the following tip, which she uses frequently during debugging: Let’s say you want to look for pods with an nginx image; you could run `kubectl get pods -o yaml | grep nginx`. This tells you how many pods have that image, but not the pod names themselves. You can improve the output with `kubectl get pods -o yaml | grep -A20 -B20 nginx`. The `-A` and `-B` flags control the number of lines included in the output after and before the line containing “nginx.”
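The grep trick can be tried locally without a cluster. The manifest below is a made-up stand-in for live `kubectl get pods -o yaml` output, with illustrative pod and image names:

```shell
# Stand-in for `kubectl get pods -o yaml` output, saved locally.
cat > /tmp/pods.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: web-frontend
spec:
  containers:
  - name: web
    image: nginx:1.25
---
apiVersion: v1
kind: Pod
metadata:
  name: batch-worker
spec:
  containers:
  - name: worker
    image: busybox:1.36
EOF

# A bare grep finds the matching image line but drops the pod name:
grep nginx /tmp/pods.yaml
#     image: nginx:1.25

# Context lines (-B = lines before the match) pull metadata.name back in:
grep -B4 nginx /tmp/pods.yaml
```

Against a real cluster, a jsonpath query such as `kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}'` prints pod name and image side by side, which sidesteps the counting problem, though jsonpath still cannot express arbitrary field filters.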
Get Your Git Right
Obvious to developers, the commandment “thou shalt know and master Git” holds true for Kubernetes. But since Kubernetes remains a relatively new platform, there is work to be done, both in terms of a learning curve for developers and for the industry.
“The problem now is that we are currently waiting for our tools to catch up to this new paradigm, much like when Git was first introduced. Kubernetes represents a shift not only in how applications run but how they are written in the first place,” said Ashish Kuthiala, director of marketing at GitLab. “And any cloud-native development team’s most important question to ask when choosing a hosted Git solution should be: ‘Does this fit our new workflow?’ rather than ‘Can I make it fit our new workflow?'”
Among the common pitfalls:
- Complexity: Kubernetes is complex to install, integrate and operate. There is no way around this other than education.
- Automation mishaps: Automation isn’t perfect at understanding human intentions (see the 2017 S3 outage), yet it is core to production application deployments on Kubernetes.
- Stateful applications are difficult as you really don’t want to run your database in a container.
- Storage and networking approaches can (and usually do) vary between cloud providers, leading to portability and resiliency concerns.
- Scaling deployments while under load can be challenging, as you risk hard-locking the scheduler.
- Applications need to tie in more health checks to ensure the microservices fabric is intact.
- Many organizations want to add more complexity via service meshes.
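The health-check point above typically takes the form of liveness and readiness probes in the pod spec. A minimal sketch follows; the container name, image, and `/healthz` endpoint are illustrative assumptions, and the app must actually serve that path:

```yaml
# Fragment of a pod spec wiring a container into Kubernetes health checking.
containers:
- name: web
  image: nginx:1.25
  ports:
  - containerPort: 80
  readinessProbe:            # gates traffic until the pod can serve requests
    httpGet:
      path: /healthz
      port: 80
    initialDelaySeconds: 5
    periodSeconds: 10
  livenessProbe:             # restarts the container if it stops responding
    httpGet:
      path: /healthz
      port: 80
    initialDelaySeconds: 15
    periodSeconds: 20
```

Readiness failures remove the pod from service endpoints; liveness failures trigger a container restart, which is what keeps the microservices fabric intact under partial failure.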
History in the Making
Kubernetes can offer development teams tremendous benefits. Developers can take advantage of Kubernetes’ decoupling capabilities and the many APIs that exist for accessing microservices, which entire teams can work on separately, to name just a few. But again, developers are also waking up to the risks Kubernetes can pose. More caveats, and almost invariably more security vulnerabilities, will come to light in the future, but as they are communicated, Kubernetes will become that much more stable and reliable, and eventually, more powerful.
In other words: Stay tuned.
VMware and GitLab are sponsors of The New Stack.
Feature image via Pixabay.