Primer: A Developer's Guide to Deploying to Kubernetes

12 Mar 2020 9:52am, by Tiffany Jachja

Kubernetes was created to help developers ship and scale their applications. We know things aren't done until they're delivered. So to help you get there, here is a quick 411 on what developers need to know to deploy their applications to Kubernetes.

Getting into the Container

Containers package your application and everything it needs to run. This includes any application dependencies or files and the runtime environment for your application.

Containers, as the name suggests, are portable. You can spin up containers in different environments and on different infrastructure without hearing, "well, it works on my machine." Portability is also where containers differ from VMs: a VM virtualizes an entire machine, operating system included, while containers share the host's kernel and are therefore much lighter. VMs are like ships, and containers are like boats. You can put a boat on a ship, but you wouldn't be able to put a ship on a boat.

VMs vs Containers

Tiffany Jachja
Tiffany Jachja is a Developer Advocate at Harness. Before joining Harness, Tiffany was a consultant with Red Hat's App Dev consulting practice, where she used her experience to help customers build software applications for the cloud. In her spare time, she likes to go on walks with her cat Rico and blog about self-development.

A container is a running instance of an image. A container image contains the source code, libraries, and dependencies of your application. If your app uses ssh, then you would ensure the container image has ssh installed. Images are like templates, and you can add layers to them to add additional functionality to your containers.
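As a sketch of how layers work, here is a minimal Dockerfile, where each instruction adds a layer on top of the previous one (the base image, file names, and commands are illustrative, not from this article):

```dockerfile
# Base layer: a runtime environment for the application.
FROM node:12-alpine

WORKDIR /app

# Layer: dependency manifest, copied first so the dependency
# layer can be cached independently of source changes.
COPY package.json .

# Layer: installed application dependencies.
RUN npm install

# Layer: the application source code itself.
COPY . .

# Default command run when a container starts from this image.
CMD ["node", "server.js"]
```

Because layers are cached, rebuilding after a source-only change reuses the dependency layers and only rebuilds the final ones.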

Container images are stored in repositories hosted on a registry, which can be public or private, like Docker Hub. Typically, you pull an existing image down from a repository, or build your own container image, and use that to spin up a container.

The ecosystem of container technologies continues to grow. Docker and Buildah are two technologies that help build lightweight containers for your applications to scale effectively across your organization. Here’s a list of just 30 different container technologies from TechBeacon.

Shipping the Containers

Kubernetes is a tool that orchestrates and manages containers. Container platforms built around it also help build, deploy, and manage your containers. In such an ecosystem, you need to know a few extra things when working with containers.

Pods group your container(s) so they can run on your infrastructure. Pods are the smallest deployable units that Kubernetes creates and manages. Kubernetes assigns pods to run on the machines it manages, called nodes.
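A minimal pod manifest looks like the sketch below (the names, image, and port are illustrative):

```yaml
# A pod running a single container.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: my-app
      image: nginx:1.17
      ports:
        - containerPort: 80
```

Applying this with `kubectl apply -f pod.yaml` asks Kubernetes to schedule the pod onto one of the cluster's nodes.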

Pods can contain a single container (this is the most common use case) or multiple containers. An additional container deployed in the same pod as the main application container is called a sidecar container. The sidecar pattern ensures that containers share the same set of resources at the pod level.
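A sketch of the sidecar pattern: two containers in one pod sharing a volume, so the sidecar can read what the main container writes (the names, images, and log path are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  containers:
    # Main application container, writing logs to a shared volume.
    - name: app
      image: nginx:1.17
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
    # Sidecar container, tailing the shared log file.
    - name: log-shipper
      image: busybox:1.31
      command: ["sh", "-c", "tail -F /var/log/app/app.log"]
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
  volumes:
    # Shared scratch volume that exists for the life of the pod.
    - name: logs
      emptyDir: {}
```

Both containers also share the pod's network namespace, so they can reach each other over localhost.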

A DaemonSet is often considered as an alternative to the sidecar pattern. A DaemonSet is a Kubernetes resource that ensures an instance of a pod runs on every node in a cluster. The DaemonSet pattern consumes fewer resources than the sidecar pattern because you run only one instance per node, rather than one per pod.
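A minimal DaemonSet manifest might look like this sketch (the name, labels, and agent image are illustrative placeholders for a real node agent such as a log collector):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent
spec:
  # The selector must match the pod template's labels.
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      containers:
        - name: agent
          image: busybox:1.31
          # Placeholder workload; a real agent would collect
          # logs or metrics from the node it runs on.
          command: ["sh", "-c", "while true; do sleep 3600; done"]
```

When nodes join or leave the cluster, Kubernetes adds or removes pods so exactly one instance runs per node.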

Because pods act as a wrapper for your container(s), you can make them addressable using a Kubernetes Service, which gives a set of pods a stable network identity. You can also configure your container platform to specify how you want to build your images, and how to deploy and operationalize your application.
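A Service selects pods by label and routes traffic to them; a minimal sketch (names, labels, and ports are illustrative) looks like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  # Route traffic to any pod carrying this label.
  selector:
    app: my-app
  ports:
    - port: 80         # port the Service exposes inside the cluster
      targetPort: 8080 # port the pod's container listens on
```

Other pods in the cluster can then reach the application at a stable DNS name (`my-app`) even as individual pods come and go.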

What Happens After?

There are a couple of ways to ensure that your application performs well. Health checks come in two forms: a liveness probe checks whether a container is running, and a readiness probe determines whether a container is ready to serve requests. A readiness probe can be configured to perform an HTTP check, so you know the service is ready to receive traffic. See the Kubernetes documentation for all of the available probe types.
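As a sketch, both probes are configured per container in the pod spec; here both perform HTTP checks (the paths, port, and timings are illustrative):

```yaml
containers:
  - name: my-app
    image: my-app:1.0
    # Restart the container if this check fails repeatedly.
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 5
    # Only send the pod traffic once this check passes.
    readinessProbe:
      httpGet:
        path: /ready
        port: 8080
      periodSeconds: 5
```

A failing liveness probe causes Kubernetes to restart the container, while a failing readiness probe only removes the pod from Service endpoints until it recovers.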

My final guidance is related to performance. Each container running on a node consumes compute resources. If you notice a slowdown in performance, ensure your pods have enough CPU and memory allocated. Every pod is limited in how much memory and CPU it can consume while on a node, and in some cases, pods are terminated if they surpass a memory limit. A typical example of resource negligence is an instance of Jenkins that performs builds slowly because it has a CPU limit of 500m, or half a core.
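Requests and limits are set per container in the pod spec; a sketch using the Jenkins example above (the specific values shown are illustrative) looks like this:

```yaml
containers:
  - name: jenkins
    image: jenkins/jenkins:lts
    resources:
      # Guaranteed at scheduling time: the node must have this free.
      requests:
        cpu: 500m      # half a core
        memory: 512Mi
      # Hard caps: CPU use is throttled above the limit, and
      # exceeding the memory limit gets the container killed.
      limits:
        cpu: "1"
        memory: 1Gi
```

Setting requests too low causes noisy-neighbor contention, while setting limits too low causes the throttling or terminations described above.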

Conclusion

The future of technology will involve easy-to-use tools and platforms built on top of Kubernetes, letting developers develop and deploy code quickly. With this in mind, it's useful to understand proper Kubernetes terminology and concepts, even if you're not the one provisioning infrastructure or deploying applications. To learn more about container-native application development, consider checking out Twelve-Factor Apps. If you are interested in learning more Kubernetes concepts, check out this post.

Feature image via Pixabay.

This post is part of a larger story we're telling about Kubernetes.
