Take DigitalOcean Kubernetes for a Cruise

5 Apr 2019 10:32am

DigitalOcean is one of the latest public cloud providers to jump on the managed Kubernetes bandwagon. Given its simplicity and minimalist approach to managing infrastructure, it’s not surprising to see DigitalOcean following the same philosophy for its Containers as a Service (CaaS) offering.

In this tutorial, we will take a look at the workflow involved in launching a DigitalOcean Kubernetes cluster and deploying workloads on it.

Launching the Kubernetes Cluster

At present, DigitalOcean offers two versions of Kubernetes: 1.12 and 1.13. The service is currently in limited availability, which means it may not be available in all regions and is not yet considered production-ready.

Let’s go ahead and launch a three-node cluster through the DigitalOcean Control Panel.

It all starts with the selection of a specific Kubernetes version followed by the region.

The next step is to create a node pool with droplets that act as worker nodes. DigitalOcean offers three classes of nodes: standard, flexible, and CPU-optimized. Standard nodes are generic droplets available in basic to advanced configurations. Flexible droplets carry a flat fee, but the amount of memory and the number of CPU cores vary with each configuration. CPU-optimized droplets are meant for running compute-intensive workloads that demand beefy VMs.

If your deployment demands a combination of these droplets, you can create separate node pools for each configuration and add them later. Each node pool can be selectively used with a specific workload.

Finally, give a name to your cluster and push the button.

The same cluster can be launched from the terminal with the nifty doctl CLI:
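A representative invocation is sketched below; the cluster name, region, droplet size, and version slug are all illustrative values, not ones taken from the article.

```shell
# Illustrative values; valid version slugs for your account can be
# listed with: doctl kubernetes options versions
doctl kubernetes cluster create do-k8s-cluster \
  --region nyc1 \
  --version 1.13.5-do.0 \
  --count 3 \
  --size s-2vcpu-4gb
```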

Once the cluster is up and running, download the kubeconfig file and point kubectl at it.
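With doctl, this is a one-liner; the cluster name below is an illustrative placeholder.

```shell
# Replace do-k8s-cluster with the name of your cluster
doctl kubernetes cluster kubeconfig save do-k8s-cluster
```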

This command merges the cluster’s configuration into the ~/.kube/config file.

Verify the cluster by running a couple of standard kubectl commands.
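For example, listing the nodes and the control plane endpoints confirms that kubectl can reach the cluster:

```shell
kubectl get nodes
kubectl cluster-info
```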

Deploying and Scaling a Workload

With the cluster in place, let’s go ahead and deploy a MEAN web application. The MongoDB Pod will be backed by a block storage volume for persistence. The stateless web application is deployed as a Kubernetes Deployment that can be scaled in and out.

DigitalOcean Kubernetes comes with its own storage class for block storage. We will use that to create a Persistent Volume Claim (PVC).

Create the PVC pointing to the storage class.
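A minimal claim sketch, assuming DigitalOcean’s do-block-storage class; the 5Gi request is an illustrative size, not one from the article.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: do-block-storage
  resources:
    requests:
      storage: 5Gi
```

Save the manifest and apply it with kubectl apply -f.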

Launch the MongoDB Pod with the definition below. Notice that the Pod’s volume claim references the mongo-pvc created in the previous step.
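A sketch of such a Pod definition; the Pod name, label, and image tag are assumed values, while /data/db is the official MongoDB image’s default data path.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mongo
  labels:
    app: mongo
spec:
  containers:
    - name: mongo
      image: mongo:3.6
      ports:
        - containerPort: 27017
      volumeMounts:
        - name: mongo-data
          mountPath: /data/db
  volumes:
    - name: mongo-data
      persistentVolumeClaim:
        claimName: mongo-pvc
```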

Expose the MongoDB Pod through a ClusterIP service.
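A service sketch along these lines exposes MongoDB inside the cluster on its default port 27017; the app: mongo selector is an assumed label on the MongoDB Pod.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  type: ClusterIP
  selector:
    app: mongo
  ports:
    - port: 27017
      targetPort: 27017
```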

Now, let’s deploy the web app built with Node.js. The Deployment has three replicas.
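A sketch of the Deployment; the container image is a placeholder for the tutorial’s Node.js web app, and port 3000 is an assumed application port.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          # Placeholder image; substitute your own Node.js app image
          image: example/mean-web:latest
          ports:
            - containerPort: 3000
```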

To access the web app, we will expose it through an external load balancer.
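A Service of type LoadBalancer along these lines triggers the creation of a DigitalOcean Load Balancer; the app: web selector and target port 3000 are assumed to match the Deployment.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 3000
```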

We now have one MongoDB Pod and three Pods running the web application, exposed through a ClusterIP service and a LoadBalancer service, respectively.
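This can be verified with kubectl; note that the web service’s EXTERNAL-IP column may read pending for a minute or two while the load balancer is being provisioned.

```shell
kubectl get pods
kubectl get svc
```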

The external IP corresponds to a DigitalOcean Load Balancer created dynamically by Kubernetes. You can verify this by accessing the load balancers under the Network section of the control panel.

DigitalOcean has done a great job with the integration of its core infrastructure with Kubernetes. The storage class for block storage and load balancer integration make it extremely simple to deploy and manage both stateful and stateless workloads.

Janakiram MSV’s Webinar series, “Machine Intelligence and Modern Infrastructure (MI2)” offers informative and insightful sessions covering cutting-edge technologies. Sign up for the upcoming MI2 webinar to learn how to use Azure IoT Edge. 

The Cloud Native Computing Foundation, which manages Kubernetes, is a sponsor of The New Stack.

Feature image by Hasan Albari from Pexels.
