
Run Stateful Containerized Workloads with Rancher Kubernetes Engine and Portworx

Mar 22nd, 2019 3:00am by Janakiram MSV

Rancher has built an installer, the Rancher Kubernetes Engine (RKE), that simplifies installing Kubernetes clusters in any environment. Based on my personal experience with a variety of tools and managed services, I found RKE to be a lightweight, fast, and robust tool for configuring Kubernetes clusters. Whether it is a development environment with a couple of nodes or a secure production environment with a highly available control plane and multiple nodes, RKE comes in very handy.

Portworx is a container-native storage platform for running stateful workloads in production Kubernetes clusters. It augments Kubernetes primitives such as Persistent Volumes and StatefulSets with a robust, reliable, and highly available storage engine.

In this tutorial, we will explore how to use RKE to install a three-node Kubernetes cluster on Amazon Web Services (AWS) running the Portworx storage engine. This cluster can be used to run relational databases, NoSQL databases, key/value stores, and other stateful applications.

There are three steps to the installation:

  1. Preparing your AWS account for Kubernetes
  2. Installing Kubernetes with RKE
  3. Installing Portworx in Kubernetes

Let’s get started with the first step of preparing and configuring your AWS account.

Step 1: Configure AWS for Rancher Kubernetes Engine

We need to configure an IAM policy that has the right level of permissions to deal with Amazon EC2, EBS, and ELB. This policy will be attached to an instance role that the master and worker nodes will assume. Portworx also requires permissions to create, describe, attach, and detach EBS volumes. We can safely combine these two permissions into one role.

Before we create an IAM role, let’s create a trust policy that is required to attach the policy to a resource.

Create the below file and call it rke-px-trust-policy.json:
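What follows is a minimal sketch of such a trust policy, using the standard EC2 service principal so that instances can assume the role:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": { "Service": "ec2.amazonaws.com" },
          "Action": "sts:AssumeRole"
        }
      ]
    }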


Now, create another JSON file called rke-px-policy.json with content shown below.
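A broad policy of roughly this shape covers all EC2 and ELB actions, which also includes the EBS volume operations Portworx needs:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [
            "ec2:*",
            "elasticloadbalancing:*"
          ],
          "Resource": "*"
        }
      ]
    }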


To keep things simple, we are creating a policy that allows all permissions related to EC2 and ELB. In production environments, you want to tweak this for fine-grained access.

Assuming you have the AWS CLI installed and configured, run the following commands to create the policy and the instance profile.
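A sketch of those commands is shown below; the names rke-px-policy, rke-px-role, and rke-px-profile are just placeholders, and <ACCOUNT_ID> stands for your AWS account ID:

    # Create the managed policy from the JSON document
    aws iam create-policy --policy-name rke-px-policy \
        --policy-document file://rke-px-policy.json

    # Create the role with the trust policy and attach the managed policy to it
    aws iam create-role --role-name rke-px-role \
        --assume-role-policy-document file://rke-px-trust-policy.json
    aws iam attach-role-policy --role-name rke-px-role \
        --policy-arn arn:aws:iam::<ACCOUNT_ID>:policy/rke-px-policy

    # Create an instance profile and add the role to it
    aws iam create-instance-profile --instance-profile-name rke-px-profile
    aws iam add-role-to-instance-profile --instance-profile-name rke-px-profile \
        --role-name rke-px-role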


The result is an instance profile backed by a role that EC2 instances can assume.

The next step is to launch the EC2 instances that will act as the master and worker nodes of the Kubernetes cluster. For this walkthrough, we are using the t2.xlarge instance type and an Ubuntu 16.04 LTS AMI. Since we need enough space to install Portworx, configure the root EBS volume size to 20GB. Configure a security group that allows traffic across all the ports. Again, in production, you need to be more restrictive in your approach.
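If you prefer the CLI over the console, a run-instances call along these lines launches the three nodes; the AMI, key pair, subnet, and security group IDs are placeholders for values from your own account:

    aws ec2 run-instances \
        --image-id <UBUNTU_1604_AMI_ID> \
        --instance-type t2.xlarge \
        --count 3 \
        --key-name <KEY_PAIR_NAME> \
        --security-group-ids <SECURITY_GROUP_ID> \
        --subnet-id <SUBNET_ID> \
        --iam-instance-profile Name=rke-px-profile \
        --block-device-mappings '[{"DeviceName":"/dev/sda1","Ebs":{"VolumeSize":20}}]'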

Make sure you have the SSH key to log into the instances; it is a critical requirement for RKE. It’s a good idea to rename your key to id_rsa and move it to ~/.ssh, which is the location where RKE looks for the private key.
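Assuming the key pair was downloaded as rke-nodes.pem (a placeholder name), that boils down to:

    cp rke-nodes.pem ~/.ssh/id_rsa
    chmod 600 ~/.ssh/id_rsa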

Before going further, tag all the EC2 instances, the security group, and the VPC with the key kubernetes.io/cluster/CLUSTERID and the value RKE. This tag identifies all the resources involved in the cluster.
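The tags can be applied from the console or with a single create-tags call; the resource IDs below are placeholders:

    aws ec2 create-tags \
        --resources <INSTANCE_ID_1> <INSTANCE_ID_2> <INSTANCE_ID_3> <SECURITY_GROUP_ID> <VPC_ID> \
        --tags Key=kubernetes.io/cluster/CLUSTERID,Value=RKE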

With the core infrastructure in place, we need to install only one software package on all the instances: Docker. You may want to use Ansible playbooks to automate this process, but make sure that the Docker Engine is running on all the machines.

The commands below install the latest version of Docker CE.
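One possible approach on Ubuntu 16.04 is Docker's convenience script, run on each node (ubuntu is the default user on Ubuntu AMIs):

    curl -fsSL https://get.docker.com -o get-docker.sh
    sudo sh get-docker.sh
    # Allow the SSH user to talk to the Docker daemon, which RKE requires
    sudo usermod -aG docker ubuntu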


We are now ready to launch the Kubernetes cluster with RKE.

Step 2: Installing Kubernetes with RKE

RKE is a nifty CLI tool that runs on your local machine. Download the latest version from the GitHub releases page.

Rename the binary and move it to a directory included in your $PATH.
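On a Linux workstation, for example, that looks like this (the downloaded binary name varies by platform):

    chmod +x rke_linux-amd64
    sudo mv rke_linux-amd64 /usr/local/bin/rke
    rke --version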


Next, we need to create a YAML file that looks like an Ansible inventory file. It contains the list of instances and their roles. Keep the public DNS names of the EC2 instances handy before creating this file.

Create a cluster.yml file and populate it with the EC2 instance public DNS names.
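A sketch of that file is shown below. The first address is the instance used throughout this walkthrough, the worker addresses are placeholders for your own instances, ubuntu is the default SSH user on Ubuntu AMIs, and the cloud_provider block enables the AWS cloud provider that uses the tags applied earlier:

    nodes:
      - address: ec2-13-232-134-242.ap-south-1.compute.amazonaws.com
        user: ubuntu
        role: [controlplane, etcd]
      - address: <WORKER_1_PUBLIC_DNS>
        user: ubuntu
        role: [worker]
      - address: <WORKER_2_PUBLIC_DNS>
        user: ubuntu
        role: [worker]

    cloud_provider:
      name: aws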


The first node, ec2-13-232-134-242.ap-south-1.compute.amazonaws.com, is designated as the master node. It runs the control plane and the etcd database. The rest of the nodes act as worker nodes.

If you want to experiment with the configuration, feel free to run rke config --name cluster.yml for an interactive version of the tool, which gives you a chance to modify many parameters.

With the cluster.yml file in place, we are now ready to kick off the cluster installation.
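That kickoff is a single command, run from the directory containing cluster.yml:

    rke up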


That’s the only command you need to run. Sit back and watch the installation progress. In just a few minutes, the cluster becomes ready. RKE creates a kube_config_cluster.yml file in the current directory that can be used with kubectl.

Point kubectl to the cluster with the below command:
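One way to do that is to export the generated kubeconfig and list the nodes:

    export KUBECONFIG=$PWD/kube_config_cluster.yml
    kubectl get nodes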


We are all set to explore the cluster.

Step 3: Installing and Configuring Portworx

In this final step, we are going to install Portworx as the container-native storage layer.

For a detailed guide on installing Portworx in Kubernetes, refer to one of my previous tutorials.

Using the Portworx specification generator, we can create the YAML artifact that defines the required Kubernetes resources. Simply copy the spec and submit it to the RKE cluster.
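Assuming the copied spec is saved locally as px-spec.yaml (a placeholder name), submitting it is a single command:

    kubectl apply -f px-spec.yaml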


After a few minutes, Portworx should be up and running in our cluster. Verify the DaemonSet in the kube-system namespace.
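A quick check looks at the DaemonSet and its pods; portworx is the name the spec generator uses by default:

    kubectl get daemonset portworx -n kube-system
    kubectl get pods -n kube-system -l name=portworx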

You are now ready to install and run microservices in the brand new cluster. For a simple walkthrough of deploying a MEAN web app, refer to this tutorial.

Janakiram MSV’s Webinar series, “Machine Intelligence and Modern Infrastructure (MI2)” offers informative and insightful sessions covering cutting-edge technologies. Sign up for the upcoming MI2 webinar for a deep dive on performing blue/green deployments with Istio. 

Portworx is a sponsor of The New Stack.

Feature image via Pixabay.
