Run Stateful Containerized Workloads with Rancher Kubernetes Engine and Portworx

Rancher has built an installer, the Rancher Kubernetes Engine (RKE), that simplifies installing Kubernetes clusters in any environment. Based on my experience with a variety of tools and managed services, I have found RKE to be a lightweight, fast, and robust tool for configuring Kubernetes clusters. Whether it is a development environment with a couple of nodes or a secure production environment with a highly available control plane and multiple worker nodes, RKE comes in very handy.
Portworx is a container-native storage platform for running stateful workloads in production Kubernetes clusters. It augments Kubernetes primitives such as Persistent Volumes and StatefulSets with a robust, reliable, and highly available storage engine.
In this tutorial, we will explore how to use RKE to install a Kubernetes cluster with three worker nodes on Amazon Web Services (AWS), running the Portworx storage engine. This cluster infrastructure can be used to run relational databases, NoSQL databases, key/value stores, and other stateful applications.
There are three steps to the installation:
- Preparing your AWS account for Kubernetes
- Installing Kubernetes with RKE
- Installing Portworx in Kubernetes
Let’s get started with the first step of preparing and configuring your AWS account.
Step 1: Configure AWS for Rancher Kubernetes Engine
We need to configure an IAM policy with the right level of permissions for Amazon EC2, EBS, and ELB. This policy will be attached to an instance role that the master and worker nodes will assume. Portworx also requires permissions to create, describe, attach, and detach EBS volumes. We can safely combine these two sets of permissions into one role.
Before we create the IAM role, let’s define the trust policy that allows EC2 instances to assume it.
Create the below file and call it rke-px-trust-policy.json:
```json
{
    "Version": "2012-10-17",
    "Statement": {
        "Effect": "Allow",
        "Principal": { "Service": "ec2.amazonaws.com" },
        "Action": "sts:AssumeRole"
    }
}
```
Now, create another JSON file called rke-px-policy.json with content shown below.
```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "ec2:*",
                "elasticloadbalancing:*"
            ],
            "Resource": "*"
        }
    ]
}
```
To keep things simple, we are creating a policy that allows all permissions related to EC2 and ELB. In production environments, you want to tweak this for fine-grained access.
Assuming you have the AWS CLI installed and configured, run the following commands to create the role, attach the policy, and create the instance profile.
```shell
aws iam create-role --role-name rke-px-role --assume-role-policy-document file://rke-px-trust-policy.json
aws iam put-role-policy --role-name rke-px-role --policy-name rke-px-access-policy --policy-document file://rke-px-policy.json
aws iam create-instance-profile --instance-profile-name rke-px-ec2
aws iam add-role-to-instance-profile --instance-profile-name rke-px-ec2 --role-name rke-px-role
```
The last command completes the creation of an instance profile whose role EC2 instances can assume.
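To confirm everything is wired together, you can query the instance profile; the `--query` expression below is just one way to extract the attached role name.

```shell
# Sanity check: the instance profile should list rke-px-role
aws iam get-instance-profile \
  --instance-profile-name rke-px-ec2 \
  --query 'InstanceProfile.Roles[0].RoleName' \
  --output text
```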
The next step is to launch the EC2 instances that will act as the master and worker nodes of the Kubernetes cluster. For this walkthrough, we are using the t2.xlarge instance type and the Ubuntu 16.04 LTS AMI. Since we need enough space to install Portworx, configure the root EBS volume size to 20GB. Configure a security group that allows traffic across all ports. Again, in production, you need to be more restrictive in your approach.
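If you prefer the CLI over the console, the launch can be sketched as below. The AMI ID, key pair name, and security group ID are placeholders — substitute the values from your own account and region.

```shell
# Launch four Ubuntu 16.04 nodes with a 20GB root volume and the
# rke-px-ec2 instance profile attached (all IDs below are placeholders)
aws ec2 run-instances \
  --image-id ami-xxxxxxxx \
  --instance-type t2.xlarge \
  --count 4 \
  --key-name my-rke-key \
  --security-group-ids sg-xxxxxxxx \
  --iam-instance-profile Name=rke-px-ec2 \
  --block-device-mappings 'DeviceName=/dev/sda1,Ebs={VolumeSize=20}'
```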
Make sure you have the SSH key to log in to the instances. It is a critical requirement for RKE. It’s a good idea to rename your key to id_rsa and move it to ~/.ssh, the location where RKE looks for the private key.
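Assuming your downloaded key pair is called rke-px-key.pem (a hypothetical name — use whatever you named it), the setup looks like this:

```shell
# Move the private key to the default location RKE expects
mkdir -p ~/.ssh
mv rke-px-key.pem ~/.ssh/id_rsa
chmod 600 ~/.ssh/id_rsa   # SSH refuses keys with loose permissions
```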
Before going further, tag all the EC2 instances, the security group, and the VPC with Key = kubernetes.io/cluster/CLUSTERID and Value = RKE. This tag identifies all the resources that belong to the cluster.
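The tagging can be done in one command. The resource IDs below are placeholders — replace them with your instance, security group, and VPC IDs; CLUSTERID can be any identifier you choose.

```shell
# Tag the instances, security group, and VPC so the cloud provider
# integration can discover them (IDs are placeholders)
aws ec2 create-tags \
  --resources i-0aaaaaaaaaaaaaaaa sg-0bbbbbbbbbbbbbbbb vpc-0cccccccccccccccc \
  --tags Key=kubernetes.io/cluster/CLUSTERID,Value=RKE
```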
With the core infrastructure in place, we need to install only one software package on all the instances: Docker. You may want to use Ansible playbooks to automate this process. Either way, make sure that Docker Engine is running on all the machines.
The commands below install the latest version of Docker CE.
```shell
sudo apt-get remove docker docker-engine docker.io containerd runc
sudo apt-get update
sudo apt-get install -y \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo apt-key fingerprint 0EBFCD88
sudo add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) \
    stable"
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io
sudo usermod -a -G docker ubuntu
docker version
```
We are now ready to launch the Kubernetes cluster with RKE.
Step 2: Installing Kubernetes with RKE
RKE is a nifty CLI tool that runs on your local machine. Download the latest version from the GitHub releases page.
Rename the binary, make it executable, and move it to a directory included in $PATH. The commands below fetch the macOS (darwin) binary; pick the build that matches your operating system.
```shell
wget https://github.com/rancher/rke/releases/download/v0.1.17/rke_darwin-amd64
mv rke_darwin-amd64 rke
chmod +x ./rke
mv ./rke /usr/local/bin
```
Next, we need to create a YAML file that resembles an Ansible inventory file. It contains the list of instances and their roles. Keep the DNS names of the EC2 instances handy before creating this file.
Create a cluster.yml file and populate it with the EC2 instance public DNS names.
```yaml
---
cloud_provider:
  name: aws
nodes:
  - address: ec2-13-232-134-242.ap-south-1.compute.amazonaws.com
    user: ubuntu
    role:
      - controlplane
      - etcd
  - address: ec2-13-233-94-39.ap-south-1.compute.amazonaws.com
    user: ubuntu
    role:
      - worker
  - address: ec2-13-126-184-8.ap-south-1.compute.amazonaws.com
    user: ubuntu
    role:
      - worker
  - address: ec2-13-126-161-198.ap-south-1.compute.amazonaws.com
    user: ubuntu
    role:
      - worker
```
The first node, ec2-13-232-134-242.ap-south-1.compute.amazonaws.com, is designated as the master node. It runs the control plane and the etcd database. The rest of the nodes act as worker nodes.
If you want to experiment with the configuration, feel free to run rke config --name cluster.yml for an interactive version of the tool, which gives you a chance to modify many parameters.
With cluster.yml file in place, we are now ready to kick off the cluster installation.
```shell
rke up
```
That’s the only command you need to run. Sit back and watch the installation progress. In just a few minutes, the cluster becomes ready. RKE creates a kube_config_cluster.yml file in the current directory that can be used with kubectl.
Point kubectl to the cluster with the below command:
```shell
export KUBECONFIG=$PWD/kube_config_cluster.yml
```
We are all set to explore the cluster.
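A quick way to check that the cluster is healthy before moving on:

```shell
kubectl get nodes                  # all four instances should report Ready
kubectl get pods -n kube-system    # core add-ons deployed by RKE
```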
Step 3: Installing and Configuring Portworx
In the final step, we are going to install Portworx as the container-native storage layer.
For a detailed guide on installing Portworx in Kubernetes, refer to one of my previous tutorials.
Using the Portworx specification generator, we can create the YAML artifact that defines the required Kubernetes resources. Simply copy the spec and submit it to the RKE cluster.
```shell
kubectl apply -f 'https://install.portworx.com/?mc=false&kbver=1.13.4&b=true&s=%22type%3Dgp2%2Csize%3D20%22&md=type%3Dgp2%2Csize%3D150&c=px-cluster-64e89cf9-22ab-48e9-9ba0-c47b11c182df&stork=true&lh=true&st=k8s'
```
After a few minutes, Portworx should be up and running in our cluster. Verify the DaemonSet in the kube-system namespace.
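One way to verify the installation is shown below. The label and the pxctl path match the defaults of a standard Portworx DaemonSet install; adjust them if your spec differs.

```shell
# List the Portworx pods (the DaemonSet pods carry the label name=portworx)
kubectl get pods -n kube-system -l name=portworx

# Check the storage cluster status with pxctl from inside one of the pods
PX_POD=$(kubectl get pods -n kube-system -l name=portworx \
  -o jsonpath='{.items[0].metadata.name}')
kubectl exec "$PX_POD" -n kube-system -- /opt/pwx/bin/pxctl status
```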
You are now ready to install and run microservices in the brand new cluster. For a simple walkthrough of deploying a MEAN web app, refer to this tutorial.
Janakiram MSV’s Webinar series, “Machine Intelligence and Modern Infrastructure (MI2)” offers informative and insightful sessions covering cutting-edge technologies. Sign up for the upcoming MI2 webinar for a deep dive on performing blue/green deployments with Istio.
Portworx is a sponsor of The New Stack.
Feature image via Pixabay.