Tutorial: Apply the Sidecar Pattern to Deploy Redis in Kubernetes

In the last Kubernetes tutorial, we explored the concepts of node and pod affinity/anti-affinity to ensure that relevant pods are co-located or evenly distributed in the cluster. In this installment, I will demonstrate how to leverage the sidecar pattern to package, deploy, and scale two containers as a unit.
The way the Pod is designed in Kubernetes makes it an ideal choice for implementing the sidecar pattern by co-locating two containers. A Pod can contain one or more containers packaged as a unit of deployment.
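The sidecar pattern in its simplest form is just a Pod spec with two containers. Because both containers share the Pod's network namespace, the application can reach the cache on localhost. A minimal sketch (the names and the app image below are illustrative, not from this tutorial):

```yaml
# Minimal sidecar Pod sketch: both containers share the Pod's
# network namespace, so the app reaches the cache at 127.0.0.1:6379.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-cache     # illustrative name
spec:
  containers:
  - name: app
    image: example/app     # placeholder application image
  - name: cache            # the sidecar container
    image: redis
```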
In this tutorial, we will deploy a microservices application built with MySQL, Redis, and Python/Flask, as depicted in the illustration below.
MySQL Deployment will use the concept of node affinity to make use of the SSD disk attached to one of the nodes. For a detailed walkthrough of node and pod affinity, refer to the previous tutorial.
We will then create a multicontainer Pod that has the stateless web API and a Redis container that caches the frequently accessed rows.
Deploying the MySQL Pod
Let’s use a node affinity rule to target the node in the GKE cluster that has an SSD disk attached. Refer to the first step of this tutorial to launch the cluster and attach an SSD disk.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  ports:
  - port: 3306
    name: mysql
    targetPort: 3306
  selector:
    app: mysql
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: disktype
                operator: In
                values:
                - ssd
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "password"
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        hostPath:
          path: /mnt/data
```
Save the spec as db.yaml and apply it:

```shell
kubectl apply -f db.yaml
```
Defining a Multicontainer Pod for Redis and the Web App
Since we want to ensure that every web Pod gets its own Redis container, we will define a multicontainer Deployment. This gives us the flexibility of scaling both containers in tandem through the Pod.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
  labels:
    app: web
spec:
  ports:
  - port: 80
    name: http
    targetPort: 5000
  selector:
    app: web
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: web
  replicas: 3
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: redis
        image: redis
        ports:
        - containerPort: 6379
          name: redis
          protocol: TCP
      - name: web-app
        image: janakiramm/py-red
        env:
        - name: REDIS_HOST
          value: "localhost"
```
Notice that the Pod template in the Deployment spec has two container images — the web app and Redis.
Let’s apply the spec to schedule three instances of the Pod.
```shell
kubectl apply -f web.yaml
```
Each web Pod has two containers, which is visible in the READY status of 2/2 reported by kubectl get pods.
Run the kubectl describe command to inspect one of the Pods.
```shell
kubectl describe pod web-f46bc87dc-dzvwh
```
The screenshot shows partial output from the kubectl describe command.
Verifying the Cache
Let’s test if the cache is adding any value to the application’s performance.
Get the IP address of the load balancer with the commands below:
```shell
export HOST_IP=$(kubectl get services -l app=web -o jsonpath="{.items[0].status.loadBalancer.ingress[0].ip}")
export HOST_PORT=80
```
Initialize the database and insert test data into MySQL.
```shell
curl http://$HOST_IP:$HOST_PORT/init
```
```shell
curl -i -H "Content-Type: application/json" -X POST -d '{"uid": "1", "user":"John Doe"}' http://$HOST_IP:$HOST_PORT/users/add
curl -i -H "Content-Type: application/json" -X POST -d '{"uid": "2", "user":"Jane Doe"}' http://$HOST_IP:$HOST_PORT/users/add
curl -i -H "Content-Type: application/json" -X POST -d '{"uid": "3", "user":"Bill Collins"}' http://$HOST_IP:$HOST_PORT/users/add
curl -i -H "Content-Type: application/json" -X POST -d '{"uid": "4", "user":"Mike Taylor"}' http://$HOST_IP:$HOST_PORT/users/add
```
When we access the data, it is retrieved from the database the first time. The Redis container then caches it, and subsequent requests are served from the cache.
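The caching logic inside the web app follows the classic cache-aside pattern: try the cache first, fall back to the database on a miss, and populate the cache on the way out. A minimal sketch of that flow, with plain dicts standing in for Redis and MySQL (the actual application code is in the GitHub repo linked below, and would use real client libraries instead):

```python
# Cache-aside sketch: dicts stand in for Redis (cache) and MySQL (db).
# A real app would use redis-py and a MySQL driver instead of these dicts.

db = {"1": "John Doe", "2": "Jane Doe"}   # simulated MySQL table
cache = {}                                # simulated Redis store

def get_user(uid):
    # 1. Try the cache first; a hit is tagged with "(c)",
    #    mirroring the suffix the tutorial's app returns.
    if uid in cache:
        return cache[uid] + " (c)"
    # 2. On a miss, read from the database and populate the cache.
    user = db.get(uid)
    if user is not None:
        cache[uid] = user
    return user

print(get_user("1"))  # first call: served from the database
print(get_user("1"))  # second call: served from the cache, tagged "(c)"
```

Reads that miss the cache pay the database round trip once; every subsequent read of the same key is served from memory, which is where the performance gain comes from.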
```shell
curl http://$HOST_IP:$HOST_PORT/users/1
```
Responses that have a suffix of (c) indicate that the values have been retrieved from the cache instead of the database.
If you are curious about the code, the Python web application is available on GitHub.
Janakiram MSV’s Webinar series, “Machine Intelligence and Modern Infrastructure (MI2)” offers informative and insightful sessions covering cutting-edge technologies. Sign up for the upcoming MI2 webinar at http://mi2.live.