
Tutorial: Apply the Sidecar Pattern to Deploy Redis in Kubernetes

How to leverage the sidecar pattern to package, deploy, and scale two containers as a unit.
Jan 31st, 2020 9:13am
Feature image by Johnson Martin from Pixabay.

In the last Kubernetes tutorial, we explored the concepts of node and pod affinity/anti-affinity to ensure that relevant pods are co-located or evenly distributed in the cluster. In this installment, I will demonstrate how to leverage the sidecar pattern to package, deploy, and scale two containers as a unit.

The way the Pod is designed in Kubernetes makes it an ideal choice for implementing the sidecar pattern by co-locating two containers. A Pod can contain one or more containers packaged as a unit of deployment.

In this tutorial, we will deploy a microservices application built using MySQL, Redis, and Python/Flask, as depicted in the illustration below.

MySQL Deployment will use the concept of node affinity to make use of the SSD disk attached to one of the nodes. For a detailed walkthrough of node and pod affinity, refer to the previous tutorial.

We will then create a multicontainer Pod that has the stateless web API and a Redis container that caches the frequently accessed rows.

Deploying the MySQL Pod

Let’s use a node affinity rule to target a node within the GKE cluster that has an SSD disk attached. Refer to the first step of this tutorial to launch the cluster and attach an SSD disk.
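The original manifest is not reproduced here, but a minimal sketch of the MySQL Deployment could look like the following, assuming the SSD node was labeled disktype=ssd in the previous tutorial (the label key, image tag, and password below are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      # Schedule the Pod only on the node carrying the assumed disktype=ssd label
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: disktype
                operator: In
                values:
                - ssd
      containers:
      - name: mysql
        image: mysql:5.7                 # placeholder image tag
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "password"              # placeholder credential
        ports:
        - containerPort: 3306

Apply it with kubectl apply -f mysql-deployment.yaml (the filename is an assumption).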



Defining a Multicontainer Pod for Redis and the Web App

Since we want to ensure that every web container is paired with a Redis container, we will define a multicontainer Deployment. This gives us the flexibility to scale both containers in tandem through the Pod.
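As a sketch, the multicontainer Deployment could be defined along these lines; the web image name and port are placeholders, and redis:alpine stands in for whichever Redis image the original spec used:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      # Stateless Python/Flask web API (placeholder image name)
      - name: web
        image: <registry>/flask-api:latest
        ports:
        - containerPort: 5000
      # Redis sidecar that caches frequently accessed rows
      - name: redis
        image: redis:alpine
        ports:
        - containerPort: 6379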


Notice that the Deployment Spec has two container images — Web App and Redis.

Let’s apply the spec to schedule three instances of the Pod.
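Assuming the spec above is saved as web-deployment.yaml (the filename is an assumption), apply it and list the Pods:

kubectl apply -f web-deployment.yaml
kubectl get pods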


Each web Pod has two containers, which is visible in the READY status of 2/2.

Run the kubectl describe command to inspect the Pod.
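For example, using the app=web label from the sketch above:

kubectl describe pod -l app=web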


The screenshot shows partial output from the kubectl describe command.

Verifying the Cache

Let’s test whether the cache is actually improving the application’s performance.

Get the IP address of the load balancer with the command below:
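Assuming the Deployment is exposed through a LoadBalancer Service named web (the Service name is an assumption), the external IP can be captured like this:

export LB_IP=$(kubectl get svc web -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo $LB_IP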


Initialize the database and insert test data into MySQL.
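The exact routes are defined in the application’s GitHub repo; as a hypothetical example, the calls could look like this:

curl http://$LB_IP/init                      # create the schema (hypothetical route)
curl http://$LB_IP/insert -d 'name=Alice'    # insert a test row (hypothetical route)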



When we access the data for the first time, it is retrieved from the database. The Redis container then caches it, so subsequent requests are served from the cache.
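Again with hypothetical routes, fetching the same record twice shows the cache at work:

curl http://$LB_IP/get/1    # first request: served from MySQL
curl http://$LB_IP/get/1    # repeat request: served from the Redis sidecar, marked with a "(c)" suffix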


Responses that have a suffix of (c) indicate that the values have been retrieved from the cache instead of the database.

If you are curious about the code, the Python web application is available on GitHub.

Janakiram MSV’s Webinar series, “Machine Intelligence and Modern Infrastructure (MI2)” offers informative and insightful sessions covering cutting-edge technologies. Sign up for the upcoming MI2 webinar at http://mi2.live.
