Tutorial: Blue/Green Deployments with Kubernetes and Istio

19 Oct 2018

Istio is a service mesh designed to make communication among microservices reliable, transparent, and secure. Istio intercepts the external and internal traffic targeting the services deployed in container platforms such as Kubernetes.

Though Istio is capable of many things, including securing service-to-service communication, automatically collecting metrics, and enforcing policies for access control, rate limits, and quotas, this tutorial focuses exclusively on its traffic management features.

Istio lets DevOps teams create rules to intelligently route traffic to internal services. It is extremely simple to configure service-level properties like circuit breakers, timeouts, and retries, and to set up a variety of deployment patterns, including blue/green deployments and canary rollouts.

The objective of this tutorial is to help you understand how to configure blue/green deployments of microservices running in Kubernetes with Istio. The only prerequisite is a basic familiarity with deploying pods and services in Kubernetes; we will configure everything else, from Minikube to Istio to the sample application.

There are four steps to this tutorial: installing Minikube, installing and verifying Istio, deploying two versions of the same app, and finally configuring the services for blue/green deployments. We will use two simple, pre-built container images that represent the blue (V1) and green (V2) releases.

Step 1: Install Minikube

To minimize the dependencies, we will use Minikube as the testbed for our setup. Since we need a custom configuration of Minikube, start by deleting the existing setup and restarting the cluster with additional parameters.
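
If you already have a Minikube cluster on your machine, delete it first:

$ minikube delete

Then start a new cluster with the additional parameters: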

$ minikube start --memory=8192 --cpus=4 --kubernetes-version=v1.10.0 \
--extra-config=controller-manager.cluster-signing-cert-file="/var/lib/localkube/certs/ca.crt" \
--extra-config=controller-manager.cluster-signing-key-file="/var/lib/localkube/certs/ca.key" \
--vm-driver=virtualbox

We need at least 8GB of RAM and a 4-core CPU to run Istio on Minikube. Wait for the cluster to start.
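
You can confirm that the cluster is ready before moving on; the single Minikube node should report a Ready status:

$ kubectl get nodes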

Step 2: Install Istio

With Kubernetes up and running, it’s time for us to install Istio. Follow the steps below to download and configure it.

$ curl -L https://git.io/getLatestIstio | sh -

You will find a folder, istio-1.0.2, in the same directory where you ran the above command. Add the location istio-1.0.2/bin to the PATH variable to make it easy to access Istio binaries.
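
For example, assuming you downloaded Istio into your current working directory, you can add the binaries to your PATH like this:

$ export PATH=$PWD/istio-1.0.2/bin:$PATH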

Since we are running Istio with Minikube, we need to make one change before going ahead with the next step – changing the Ingress Gateway service from type LoadBalancer to NodePort.

Open the file istio-1.0.2/install/kubernetes/istio-demo.yaml, search for LoadBalancer and replace it with NodePort.
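
If you prefer to make this change from the command line, a one-liner like the following works (this assumes GNU sed; on macOS, use sed -i '' instead):

$ sed -i 's/LoadBalancer/NodePort/' istio-1.0.2/install/kubernetes/istio-demo.yaml
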
Istio comes with many Custom Resource Definitions (CRDs) for Kubernetes. They let us manipulate virtual services, rules, gateways, and other Istio-specific objects from kubectl. Let’s install the CRDs before deploying the actual service mesh.

$ kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml
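
To verify that the CRDs were registered, you can count the Istio resource definitions (the exact number varies with the Istio release):

$ kubectl get crds | grep 'istio.io' | wc -l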

Finally, let’s install Istio within Kubernetes.

$ kubectl apply -f install/kubernetes/istio-demo.yaml

The above step results in the creation of a new namespace – istio-system – under which multiple objects get deployed.

We will notice multiple services created within the istio-system namespace.
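
You can list them with kubectl:

$ kubectl get svc -n istio-system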

After a few minutes, you will see multiple pods deployed by Istio. Verify this by running kubectl get pods -n=istio-system.

All the pods must be in the Running or Completed state, which indicates that Istio has been successfully installed and configured.

Now, we are ready to deploy and configure services for the blue/green pattern.

Step 3: Deploying two versions of the same application

To represent two different versions of the applications, I have built simple Nginx-based Docker images – janakiramm/myapp:v1 and janakiramm/myapp:v2. When deployed, they show a static page with a blue or green background. We will use these images for the tutorial.

apiVersion: v1
kind: Service
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  type: ClusterIP
  ports:
  - port: 80
    name: http
  selector:
    app: myapp
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myapp-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: myapp
        version: v1
    spec:
      containers:
      - name: myapp
        image: janakiramm/myapp:v1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myapp-v2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: myapp
        version: v2
    spec:
      containers:
      - name: myapp
        image: janakiramm/myapp:v2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80

You can also grab this YAML as a gist from GitHub.

The file defines the deployments for V1 and V2 along with a ClusterIP service that exposes them. Notice the labels used to identify the pods: app and version. While the app name remains the same, the version differs between the two deployments.

Istio expects exactly this arrangement: it treats both deployments as a single app but differentiates between them based on the version label.

The same applies to the ClusterIP service definition. Because of the label app: myapp, the service selects the pods from both deployments, irrespective of their version.

Create the deployments and the service with kubectl. Note that these are plain Kubernetes objects with no knowledge of Istio; the only connection to Istio is the way we defined the labels on the deployments and the service.

$ kubectl apply -f myapp.yaml
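
You can verify that both versions are running and carry the expected labels:

$ kubectl get pods -l app=myapp --show-labels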

Before configuring Istio routing, let’s check out the versions of our app. We can port-forward the deployments to access the pods.

To access V1 of the app, run the command below and visit localhost:8080 in your browser. Press CTRL+C when you are done.

$ kubectl port-forward deployment/myapp-v1 8080:80

For V2, run the command below and visit localhost:8081. Press CTRL+C when you are done.

$ kubectl port-forward deployment/myapp-v2 8081:80
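
If you prefer the terminal to a browser, you can also curl the forwarded endpoints while the respective port-forward command is running:

$ curl -s localhost:8080
$ curl -s localhost:8081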

Step 4: Configuring Blue/Green Deployments

Our goal is to drive the traffic selectively to one of the deployments with no downtime. To achieve this, we need to tell Istio to route the traffic based on the weights.

There are three objects involved in making this happen:

Gateway
An Istio Gateway describes a load balancer operating at the edge of the mesh that receives incoming or outgoing HTTP/TCP connections. The specification describes a set of ports that should be exposed, the type of protocol to use, SNI configuration for the load balancer, and so on. In the definition below, we point the gateway at the default Ingress Gateway created by Istio during installation.

Let’s create the gateway as a Kubernetes object.

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: app-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"

Destination Rule
An Istio DestinationRule defines policies that apply to traffic intended for a service after routing has occurred. Notice how the subsets are declared based on the version labels defined in the original Kubernetes deployments.

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: myapp
spec:
  host: myapp
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2

Virtual Service
A VirtualService defines a set of traffic routing rules to apply when a host is addressed. Each routing rule defines matching criteria for traffic of a specific protocol. If the traffic is matched, then it is sent to a named destination service based on a version.

In the definition below, we declare a weight of 50 for both v1 and v2, which means the traffic will be evenly distributed between them.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
  - "*"
  gateways:
  - app-gateway
  http:
    - route:
      - destination:
          host: myapp
          subset: v1
        weight: 50
      - destination:
          host: myapp
          subset: v2
        weight: 50        

You can define all of the above in one YAML file and apply it with kubectl. This YAML file is also available as a GitHub Gist.

$ kubectl apply -f app-gateway.yaml 
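
You can confirm that the three Istio objects were created; because they are backed by the CRDs installed earlier, kubectl can list them directly:

$ kubectl get gateway,destinationrule,virtualservice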

Now, let’s go ahead and access the service. Since we are using Minikube with NodePort, we need to get the exact port on which the Ingress Gateway is running.

Run the commands below to get the ingress host (the Minikube IP) and the ingress port.

$ export INGRESS_HOST=$(minikube ip)

$ export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
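
You can then print the full URL and open it in a browser:

$ echo http://$INGRESS_HOST:$INGRESS_PORT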

If you access this URL from a browser, you will see the traffic getting routed evenly between the blue and green pages.

We can also watch this from a terminal. Run the command below in a terminal window to see the responses alternate between the two versions.

while : ; do export GREP_COLOR='1;33'; curl -s $INGRESS_HOST:$INGRESS_PORT \
 | grep --color=always "V1"; export GREP_COLOR='1;36'; \
 curl -s $INGRESS_HOST:$INGRESS_PORT \
 | grep --color=always "vNext"; sleep 1; done

While the above command is running in a loop, let’s go back to the app-gateway.yaml file and adjust the weights. Set the weight of V1 to 0 and V2 to 100, as shown in the snippet below.
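
With that change, the http section of the VirtualService in app-gateway.yaml would look like this:

  http:
    - route:
      - destination:
          host: myapp
          subset: v1
        weight: 0
      - destination:
          host: myapp
          subset: v2
        weight: 100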

Submit the new definition to Istio.

$ istioctl replace -f app-gateway.yaml
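
If you do not have istioctl on your PATH, applying the modified file with kubectl has the same effect, since these Istio objects are regular Kubernetes custom resources:

$ kubectl apply -f app-gateway.yaml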

Immediately after updating the weights, V2 starts receiving 100 percent of the traffic, which is visible in the output of the first terminal window.

You can continue to adjust the weights and watch the traffic getting rerouted dynamically without incurring any downtime.

Traffic management is only one of the features of Istio. In the upcoming articles, we will explore other capabilities of Istio.
