Perform Canary Deployments with AWS App Mesh on Amazon EKS

Feb 1st, 2019 9:35am

In this tutorial, I will walk you through all the steps required to perform canary deployments on Amazon EKS with AWS App Mesh. In the previous article in this series, I introduced AWS App Mesh, a managed service mesh to control and monitor microservices deployed in Amazon Web Services.

With a canary deployment, a software update is rolled out to a small subset of users. In this way, new features and other updates can be tested before they go live for the entire user base. For this tutorial on setting up canary deployments, we are going to work with an e-commerce application that has three services: order, product, and customer. The order service is exposed as a REST endpoint to the outside world, while the product and customer microservices are consumed by order.

The goal is to deploy the next version of the product microservice as a canary, with only a subset of the traffic sent to it. After testing the new service in production with limited traffic, we will gradually increase the number of requests sent to the latest version and terminate instances of the previous version.

All three microservices and their new versions will be deployed in Amazon EKS. We will onboard these services into App Mesh to apply policies that influence the traffic flow.

The prerequisites are:

  • A basic understanding of Docker and Kubernetes
  • An active AWS subscription
  • The latest version of the AWS CLI
  • The eksctl and kubectl binaries

Start the tutorial by cloning the GitHub repository.

Building and Pushing Docker Images

For this tutorial, we use an extremely simple REST API built with Python and Flask. These services are packaged as Docker container images. Invoking a REST API endpoint simply returns metadata that includes the service name and its version.

Feel free to explore the code available in the src directory of the services folder.

If you want to use images stored in your own Docker Hub account, go ahead and push the images after building them. If you'd rather try the tutorial without building and pushing images, you can safely skip this step.

Configuring AWS App Mesh

Before we deploy the microservices-based app in Kubernetes, we need to have the App Mesh configuration in place. This will set up the control plane to deal with the network topology and routing rules.

Create the mesh and the entire topology with the following command. Make sure you are in the Mesh folder.


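The repository provides a script for this step; I don't know its exact contents, but a minimal sketch of the AWS CLI calls it would wrap looks like the following (the mesh name and JSON file paths are placeholders, not necessarily the repository's actual names):

    # Sketch only: mesh name and JSON paths are illustrative.
    aws appmesh create-mesh --mesh-name ecommerce-mesh
    for svc in order product customer; do
        aws appmesh create-virtual-node   --mesh-name ecommerce-mesh --cli-input-json file://$svc/virtual-node.json
        aws appmesh create-virtual-router --mesh-name ecommerce-mesh --cli-input-json file://$svc/virtual-router.json
        aws appmesh create-route          --mesh-name ecommerce-mesh --cli-input-json file://$svc/route.json
    done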
This step creates the virtual nodes for V1 of the order, product, and customer services. The product and customer services are declared as the backends for the order virtual node. A virtual router, pointing to the DNS name of the service, is configured for each virtual node, and each virtual router is associated with a route that drives traffic to the same virtual node.

You can look at the JSON documents in each service directory with the definition of virtual node, virtual router, and routes.
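For reference, the route definition for a service that sends all traffic to its V1 virtual node has a shape along these lines (the route and virtual node names here are illustrative):

    {
      "routeName": "product-route",
      "spec": {
        "httpRoute": {
          "match": { "prefix": "/" },
          "action": {
            "weightedTargets": [
              { "virtualNode": "product-v1", "weight": 100 }
            ]
          }
        }
      }
    }

The weightedTargets array is what we will adjust later to shift traffic between versions.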

At this point, we have the baseline mesh configuration for V1 of our application, which will be deployed in EKS.

Explore the topology through the AWS CLI.
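For example, using the placeholder names from the sketch above:

    aws appmesh describe-mesh        --mesh-name ecommerce-mesh
    aws appmesh list-virtual-nodes   --mesh-name ecommerce-mesh
    aws appmesh list-virtual-routers --mesh-name ecommerce-mesh
    aws appmesh list-routes --mesh-name ecommerce-mesh --virtual-router-name product-router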

Launch an Amazon EKS Cluster

For this tutorial, a single-node EKS cluster backed by a t2.medium instance is sufficient. Let's launch it in the us-west-2 region by running the script that invokes the eksctl utility.


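I won't reproduce the script verbatim here, but the eksctl call it wraps looks roughly like this (the cluster name is a placeholder):

    eksctl create cluster \
        --name appmesh-demo \
        --region us-west-2 \
        --node-type t2.medium \
        --nodes 1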
Wait until the single-node cluster is provisioned and ready for use.

Deploy and Test V1 of the Application

With the App Mesh and EKS cluster in place, we are all set to deploy and test our app.

If you want to use custom images pushed to your own Docker Hub account, update the Kubernetes artifacts with the image name by running the script shown below. Skip this step if you want to use the default container images from my repository.


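As a rough sketch of what such a script does, assuming GNU sed and placeholder names for the image repository and manifests:

    # Point every deployment manifest at your own Docker Hub account.
    for f in *.yaml; do
        sed -i 's|<original-account>|<your-dockerhub-account>|g' "$f"
    done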
Go to the Kubernetes folder and execute the following command:


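Assuming all the manifests live in that folder, this is a single kubectl apply:

    kubectl apply -f .

    # Verify that the pods come up.
    kubectl get pods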
This results in the creation of three deployments and three services. The order service is exposed via an elastic load balancer (ELB).



After a few minutes, you can send a GET request to the order service via ELB.
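Assuming the Kubernetes service is named order, you can look up the ELB hostname and call the endpoint like this (replace the hostname with whatever your cluster reports):

    # Fetch the ELB hostname assigned to the order service.
    kubectl get svc order -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'

    # Send a GET request to the order endpoint through the ELB.
    curl http://<order-elb-hostname>/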

Notice that all the services of the application report version 1.0. Now, it's time for us to perform a canary deployment of the product v2 microservice.

Canary Deployment of Product V2

After making sure that you are in the Mesh folder, execute the following command to create a virtual node for product v2.


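A sketch of that command, reusing the placeholder mesh name and assuming the V2 virtual node definition lives under V2/product:

    aws appmesh create-virtual-node --mesh-name ecommerce-mesh \
        --cli-input-json file://V2/product/virtual-node.json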
Now, deploy the Kubernetes pod for V2, which is already mapped to the product v2 virtual node in App Mesh.


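Assuming the V2 manifest sits alongside the others (the file name here is a guess), this is a plain kubectl apply:

    kubectl apply -f product-v2.yaml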
This creates a deployment and a new service endpoint for product v2.

Finally, let’s roll out the App Mesh policy to route 25 percent of the overall traffic to product v2.


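Behind the scenes this is an update-route call against the product virtual router; here is a sketch using the placeholder names from earlier and the canary JSON referenced below:

    # The JSON file carries the route name, virtual router name, and the 75/25 weightedTargets.
    aws appmesh update-route --mesh-name ecommerce-mesh \
        --cli-input-json file://V2/product/product_canary.json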
When you repeatedly access the order service through the ELB, you will see that some responses report version 2.0 for the product service.

Run the following command to monitor the output of curl through the watch utility. The parameters passed to watch hide the title and set the refresh interval to one second.


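For example, replacing the hostname with your ELB address:

    # -t hides the title bar, -n 1 refreshes every second.
    watch -t -n 1 curl -s http://<order-elb-hostname>/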
Now that we are confident in V2, it's time for us to route additional traffic to it. Let's drive half of the traffic to V2.

Open the file product_canary.json, located in the Mesh/V2/product folder. Update the weights to 50 for both virtual nodes.
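After the edit, the weightedTargets section of product_canary.json looks roughly like this (the virtual node names are assumptions):

    "weightedTargets": [
        { "virtualNode": "product-v1", "weight": 50 },
        { "virtualNode": "product-v2", "weight": 50 }
    ]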

Apply the updated policy to App Mesh by executing the deploy_canary_v2.sh script.

When you hit the ELB this time, you will notice that roughly every other request is served by the product v2 microservice.

Continue to experiment with the traffic split and watch how the responses change. The best thing about this approach is that the application experiences zero downtime while new versions are deployed and traffic is routed to them.

Finally, clean up the resources by deleting the EKS cluster and App Mesh resources with the following commands:


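A sketch of the teardown using the placeholder names from earlier; note that App Mesh requires deleting routes before virtual routers, and routers and nodes before the mesh itself:

    # Remove the Kubernetes workloads, then the cluster.
    kubectl delete -f .
    eksctl delete cluster --name appmesh-demo --region us-west-2

    # Tear down the App Mesh topology (repeat the first three commands for each service).
    aws appmesh delete-route          --mesh-name ecommerce-mesh --virtual-router-name product-router --route-name product-route
    aws appmesh delete-virtual-router --mesh-name ecommerce-mesh --virtual-router-name product-router
    aws appmesh delete-virtual-node   --mesh-name ecommerce-mesh --virtual-node-name product-v1
    aws appmesh delete-mesh           --mesh-name ecommerce-mesh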
I hope you found the tutorial and walkthrough useful.

As AWS App Mesh moves toward general availability, we can expect additional features and integrations to become available.

Feature image via Pixabay.
