
AWS App Mesh: Amazon’s Own Service Mesh for Microservices

At the re:Invent conference in December 2018, Amazon launched AWS App Mesh, a managed service mesh platform for ECS, EKS, and Fargate that makes it easy to manage and monitor microservices.
Jan 25th, 2019 3:00am
Feature image by Nils Nedel on Unsplash.

Service mesh technology has become a key component of the microservices architecture. Open source projects such as Envoy, Istio, Linkerd, and Consul, which power service mesh platforms, have gained prominence in the recent past. Having solved the container orchestration problem with Kubernetes, the cloud native ecosystem is now focusing its efforts on the efficiency and resiliency of microservices, which the service mesh delivers.

After container management, the service mesh has become the core building block of microservices infrastructure. Realizing its importance, cloud providers are now offering managed service mesh platforms along with their CaaS offerings. Google was the first to offer Istio as a service to Google Cloud Platform customers. At the re:Invent conference in December 2018, Amazon launched AWS App Mesh, a managed service mesh platform for ECS, EKS, and Fargate.

What Is AWS App Mesh?

AWS App Mesh makes it easy to manage and monitor microservices. Once services are deployed on AWS compute services such as EC2, ECS, EKS, or Fargate, App Mesh lets you take control of the communication and network traffic targeting those microservices. Beyond traffic management, App Mesh delivers observability through logging, tracing, and monitoring of microservices.

App Mesh has two core components: the control plane and the data plane. The data plane is deployed within the application, while the control plane is managed by Amazon and hidden from users.

To integrate microservices with App Mesh, DevOps teams need to include additional containers in the deployment artifact. The service relies on the Envoy proxy and an App Mesh-specific routing agent, packaged and deployed alongside each microservice through the sidecar pattern. These additional containers intercept traffic and control it based on the policies applied by the App Mesh control plane.

The App Mesh control plane, like any other AWS managed service, is exposed through the CLI, SDK, and web-based console. Traffic and routing policies sent to the control plane govern the ingress and egress of the participating microservices.


For example, to integrate a Kubernetes application running in EKS with App Mesh, you have to follow three steps:

1) Include the Envoy and App Mesh router containers in the deployment definition.
2) Map each deployment to an App Mesh virtual node, and define network routing policies for each node.
3) Submit the map and policies to the App Mesh control plane to change the communication flow.

Since the sidecar containers watch the service closely, they can ingest detailed telemetry into the control plane. This telemetry data can then be used to analyze not just the traffic flow but even the health of each service. This information becomes a goldmine to assess the overall performance of an application.

The AWS App Mesh service is currently in public preview, with availability limited to a few regions. The remaining features and integrations are expected to arrive at general availability.

Key Concepts of AWS App Mesh

Since App Mesh can be used with a variety of AWS compute services, it follows a taxonomy of its own, distinct from Kubernetes or ECS terminology. Let's take a closer look at the building blocks.

The first step in defining an App Mesh is to create an instance of the mesh, which acts as the logical boundary for all microservices belonging to an application.
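A mesh definition is little more than a name. As a minimal sketch, the JSON below could be passed to the aws appmesh create-mesh CLI command via --cli-input-json; shopmesh is the mesh name used throughout this article's examples.

```json
{
  "meshName": "shopmesh"
}
```

With the mesh in place, the next step is identifying the virtual nodes.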

A virtual node is a pointer to a unit of deployment with a well-known endpoint. In Kubernetes, a virtual node maps to the combination of the deployment and service objects. Even when the deployment is scaled, the service that routes the traffic remains the same. So, a deployment with multiple replicas of pods and an associated ClusterIP-based service map seamlessly to the virtual node.

Since a pod or deployment without an associated service is inaccessible in Kubernetes, it’s easy to think of the virtual node as a logical representation of the deployment and its associated service.
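To make that mapping concrete, here is a minimal ClusterIP service of the kind a virtual node maps to, written as a JSON manifest (Kubernetes accepts JSON as well as the more common YAML); the order name, app label, and port are assumptions for this example:

```json
{
  "apiVersion": "v1",
  "kind": "Service",
  "metadata": {
    "name": "order",
    "namespace": "default"
  },
  "spec": {
    "selector": {
      "app": "order"
    },
    "ports": [
      {
        "port": 8080,
        "targetPort": 8080
      }
    ]
  }
}
```

Inside the cluster, this service resolves as order.default.svc.cluster.local, which is the well-known endpoint a virtual node can point to.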

The virtual node definition contains the declaration of backends — a list of service endpoints it may call — along with the hostname used for service discovery, and allowed ports for ingress. It essentially contains the DNS name, ingress, and egress definitions, which will allow the control plane to filter the traffic appropriately.
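The snippet below sketches such a virtual node definition. It follows the shape of the App Mesh preview API, so treat the field names as indicative rather than final:

```json
{
  "meshName": "shopmesh",
  "virtualNodeName": "order-vn",
  "spec": {
    "serviceDiscovery": {
      "dns": {
        "serviceName": "order.default.svc.cluster.local"
      }
    },
    "listeners": [
      {
        "portMapping": {
          "port": 8080,
          "protocol": "http"
        }
      }
    ],
    "backends": [
      "product.default.svc.cluster.local",
      "customer.default.svc.cluster.local"
    ]
  }
}
```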

The above JSON defines a virtual node with the name order-vn in the mesh called shopmesh. It calls two services — product and customer — that are listed as backends. The virtual node is exposed via the endpoint order.default.svc.cluster.local, which is a ClusterIP service in Kubernetes. The backends are also Kubernetes ClusterIP services exposing deployments product and customer.

The Kubernetes deployment definition for the order service is altered to include the sidecar containers required by App Mesh. It also references the corresponding mesh (shopmesh) and virtual node (order-vn).
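A trimmed sketch of that deployment is shown below as a JSON manifest. The Envoy image URI and the APPMESH_VIRTUAL_NODE_NAME environment variable follow the preview documentation, but treat both as assumptions and check the App Mesh documentation for current values; the application image is a placeholder, and the preview setup also calls for an init container that rewrites iptables rules, omitted here for brevity:

```json
{
  "apiVersion": "apps/v1",
  "kind": "Deployment",
  "metadata": {
    "name": "order",
    "namespace": "default"
  },
  "spec": {
    "replicas": 2,
    "selector": {
      "matchLabels": {
        "app": "order"
      }
    },
    "template": {
      "metadata": {
        "labels": {
          "app": "order"
        }
      },
      "spec": {
        "containers": [
          {
            "name": "order",
            "image": "example/order:v1",
            "ports": [
              {
                "containerPort": 8080
              }
            ]
          },
          {
            "name": "envoy",
            "image": "111345817488.dkr.ecr.us-west-2.amazonaws.com/aws-appmesh-envoy:v1.8.0.2-beta",
            "env": [
              {
                "name": "APPMESH_VIRTUAL_NODE_NAME",
                "value": "mesh/shopmesh/virtualNode/order-vn"
              }
            ]
          }
        ]
      }
    }
  }
}
```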

A virtual node may have an associated virtual router to control its inbound traffic. Each service endpoint available within the mesh needs to be wrapped inside a virtual router. In most cases, that endpoint is the same as the DNS name registered for the virtual node; in some scenarios, it may be an internal load balancer routing traffic to private subnets within a VPC.
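A sketch of such a virtual router definition follows, again using the preview API shape; the order-vr name is illustrative, and the serviceNames field lists the endpoints the router fronts:

```json
{
  "meshName": "shopmesh",
  "virtualRouterName": "order-vr",
  "spec": {
    "serviceNames": [
      "order.default.svc.cluster.local"
    ]
  }
}
```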

The virtual router defined above is bound to the order service endpoint, the same DNS name registered for the order-vn virtual node.

Each virtual router has one or more associated routes, which connect specific virtual nodes to the virtual router. A route may also define conditions that are used to match requests arriving at the virtual router and distribute traffic accordingly to its associated virtual nodes. Think of the route as a rules engine applied at the virtual router level: it can selectively send traffic to a set of virtual nodes based on predefined conditions and rules.

When we want to route 75 percent of the traffic to V1 and only 25 percent to V2, we define the rule shown below. It's important to note that product-v2-vn is a virtual node pointing to the service endpoint of V2.
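A sketch of that route follows, using the weightedTargets action of the App Mesh route API; the product-vr router and product-route names are illustrative:

```json
{
  "meshName": "shopmesh",
  "virtualRouterName": "product-vr",
  "routeName": "product-route",
  "spec": {
    "httpRoute": {
      "match": {
        "prefix": "/"
      },
      "action": {
        "weightedTargets": [
          {
            "virtualNode": "product-v1-vn",
            "weight": 75
          },
          {
            "virtualNode": "product-v2-vn",
            "weight": 25
          }
        ]
      }
    }
  }
}
```

Requests matching the / prefix are split across the two virtual nodes in proportion to their weights.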

The entities of App Mesh — virtual nodes, virtual routers, and routes — are declared in JSON and submitted to the control plane via the CLI, for example with aws appmesh create-route --cli-input-json file://route.json.

Use Cases of AWS App Mesh

There are three key scenarios addressed by App Mesh:

  • Blue/green deployments — App Mesh makes it possible to switch between multiple versions of the same microservice without downtime. By changing the route's rule, traffic can be diverted from the blue deployment to the green one almost instantly (see the sketch after this list).
  • Canary releases — When a new version is deployed, traffic can be selectively routed to two or more virtual nodes. Based on the observability metrics reported by the control plane, the amount of traffic sent to the new version can be gradually increased.
  • Observability — Microservices participating in the same mesh can be monitored closely for latencies, error rates, traces, and debug information. This telemetry data can be streamed to CloudWatch and third-party monitoring services such as Datadog, giving DevOps teams detailed insight into the current state of a deployment.
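To make the blue/green switch concrete, the same route spec can be resubmitted (for example, with aws appmesh update-route) so the green virtual node receives all the traffic; the node and route names are again illustrative:

```json
{
  "meshName": "shopmesh",
  "virtualRouterName": "product-vr",
  "routeName": "product-route",
  "spec": {
    "httpRoute": {
      "match": {
        "prefix": "/"
      },
      "action": {
        "weightedTargets": [
          {
            "virtualNode": "product-green-vn",
            "weight": 100
          }
        ]
      }
    }
  }
}
```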

In an upcoming article, I will walk you through a step-by-step process for canary deployments of microservices running in Kubernetes through AWS App Mesh. Stay tuned.
