
How to Handle Authorization in a Service Mesh

There are three approaches to sharing tokens between services in the service mesh. They can be combined to achieve security that suits business needs.
Jul 8th, 2022 12:29pm
Feature image via Pixabay

Microservices architecture results in an ecosystem where small pieces of an application collaborate to fulfill a business case.

Judith Kahrer
Judith is a product marketing engineer with a keen interest in security and identity. She started her working life as a developer and moved on to being a security engineer and consultant before joining the Curity team.

The big benefit of using microservices is that developers can change, update and deploy the parts of an application independently. Teams can work in parallel and, ideally, do not have to wait for other groups to release. This flexibility increases productivity and decreases the time to market for new features.

However, even though services are highly independent in a microservices architecture, they must still communicate to work together, and that communication becomes a challenge as the number of parties grows. Routing requests between multiple services and versions while implementing security requirements like authorization, authentication or encryption gets complex.

The Infrastructure Layer

A service mesh addresses the challenges of service communication in a large-scale application. It adds an infrastructure layer that handles service discovery, load balancing and secure communication for the microservices. Commonly, a service mesh complements each microservice with an extra component — a proxy often referred to as a sidecar or data plane.

The proxy intercepts all traffic to and from its accompanying service. It typically uses mutual TLS (mTLS), an encrypted connection with client authentication, to communicate with other proxies in the service mesh. This way, all traffic between the services is encrypted and authenticated without updating the application. Only services that are part of the service mesh can participate in the communication, which is a security improvement. In addition, the service mesh management features allow you to configure the proxy and enforce policies such as allowing or denying particular connections, further improving security.
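For illustration, in a mesh such as Istio (one popular implementation, used here only as an example), a short policy can enforce mTLS for every workload in a namespace, so that the sidecars reject any plain-text connection:

```yaml
# Require mTLS for all workloads in the "payments" namespace (Istio example).
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: require-mtls
  namespace: payments
spec:
  mtls:
    mode: STRICT
```

Other meshes expose equivalent controls under different resource names; the point is that encryption and peer authentication are configured in the mesh, not in application code.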

The Application Layer

To implement a Zero Trust architecture, you must consider several layers of security. The application should not blindly trust a request even when receiving it over the encrypted wire. It must validate that requests are legitimate and ensure that data access is secure at the application level. A well-established protocol to achieve authentication and authorization on the application level is OAuth 2.0.

A component like an ingress controller, an API gateway or a reverse proxy is ideal for enforcing authentication and performing basic authorization by validating the OAuth access token before the request reaches the service network. However, it is best practice to not only authenticate and authorize each request at the perimeter but also inside the network and between the services. As the authorization decision is part of the business logic, token validation is also part of the application.

When validating tokens, it’s recommended to follow best practices such as:

  • Only accepting tokens that are valid in terms of the issuer and expiration.
  • Ensuring tokens are used as intended concerning the audience, scope and claims.
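A minimal sketch of those checks, in Python with illustrative claim values (in practice the token's signature must be verified first, for example against the issuer's published JWKS, before any claims are trusted):

```python
import time

def validate_claims(claims, expected_issuer, expected_audience, required_scope):
    """Minimal claims check: issuer, expiry, audience and scope.

    A sketch only -- signature verification is assumed to have
    happened before these checks run.
    """
    # Only accept tokens from the expected issuer.
    if claims.get("iss") != expected_issuer:
        return False
    # Reject expired tokens ("exp" is seconds since the epoch).
    if claims.get("exp", 0) <= time.time():
        return False
    # "aud" may be a single string or a list of audiences.
    aud = claims.get("aud")
    audiences = aud if isinstance(aud, list) else [aud]
    if expected_audience not in audiences:
        return False
    # Scopes are conventionally a space-separated string.
    scopes = claims.get("scope", "").split()
    return required_scope in scopes

claims = {
    "iss": "https://idp.example.com",
    "aud": "orders-service",
    "exp": time.time() + 300,
    "scope": "orders:read orders:write",
}
print(validate_claims(claims, "https://idp.example.com",
                      "orders-service", "orders:read"))  # True
```

The issuer URL, audience and scope names above are made up for the example; real values come from your identity provider's configuration.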

Tokens in and Between Services

The introduction of authorization on the application level in a service mesh comes with a challenge. As outlined above, services collaborate to fulfill a business case. But when all services in the service mesh require authorization, service requests must contain authorization data. In other words, services need to be able to share tokens with other services.

Forwarding Tokens

The most intuitive approach is to simply forward the incoming token to downstream services. This works fine as long as all the services require the same privileges and the token stays within one security domain. While reusing a token is a simple solution, it becomes problematic when a token's privileges, that is, its scopes and claims, grow to fulfill all the requirements of subsequent requests. A token with many privileges eventually becomes a target for exploits and breaks the principle of least privilege.
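As a sketch (with a hypothetical helper name), forwarding amounts to copying the caller's bearer token onto the outbound request, so the downstream service sees exactly the same scopes and claims:

```python
def forwarded_headers(incoming_headers):
    """Reuse the incoming bearer token for an outbound request.

    Illustrative helper: the downstream call carries the exact
    token the service itself received.
    """
    token = incoming_headers.get("Authorization", "")
    if not token.startswith("Bearer "):
        raise ValueError("no bearer token to forward")
    return {"Authorization": token}

print(forwarded_headers({"Authorization": "Bearer eyJ..."}))
```

The simplicity is the appeal, and also the problem: nothing in this path narrows the token for the next hop.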

Privacy is another concern when forwarding tokens, particularly across security boundaries such as to external services. Tokens often carry user or business data. When a token is reused by several services, each service has requirements that affect the data in the token. For example, one service may require the user's bank account number and another the user's home address, which may be irrelevant to the first service. Sending irrelevant data, some of which may be sensitive under data privacy laws, introduces an unnecessary security risk.

Tailored Tokens

Follow the principle of least privilege and ensure that tokens contain just enough data for a service to fulfill its task. If the chain of subsequent service requests required to complete the business case is known beforehand, then embedded tokens are an appropriate way to leverage tailored tokens. In this approach, the parent token carries an embedded token for each subsequent request to other services. The API gateway or service can extract the embedded token before sending a request to the next service. As a result, each service receives a tailored token with exactly the scopes and claims it needs to fulfill its part in the business case.
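The extraction step can be sketched like this. The claim layout (child tokens keyed by audience under an `embedded_tokens` claim) is illustrative, not a standard; a real token service defines its own structure:

```python
def extract_embedded_token(parent_claims, audience):
    """Pick the embedded token intended for the next downstream service.

    Assumes an illustrative claim layout where the parent token
    carries child tokens keyed by their intended audience.
    """
    embedded = parent_claims.get("embedded_tokens", {})
    try:
        return embedded[audience]
    except KeyError:
        raise KeyError(f"no embedded token for audience {audience!r}")

parent = {
    "aud": "api-gateway",
    "embedded_tokens": {
        "orders-service": "eyJ...orders",
        "shipping-service": "eyJ...shipping",
    },
}
print(extract_embedded_token(parent, "orders-service"))
```

The gateway or calling service runs this lookup before each downstream hop, sending only the child token onward.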

The embedded token approach demands a token service and infrastructure with particular capabilities. First of all, the token service must be able to embed tokens. It must know the dependency tree of a request to determine which tokens to embed for the incoming token request, a decision it will most likely base on the client ID and scope. When the dependency tree is complicated, and when there are several levels of embedded tokens, evaluating which tokens to include becomes cumbersome. And since the parent token contains embedded tokens, it is bigger than a single token.

Consequently, requests grow in size, which requires a network infrastructure that can handle large HTTP headers. The infrastructure must not only be able to transport potentially large messages but also cache the tokens for further processing. This can be especially challenging during high peaks in load.

At times, the final service chain may not be known in advance. This is common in a world of loosely coupled services, where the system frequently changes as part of agile paradigms or where processes are conditional. Consequently, embedding the required tokens for subsequent service requests may not be possible. In this case, the embedded token approach will not work, and a more dynamic approach is needed.

Tokens on Demand

The most flexible approach presented in this article is token exchange. It can be used to exchange an existing token for a new one. The new token can be narrowed in scope and claims to a subset of the original token, or it can be exchanged for a completely new one to fit a different security domain. The token exchange protocol for OAuth 2.0 is standardized in RFC 8693.

Whenever a service needs to call a downstream service for further processing and the original token does not fulfill the requirements of that request, the calling service can exchange its token for a new, tailored token. It simply sends a request to the token service asking for a new token. The token service checks its ruleset and, if the request is valid, issues and returns a new token that the calling service uses in its request to the downstream service.
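Concretely, RFC 8693 defines the exchange as a form-encoded POST to the token endpoint with a dedicated grant type. A sketch of building that request body (the endpoint URL and client authentication are omitted, and the token and audience values are placeholders):

```python
from urllib.parse import urlencode

def token_exchange_body(subject_token, audience, scope):
    """Build the form body for an RFC 8693 token exchange request.

    The service POSTs this body to the authorization server's
    token endpoint along with its client credentials.
    """
    return urlencode({
        # Grant type defined by RFC 8693.
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        # The token the service received and now wants to trade in.
        "subject_token": subject_token,
        "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
        # Who the new token is for, and with which privileges.
        "audience": audience,
        "scope": scope,
    })

body = token_exchange_body("eyJ...", "shipping-service", "shipping:write")
print(body)
```

The response, if the ruleset permits the exchange, contains the new access token in the standard OAuth token response format.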

There are two crucial aspects of the token exchange process. First, it helps implement the principle of least privilege by issuing tailored tokens with narrow scopes and claims designed for a particular service or set of services. Second, it enables flexibility by issuing tokens on demand, when they are actually needed. The approach fits very well with the microservices paradigm, where services are loosely coupled and combined in various ways. There are essentially no limitations on the new token. With token exchange, you can safely cross security boundaries and implement various use cases, including impersonation.

However, since the protocol requires additional requests to get the new token, the approach implies challenges in systems where latency is crucial. Also, the token service will still have to maintain the ruleset for the conditions on when to issue which new token. This ruleset is based on business rules and must consider security concerns to avoid privilege escalation.


Security is a multidisciplinary field and needs to be implemented at various layers. A service mesh improves the security of an application built on microservices by adding an infrastructure layer where connected services can communicate securely. OAuth 2.0 adds security at the application layer and allows the secure implementation of business rules across the different microservices. There are three approaches to sharing tokens between services in the service mesh: forwarding, embedding and exchanging on demand. The approaches are not mutually exclusive but can be combined to achieve a secure implementation that suits the business and its prerequisites.
