
Best Practices for API Security in Kubernetes

A look at the role of the ingress controller in Kubernetes as part of implementing best practices for securing APIs.
Aug 2nd, 2022 7:00am by Judith Kahrer
Feature image via Nappy.

When deploying containerized applications with Kubernetes, it’s essential to consider the infrastructure that the application depends on.

Judith Kahrer
Judith is a product marketing engineer with a keen interest in security and identity. She started her working life as a developer and moved on to being a security engineer and consultant before joining the Curity team.

When securing an application or API in Kubernetes, make sure to consider the 4Cs of cloud native security:

  • Cloud
  • Cluster
  • Container
  • Code

Following the best practices for each of these points ensures that the application runs in a secure environment. Such an environment has controls and components to monitor and avoid unexpected behavior on various levels, including the Kubernetes ecosystem and its management.

This article focuses on application security. In particular, it discusses the role of the ingress controller in Kubernetes as part of implementing best practices for securing APIs.

Provide a Single Point of Entry

By default, the ports of a pod are not exposed outside the Kubernetes cluster. Pods can be grouped via services, which can be configured to expose their ports on the internet. However, the service resource is a simple component. Even when combined with a load balancer, the setup lacks flexibility and adaptability. The ingress controller, on the other hand, provides many features needed for enterprise use cases, such as name-based virtual hosting for services, path mapping, proxying and response caching, or, most importantly, security features like authentication and TLS termination.

The ingress controller implements the rules for the single point of entry to the cluster, the ingress. As part of that implementation, the ingress controller commonly offers a range of security features. The NGINX Ingress Controller, for example, supports SSL redirection, HTTP Strict Transport Security (HSTS) and filtering of HTTP headers. Depending on the choice of ingress controller, it may even provide the capabilities of an API gateway, like rate limiting, aggregation, monitoring or a developer portal. Some ingress controllers, such as the Kong Ingress Controller or the Tyk Operator, bring full API gateway capabilities to Kubernetes by integrating closely with an API gateway. The three major cloud providers also offer cloud native Kubernetes ingress controllers: the AWS Load Balancer Controller, the Application Gateway Ingress Controller on Azure and the GKE Ingress Controller on Google Cloud.

When exposing an API in Kubernetes, use an ingress controller as a gatekeeper to protect all services in the cluster. Choose the features according to your requirements and the following best practices.

Restrict Access

As the single entry point, the ingress controller is the perfect place to enforce security policies such as authentication and authorization. This is a common requirement, and ingress controllers often provide support for authentication via open standards such as OAuth 2.0 and OpenID Connect. Choosing an open standard over proprietary solutions is good practice, as it enables interoperability and portability. When using OAuth 2.0 or OpenID Connect, the ingress controller can validate the tokens at the perimeter. It can also perform coarse-grained access control based on data in the token, such as the issuer, scope and audience claims, and thus offload the API.
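As an illustration, coarse-grained perimeter checks of this kind could look like the following minimal Python sketch. It assumes the token's signature has already been verified by the ingress controller, and the issuer, audience and scope values are hypothetical, not taken from any particular deployment.

```python
# Minimal sketch of coarse-grained checks a perimeter filter might apply
# to the claims of an already signature-verified access token.
# The expected values below are hypothetical examples.
import time

EXPECTED_ISSUER = "https://idp.example.com"  # hypothetical identity provider
EXPECTED_AUDIENCE = "orders-api"             # hypothetical API identifier
REQUIRED_SCOPE = "orders:read"               # hypothetical scope

def authorize_at_perimeter(claims: dict) -> bool:
    """Reject obviously invalid tokens before they reach the API."""
    if claims.get("iss") != EXPECTED_ISSUER:
        return False
    # "aud" may be a single string or a list of audiences
    aud = claims.get("aud")
    audiences = aud if isinstance(aud, list) else [aud]
    if EXPECTED_AUDIENCE not in audiences:
        return False
    # Reject expired tokens
    if claims.get("exp", 0) <= time.time():
        return False
    # "scope" is a space-delimited string of scope values
    if REQUIRED_SCOPE not in claims.get("scope", "").split():
        return False
    return True
```

Fine-grained decisions (which record, which field, which operation) remain with the API itself; the perimeter only filters out requests that could never be legitimate.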

In a microservice architecture, one service may call another to fulfill a request. In fact, there may be a full chain of calls before a response is returned. However, not all services in the chain require the same permissions. Therefore, make sure each service receives a token with sufficient, but not excessive, rights. Consider token-sharing mechanisms to avoid overloading a single token with all the permissions (scopes and claims) that may or may not be required in downstream service calls.

For example, have the OAuth 2.0 or OpenID Connect server issue tokens that contain another embedded token that can be used in downstream service calls. This means, however, that the token server must know beforehand which tokens to generate and embed. To be able to do so, it must know in advance which other services will be called. In a complex setup, this requirement can be hard to maintain. An alternative and more flexible approach is token exchange, where an existing token can be exchanged for a new one. The protocol is similar to the Phantom Token approach, but instead of changing the format, it changes part of the content.
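To make the token exchange step concrete, the following sketch builds the request body defined by the OAuth 2.0 Token Exchange specification (RFC 8693). The token value and scope are placeholders; a real gateway would POST this form body to the token server's token endpoint and receive a narrower token in response.

```python
# Sketch of an OAuth 2.0 Token Exchange (RFC 8693) request body that a
# gateway might send to swap a broad token for a narrower downstream one.
from urllib.parse import urlencode

def build_token_exchange_request(subject_token: str, target_scope: str) -> str:
    """Return the x-www-form-urlencoded body defined by RFC 8693."""
    params = {
        # Grant type and token type URNs are fixed by RFC 8693
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        "subject_token": subject_token,
        "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
        # Request only the permissions the downstream call actually needs
        "scope": target_scope,
    }
    return urlencode(params)
```

Because the exchange happens on demand, the token server does not need to know the full call chain in advance, which is what makes this approach more flexible than embedding tokens.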

Whatever the approach, forwarding tailored tokens is part of implementing the principle of least privilege. The principle of least privilege reduces the overall attack surface, as it becomes harder for an attacker to abuse a compromised token to reach services and data it was never intended to access.

In a mature setup, where an ingress controller or API gateway is used to orchestrate different microservices, it will also be responsible for performing the token exchange.

Trust No One

It is best practice not to stop at the perimeter but to implement a zero trust architecture. Make sure that all service requests within your cluster are authenticated. A service mesh provides such an approach at the infrastructure level. In addition, OAuth 2.0 and OpenID Connect provide the tools for application-level security. Design the access tokens to be used for fine-grained authorization in the API.

JSON Web Tokens (JWTs) are a popular format for access tokens. They are useful because they are self-contained: Services can validate them locally, which fits a zero trust architecture. However, such tokens can contain sensitive data that a malicious actor can easily parse if they are unencrypted. Therefore, JWTs should only be used inside the cluster, while clients outside receive opaque reference tokens that the ingress controller exchanges for the corresponding JWT. This is called the Phantom Token approach.
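The flow can be sketched as follows. In this minimal Python illustration, the token server's introspection endpoint is stubbed with an in-memory dictionary, and all token values are made up; a real ingress controller would call the OAuth server over HTTPS instead.

```python
# Minimal sketch of the Phantom Token pattern: the client holds an opaque
# reference token; the gateway resolves it to a JWT and forwards that JWT
# to services inside the cluster.
from typing import Optional

# Stub standing in for the token server's introspection endpoint;
# the opaque token and JWT values are made up for illustration.
_INTROSPECTION_DB = {
    "opaque-abc123": {"active": True, "jwt": "eyJhbGciOi...signed-jwt"},
}

def exchange_for_phantom_token(opaque_token: str) -> Optional[str]:
    """Return the internal JWT for an opaque token, or None if invalid."""
    record = _INTROSPECTION_DB.get(opaque_token)
    if record and record.get("active"):
        return record["jwt"]
    return None

def forward_request(headers: dict) -> Optional[dict]:
    """Rewrite the Authorization header before proxying into the cluster."""
    scheme, _, token = headers.get("Authorization", "").partition(" ")
    if scheme != "Bearer":
        return None
    jwt = exchange_for_phantom_token(token)
    if jwt is None:
        return None  # unknown or revoked token: reject at the perimeter
    return {**headers, "Authorization": f"Bearer {jwt}"}
```

The client never sees the JWT, so no sensitive claims leak outside the cluster, while services inside can still validate the token locally.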

Choose an extensible ingress controller to allow for customization and implementation of rules such as those implied by the Phantom Token approach or token exchange. It should support scripting capabilities or plugins. The latter is easier because you can simply add and configure a plugin without writing code. In particular, you can rely on the work of security experts for security-related plugins. For example, use provided plugins for NGINX or Kong to add support for the Phantom Token approach. But even scripts and configurations can be shared and used to distribute security best practices of a zero trust architecture.

Best Practices at a Glance

In short, when securing an API in Kubernetes, consider the following:

  • Use an ingress controller (or an API gateway) to protect all services of an API in Kubernetes.
  • Perform coarse-grained authorization at the perimeter and leave the fine-grained decisions to the API.
  • Design tokens carefully and exchange them if required to fulfill the principle of least privilege.
  • Rely on standard protocols and use extensibility features to implement a zero trust architecture.