Containers / Kubernetes / Sponsored / Contributed

Ingress Controllers: The Swiss Army Knife of Kubernetes

4 Jan 2022 6:11am, by

Brian Ehlert
Brian is senior technical product manager at F5 Inc., where he manages NGINX Ingress Controller. Prior to working at F5, Brian's roles included systems engineering, auditing, networking, systems hardening, disaster recovery, and playing a key part in resolving complex, deep and multifaceted systems issues.

An ingress controller probably seems like just another technology widget in the Kubernetes realm. Many people view them as low-value commodities, but in reality they can be a powerful tool in your stack. When deployed and configured properly, ingress controllers can radically simplify operation of Kubernetes clusters while enhancing security and improving performance and resilience.

An ingress controller does this by quietly assuming many of the capabilities that other tools or solutions provide. Because ingress controllers are specifically designed for Kubernetes, they can more easily assume these capabilities — unlike trying to adapt existing technology structures, such as load balancers, API gateways, and application delivery controllers (ADCs), to the weird and wonderful world of Kubernetes. The very versatility of ingress controllers is part of what makes them so Swiss Army knife-ish.

Why You Need An Ingress Controller

Ingress controllers are essential for defining and managing ingress (north-south) traffic in Kubernetes, an environment whose ingress requirements are more complex than those of non-Kubernetes apps.

By default, apps running in Kubernetes pods (and containers) are not accessible to external networks and traffic; pods can only communicate among themselves. Kubernetes does have a built-in configuration object for HTTP (Layer 7) load balancing, called ingress. This object defines how entities outside a Kubernetes cluster can connect to the pods behind one or more Kubernetes services. When you need to provide external access to your Kubernetes services, you create an ingress resource to define the connectivity rules, including the URI path, backend service name and other information. On its own, however, the ingress resource doesn’t do anything. You must deploy and configure an ingress controller application (using the Kubernetes API) to implement the rules defined in ingress resources.
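For context, a minimal ingress resource looks something like this (all names and hosts here are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cafe-ingress          # illustrative name
spec:
  ingressClassName: nginx     # selects which ingress controller implements the rules
  rules:
  - host: cafe.example.com
    http:
      paths:
      - path: /tea            # URI path to match
        pathType: Prefix
        backend:
          service:
            name: tea-svc     # backend Kubernetes service
            port:
              number: 80
```

Without a controller watching for this resource, the object is inert; the `ingressClassName` field tells Kubernetes which controller should act on it.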

In other words, you really need to deploy an ingress controller to leverage the existing resource and object structure of Kubernetes. Not doing so means working a lot harder to create more detailed rules using a combination of Service objects and external appliances. The no-ingress controller approach does not scale, is expensive, and requires a lot of engineering time.

How Ingress Controllers Work with (or Replace) Load Balancers

Ingress controllers can work standalone to balance and shape traffic or work with your load balancer to unlock the power of Kubernetes and deliver better app performance.

Reminder: The “LoadBalancer” service is not the same as a dedicated load balancer.

Ingress controllers are sometimes described as a “specialized load balancer” for Kubernetes. That raises the question: Do you need both a load balancer and an ingress controller? Well, the answer is: It depends. As was discussed in the previous post “Duplication, Not Consolidation: The Path Forward for Apps,” sometimes you need some functionality duplication based on who is using the tool and where it’s being deployed.

For many use cases, especially when scaling Kubernetes or operating in high-compliance environments, organizations deploy both an ingress controller and a load balancer, though they’re deployed in different places, for different purposes, and managed by different teams.

  • Load balancer (or ADC):
    • Managed by: A NetOps (or maybe SecOps) team
    • Deployed: Outside Kubernetes as the only public-facing endpoint of services and apps delivered to users outside your cluster. Used as a more generic appliance designed to facilitate security and deliver higher-level network management.
  • Ingress controller:
    • Managed by: A platform ops or DevOps team
    • Deployed: Inside Kubernetes for fine-grained north-south load balancing capabilities (HTTP/2, HTTP/HTTPS, SSL/TLS termination, TCP/UDP, WebSocket, gRPC), with certain aspects of configuration, such as URIs or paths, delegated to application teams, plus advanced reverse proxy or API gateway functions.
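In this split deployment, the external load balancer typically forwards traffic to the ingress controller itself, which is exposed through a Kubernetes `Service` of type `LoadBalancer`. A sketch, with illustrative names and a namespace that varies by installation:

```yaml
# Exposes the ingress controller pods to the outside world. In cloud
# environments, this provisions (or attaches to) an external load balancer
# that forwards traffic to the controller.
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
spec:
  type: LoadBalancer
  selector:
    app: nginx-ingress        # matches the ingress controller pods
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
```

As the reminder above notes, this `LoadBalancer` Service type is just the hook to an external load balancer; the fine-grained routing still happens inside the ingress controller.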

In this diagram, we have the load balancer handling distribution of the traffic across multiple clusters, while the clusters have ingress controllers to ensure equal distribution to the services.


Ingress Controller = Security Tool

Ingress controllers can provide a granular, integrated layer of app security that works for “shift-left” security stances and integrates better with the lower-level security tools used by app teams rather than by NetOps or global security teams.

Ingress controllers can become a key tool in your security arsenal and help you shift security to the left, better matching the needs of and risks presented by microservices and modern applications. Some of the key security benefits of ingress controllers include:

  • Preventing direct access to pods via poorly configured load balancers
    Ingress controllers act as a second layer of access control just in case a global load balancer configuration drifts to an insecure setting.
  • Enforcing mTLS
    Because ingress controllers are designed to function at the node and pod level, and are a control loop running on top of services, they’re the best location for enforcing encryption behaviors — the closest to the actual app.
  • Anomaly detection and enforcement
    Ingress controllers make it easier to put in place the logical rules to deal with anomalies that can be indications of bad behavior. At the global level, these anomalies can be hard to understand or metric. Although for smaller teams managing microservices, the best place to generate this logic is at the level of DevOps and the service developers themselves; they know what their traffic should look like and what rules to apply.
  • Tighter Integration with WAFs
    For the most part, anyone deploying a production app in Kubernetes needs to use a web application firewall (WAF) to protect the app and the cluster. WAFs can filter out bad traffic and protect exposed apps. That said, like anomaly detection, WAFs configured to protect at the global level of an enterprise environment are like blunt instruments that aren’t well-suited to implementing finer-grained security at the app layer. For this reason, many teams are now running their own WAFs inside Kubernetes at the ingress layer and managed separately from the global WAFs. These app-specific WAFs are much easier to manage, integrate and configure at the ingress controller level, where the team that understands the app can set both ingress/egress and security policies.

Ingress Controller = API Gateway

Ingress controllers incorporate most API gateway capabilities in a Kubernetes-native way that reduces complexity and costs while improving performance.

One of the best reasons to adopt ingress controllers is cost savings and simplicity. Because an ingress controller is a specialized proxy, it has the potential to satisfy many of the same use cases that more traditional proxies — load balancers/reverse proxies or ADCs — can achieve. This includes numerous load balancing and API gateway capabilities, such as:

  • TLS/SSL termination
  • Client authentication
  • Rate limiting, retries, and timeouts
  • Fine-grained access control
  • Per-request routing at Layers 4 and 7
  • Blue/green and canary deployments
  • Routing for legacy protocols (UDP, TCP)
  • Routing for newer protocols (gRPC)
  • Header and request/response manipulation
  • SNI routing
  • Routing based on advanced service/pod health rules
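To make one of these concrete, a weighted canary deployment can be expressed as a second ingress resource pointing at the new version's service. This sketch uses the community ingress-nginx annotation syntax; other controllers use different annotations or custom resources, and all names are illustrative:

```yaml
# Canary ingress: route roughly 10% of traffic for app.example.com
# to the new version's service, leaving the rest on the primary ingress.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-v2-svc   # illustrative name for the canary version
            port:
              number: 80
```

Raising the weight gradually, then deleting the canary ingress and repointing the primary, completes the rollout without touching an external appliance.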

Note: “API gateway” is often discussed as if it’s a unique product. In fact, it’s a set of use cases that can be accomplished by a proxy. Most often, a load balancer, ADC, or reverse-proxy is implemented as an API gateway. However, at NGINX we’re increasingly seeing ingress controllers and service meshes being used for API gateway functionality.

You won’t necessarily find feature parity between an ingress controller and a tool labeled as an API gateway, and that’s OK. In Kubernetes, you don’t actually need all those extra features, and trying to implement them can get you into trouble. The two most applicable API gateway use cases in Kubernetes are traffic management (protocols, shaping, splitting) and security (authentication, end-to-end encryption). With that in mind, you’ll need an ingress controller to be able to handle the following:

  • Method level routing/matching
  • Authentication/authorization offload
  • Authorization-based routing
  • Protocol compatibility (HTTP, HTTP/2, HTTP/3, WebSockets, gRPC)

Your developers will thank you, because an ingress controller lets them define API gateway or load balancer functions in a Kubernetes-native way (declarative/imperative YAML) that fits easily into their workflows. So will your legal and finance teams, with lower costs and fewer licenses to track. Lastly, customers and users get a better experience, because removing additional control elements from the traffic path invariably improves performance.

Read “How Do I Choose? API Gateway vs. Ingress Controller vs. Service Mesh” for more on this topic, including sample scenarios for north-south and east-west API traffic.

Ingress Controllers = Observability and Monitoring Powerhouse

Ingress controllers watch all traffic entering and exiting the cluster, which means they have the potential to provide a lightweight, integrated and easy-to-manage monitoring and observability layer.

Because it sits in front of your clusters and controls L4-L7 traffic and legacy or non-HTTP protocol traffic, your ingress controller has a privileged view of the health of your apps and infrastructure. This is powerful and useful. You can easily extend traffic monitoring from your existing data and control plane into observability tools like Prometheus. In fact, most ingress controllers are natively integrated with well-known CNCF monitoring and observability tools like the aforementioned Prometheus and its closely linked friend, Grafana. There are two use cases that you may be able to solve with an ingress controller:

  • Slow apps: If your app is slow — or down! — an ingress controller with live monitoring capabilities can help you pinpoint exactly where the problem lies. A low request rate could indicate a misconfiguration, while high latency in response times could indicate a problem with your upstream apps.
  • HTTP errors: If your cluster or platform is running out of resources, you can use historical data from your ingress controller to look for trends. This is where a tool like Grafana can be especially helpful for visualizing the data.
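Wiring an ingress controller's metrics into Prometheus is often just a matter of pod annotations, assuming your Prometheus is configured for annotation-based discovery (a common convention, not a Prometheus default). The port below is illustrative and depends on which metrics exporter the controller ships:

```yaml
# Pod template metadata on the ingress controller's deployment.
metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "9113"   # metrics port exposed by the controller
```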

“How to Improve Visibility in Kubernetes” covers these use cases in more depth, including a demo of using NGINX tools with Prometheus and Grafana to troubleshoot Kubernetes problems.

With some service meshes, load balancers and other Kubernetes-flavored networking tools, adding monitoring and observability can introduce load and latency, and they can't parse traffic at the same level of granularity as an ingress controller. Because ingress controllers don’t require an additional CRD or object in your configuration files and Kubernetes stack, you can avoid that unnecessary complexity and latency. After all, the more CRDs you deploy, the more complicated your Kubernetes life becomes.

Conclusion: Ingress Controllers Do A Lot More Than Control Ingress

Hopefully, by now you understand a bit more about why ingress controllers are the unsung heroes of Kubernetes networking, to the point that not using one is a real mistake. Some caveats are in order:

  • Not all ingress controllers are able to serve the various use cases discussed in this article. The NGINX blog series “A Guide to Choosing an Ingress Controller” can help you identify requirements, avoid risks, future-proof and navigate the complicated ingress controller landscape.
  • If your ingress rules are poorly designed and your pods under-resourced, an ingress controller can slow down apps. But if you design your rules well, the nominal cost of putting an ingress controller at the edge of your cluster is far outweighed by the performance improvements it enables.

That said, ingress controllers continue to improve and add capabilities — and in fact, the release of the Gateway API is a great example of the community’s investment in ingress controllers.
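For a taste of where this is heading, the Gateway API expresses routing through resources like `HTTPRoute` instead of annotations. A sketch with illustrative names (the API was still evolving at the time of writing, so the version and fields may differ):

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: app-route
spec:
  parentRefs:
  - name: my-gateway          # a Gateway managed by your controller
  hostnames:
  - "app.example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /app
    backendRefs:
    - name: app-svc
      port: 80
```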

A bet on ingress controllers is a bet on the future of Kubernetes. Because building modern apps is all about building loosely coupled services and enabling more independence on the part of developers, the deployment of ingress controllers can accelerate app development and speed iterations. The Swiss Army knife of Kubernetes networking tools is just what the average developer or DevOps team needs to smartly, efficiently and securely move traffic to and from apps.

Featured image provided by NGINX.