HAProxy sponsored this post.
The Kubernetes Ingress API is closer to shedding its beta label than it has ever been, say engineers working on the project. That might sound strange, considering that many companies already use it to expose their Kubernetes services, despite its beta status. Then again, it's been a long beta, years in fact, having entered that phase in the fall of 2015. However, that has given Kubernetes contributors the time they needed to refine the specification and align it more closely with what implementers (HAProxy, NGINX, Traefik, et al.) had already built, formalizing the API to reflect the most common and requested features.
With GA around the corner, it feels like the right time to help newcomers get up to speed on how Ingress works. As a short definition, an Ingress is a rule that charts how a service, walled inside the cluster, can bridge the divide to the outside world where clients can use it. At the same time, a proxy, which is called an Ingress Controller, listens at the edge of the cluster’s network — watching for those rules to be added — and maps each service to a particular URL path or domain name for public consumption. While the Kubernetes maintainers develop the API, other open source projects implement the Ingress Controllers and add their own features unique to their proxy.
In this post, I’ll put the concepts into context and help you understand the driving forces behind the Ingress pattern.
The Routing Problem
When you create pods in Kubernetes, you assign selector labels to them, as shown in this snippet of a Deployment manifest:
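A minimal Deployment along those lines might look like the following sketch. The my-app image and app=foo label come from the description above; the resource name and container port are illustrative assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo-deployment        # illustrative name
spec:
  replicas: 3                 # three replicas, as described
  selector:
    matchLabels:
      app: foo
  template:
    metadata:
      labels:
        app: foo              # the selector label assigned to each pod
    spec:
      containers:
      - name: my-app
        image: my-app         # the Docker image from the example
        ports:
        - containerPort: 8080 # assumed application port
```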
This Deployment creates three replicas that run the Docker image my-app and assigns the app=foo label to them. Rather than accessing the pods directly, it’s typical to group them under a Service, which makes them available at a single cluster IP address, although only from within the same cluster. The Service acts as a layer of abstraction that hides the ephemeral nature of the pods, which can be scaled up and down or replaced at any time. It performs rudimentary, round-robin load balancing.
For example, the following Service definition collects all pods that have a selector label app=foo and routes traffic evenly among them.
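A sketch of such a Service might look like this; the port numbers are illustrative, while the foo-service name and app=foo selector follow the examples in this post:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: foo-service
spec:
  selector:
    app: foo          # collects all pods labeled app=foo
  ports:
  - protocol: TCP
    port: 80          # port exposed at the cluster IP
    targetPort: 8080  # assumed port the pods listen on
```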
However, this service is accessible from inside the cluster only, by other pods running nearby. Kubernetes operators grappled with how to give clients outside the cluster access. The problem was apparent early on and two mechanisms were integrated directly into the Service specification to deal with it. When writing the service manifest, you can include a field named type, which takes a value of either NodePort or LoadBalancer. Here’s an example that sets type to NodePort:
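Adding the type field to the same Service might look like this (port numbers again illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: foo-service
spec:
  type: NodePort      # expose the service on a random high port on every node
  selector:
    app: foo
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
```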
Services with a NodePort type are easy. They essentially announce that they’d like the Kubernetes API to assign to them a random TCP port and expose it outside the cluster. What makes this convenient is that a client can target any node in the cluster using that port and their messages will be relayed to the right place. It’s like saying you can call any phone in the United States and whoever picks up will make sure you get forwarded to the right person.
The downside is that the port’s value must fall between 30000 and 32767, a range safely out of the way of well-known ports, but also conspicuously non-standard compared to the familiar ports 80 for HTTP and 443 for HTTPS. The randomness itself is also a hurdle, since it means that you don’t know what the value will be beforehand, which makes configuring NAT, firewall rules, etc. just a bit more challenging — especially when a different, random port is set for every service.
The other option is to set type to LoadBalancer. However, this comes with some prerequisites. It only works if you are operating in a cloud-hosted environment like Google’s GKE or Amazon’s EKS and if you are okay with using that cloud vendor’s load balancer technology, since it is chosen and configured automatically. The most costly disadvantage is that a hosted load balancer is spun up for every service with this type, along with a new public IP address, which has additional costs.
Allocating a random port or an external load balancer is easy to set in motion, but each comes with its own challenges. Defining many NodePort services creates a tangle of random ports, while defining many LoadBalancer services means paying for more cloud resources than you'd like. The overhead can't be avoided completely, but perhaps it could be reduced and contained, so that you would only need one random port or one load balancer to expose many internal services. The platform needed a new layer of abstraction, one that could consolidate many services behind a single entrypoint.
It was then that the Kubernetes API introduced a new type of manifest, called an Ingress, which offered a fresh take on the routing problem. It works like this: you write an Ingress manifest that declares how you would like clients to be routed to a service. The manifest doesn’t actually do anything on its own; you must deploy an Ingress Controller into your cluster to watch for these declarations and act upon them.
Ingress controllers are pods, just like any other application, so they’re part of the cluster and can see other pods. They’re built using reverse proxies that have been active in the market for years. So, you have your choice of an HAProxy Ingress Controller, an NGINX Ingress Controller, and so on. The underlying proxy gives it Layer 7 routing and load balancing capabilities. Different proxies bring their own set of features to the table. For example, the HAProxy Ingress Controller doesn’t need to reload itself as often as the NGINX Ingress Controller, because it allocates slots for servers and fills them in at runtime using its Runtime API. That can lead to better performance.
Being inside the cluster themselves, Ingress Controllers are subject to the same walled-in network as other Kubernetes pods. You need to expose them to the outside via a Service with a type of either NodePort or LoadBalancer. However, now you have a single entrypoint that all traffic goes through: one Service connected to one Ingress Controller, which, in turn, is connected to many internal pods. The controller, because it can inspect HTTP requests, directs each client to the correct pod based on characteristics it finds, such as the URL path or the domain name.
Consider this example of an Ingress, which defines how the URL path /foo should connect to a backend service named foo-service, while the URL path /bar is directed to a service named bar-service.
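A sketch of that Ingress, written against the pre-GA networking.k8s.io/v1beta1 API this post describes (the manifest name and service ports are illustrative; the GA v1 API later replaced serviceName/servicePort with a nested service object and a pathType field):

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress      # illustrative name
spec:
  rules:
  - http:
      paths:
      - path: /foo
        backend:
          serviceName: foo-service
          servicePort: 80    # assumed service port
      - path: /bar
        backend:
          serviceName: bar-service
          servicePort: 80    # assumed service port
```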
You still need to set up services for your pods, as shown before, but you do not need to set a type field on them, because routing and load balancing will be handled by the Ingress layer. The role of the Service is reduced to its ability to group pods under a common name. Ultimately, the two paths, /foo and /bar, are served by a common IP address and domain name, such as example.com/foo and example.com/bar. This is essentially the API Gateway pattern. In an API Gateway, a single address routes requests to multiple backend applications.
Adding an Ingress Controller
The declarative approach of Ingress manifests lets you specify what you want without needing to know how it will be fulfilled. Fulfillment is the job of an Ingress Controller, which watches for new Ingress rules and configures its underlying proxy to enact the corresponding routes.
You can install the HAProxy Ingress Controller using Helm, the Kubernetes package manager. First, install Helm by downloading the Helm binary and copying it to a folder included in your PATH environment variable, such as /usr/local/bin/. Next, add the HAProxy Technologies Helm repository and deploy the Ingress Controller using the helm install command.
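The commands might look like the following, assuming Helm 3 syntax and the HAProxy Technologies chart repository; the release name haproxy-ingress is an arbitrary choice:

```shell
# Add the HAProxy Technologies Helm repository and refresh the index
helm repo add haproxytech https://haproxytech.github.io/helm-charts
helm repo update

# Deploy the HAProxy Ingress Controller
helm install haproxy-ingress haproxytech/kubernetes-ingress
```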
Verify that the Ingress Controller was created by using kubectl get service to list all running services:
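The output would look something like this. The NodePort values shown (31704 for HTTP, 32255 for HTTPS, 30347 for the stats page) match those discussed in this post; the service name, cluster IPs, and age column are illustrative:

```shell
$ kubectl get service
NAME                                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                                     AGE
haproxy-ingress-kubernetes-ingress   NodePort    10.101.232.155   <none>        80:31704/TCP,443:32255/TCP,1024:30347/TCP   21h
kubernetes                           ClusterIP   10.96.0.1        <none>        443/TCP                                     21h
```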
The HAProxy Ingress Controller runs inside a pod in your cluster and uses a Service resource of type NodePort to publish access to external clients. In the output shown above, you can see that port 31704 was chosen for HTTP and port 32255 for HTTPS. You can also view the HAProxy statistics page at port 30347. The HAProxy Ingress Controller offers verbose metrics about the traffic flowing through it, which you won’t find in many of the other controllers, so it’s a good one to use to get better observability over traffic entering your cluster.
While the controller creates a Service with its type set to NodePort, which means allocating a random, high-number port, you’re now down to managing only a few such ports — the ones connected to the Ingress Controller, rather than one for every service. You can also configure it to use a LoadBalancer type, as long as you’re operating in the cloud. It would look like this:
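Assuming the chart exposes the service type as a Helm value named controller.service.type (a chart-specific detail worth verifying against the chart's documentation), the install command might be:

```shell
# Deploy the controller with a cloud load balancer instead of NodePorts
helm install haproxy-ingress haproxytech/kubernetes-ingress \
  --set controller.service.type=LoadBalancer
```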
Overall, there isn’t much to managing an Ingress Controller. Once installed, it basically does its job in the background. You only need to define Ingress manifests and the controller will wire them up instantly. Ingress manifests are defined apart from the service they refer to, putting you in control of when to expose a service to the public.
Ingress resources have consolidated how services inside a Kubernetes cluster can be accessed by external clients, by allowing an API Gateway style of traffic routing. Proxied services are relayed through a common entrypoint and you control when and how to publish a service by using an intent-driven, YAML declaration.
With the approach of a GA release of the Ingress API, you’re sure to see this pattern become even more popular. There will probably be subtle changes, mostly to align the API with the features that are already implemented in the existing controllers. Other refinements will likely guide how the controllers continue to evolve to match the vision of the Kubernetes maintainers. All in all, it’s a great time to start using this feature!
The New Stack is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Docker.
Feature image from Pixabay.