
How HAProxy Streamlines Kubernetes Ingress Control

Surprisingly, work is still going on to finalize the Kubernetes Ingress API, which would provide a standard way for third-party load balancers and proxies to interface with Kubernetes. So, picking the correct ingress controller is crucial to ensuring smooth operations.
May 6th, 2020 11:13am
Feature image via Pixabay.

In 2016, when the digital media arm of French broadcaster Métropole Television (M6) streamed the European Football Championship (UEFA Euro) and the French team made it to the final, the infrastructure Ops team grew increasingly nervous as ever larger numbers of users streamed in to watch.

“I remember the fear that the huge event we were experiencing could bring our platform down,” recalled Vincent Gallissot, M6 Ops Lead, at HAProxyConf 2019. He and his nine-person crew kept watching the Grafana dashboard, searching for potential anomalies.

In the end, however, nothing bad happened. “We ended up drinking beers and eating pizzas,” he said. But Gallissot didn’t want to go through such a stressful experience again, so he started an initiative to move M6 to the cloud.

Like many organizations dealing with surges of traffic, M6 decided on Kubernetes as the platform for a multicloud architecture, to ease the process of scaling capacity up and down with demand. And one of the most crucial parts of the Kubernetes setup is routing the incoming traffic to the appropriate services.

Kubernetes itself offers an option to capture the information needed to manage load balancing, using the same type of Kubernetes configuration file used for managing other resources. Kubernetes’ Ingress capability, which acts as a Layer 7 load balancer, provides a way to map customer-facing URLs to the back-end services. The user defines the rules in a Kubernetes definition file called an Ingress resource, which is then fulfilled by a Kubernetes Ingress Controller.
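
For illustration, a minimal Ingress resource looks roughly like the sketch below; the hostname, service name, and port are placeholders, and the apiVersion shown was current when this article was written:

```yaml
# Illustrative Ingress resource -- names and ports are placeholders.
apiVersion: networking.k8s.io/v1beta1    # varies with the cluster version
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
    - host: www.example.com              # customer-facing URL
      http:
        paths:
          - path: /
            backend:
              serviceName: web-frontend  # existing Kubernetes Service
              servicePort: 80
```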

Surprisingly, work is still going on to finalize the Kubernetes Ingress API, which would provide a standard way for third-party load balancers and proxies to interface with Kubernetes. So, picking the correct ingress controller is crucial to ensuring smooth operations.

Simplicity

When it comes to managing distributed systems, simplicity is a key element, said Baptiste Assmann, HAProxy principal solutions architect, in an interview with The New Stack. Baptiste has been working with Batch, a mobile message and notification company, which has had to scale up its load balancing operations to support the French government during the COVID-19 crisis.

Like M6, Batch was already well-equipped to scale for surges in traffic: elections, sports and other popular events routinely lead to spikes. The need for the French government to keep its populace informed about COVID-19 presented the company with its most stressful challenge yet: an estimated 20-fold increase in traffic.

Anticipating such a surge in traffic, M6 packaged the service components in Kubernetes pods and managed load balancing through the HAProxy Kubernetes Ingress Controller, with Kubernetes scaling up the services to meet heightened demand.
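
The scaling itself can be delegated to Kubernetes. As a rough sketch (the resource names and thresholds here are placeholders, not M6’s actual configuration), a HorizontalPodAutoscaler lets the cluster add pod replicas as load climbs:

```yaml
# Illustrative HorizontalPodAutoscaler -- names and thresholds are placeholders.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: web-frontend
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-frontend                 # the Deployment running the service's pods
  minReplicas: 3
  maxReplicas: 50
  targetCPUUtilizationPercentage: 70   # scale out when average CPU exceeds 70%
```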

The challenge with running Kubernetes is that, because it dynamically moves workloads around, the IP addresses of the services themselves are constantly changing, both for internal “east-west” traffic and for the requests coming in from outside users. So some sort of proxy server is needed to keep track of the service pods. Plus, as traffic increases and Kubernetes responds by spawning additional copies of a service, the additional pods also need to be managed and sent traffic. This can easily be done by a proxy server, or a load balancer, one that also serves as an API Gateway.

The default Kubernetes Ingress Controller is based on the NGINX web server, though this setup has some limitations, Baptiste advised. “Our customers deploy the HAProxy Ingress Controller because it has better performance and richer functionality than the default NGINX Ingress Controller,” he said.

The HAProxy Ingress Controller offers rate limiting, IP whitelisting, the ability to add request and response headers, and connection queuing so that backend pods are not overloaded. HAProxy supports many load balancing algorithms — each suited for a particular type of load distribution — including round-robin, least connections, several hash-based algorithms, and random pick.

“Having the ability to offload these functions to the Ingress Controller means that all of your services can instantly make use of them,” Baptiste said. The HAProxy Ingress Controller is a Golang binary that runs alongside the HAProxy container inside each Kubernetes cluster. It watches the cluster’s objects and modifies the HAProxy configuration accordingly. Every application under HAProxy’s purview gets annotations, defined in YAML and stored in that application’s repository.
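
As a sketch of what those annotations can look like, the snippet below attaches HAProxy-specific annotations to a Service to pick a load balancing algorithm and enable health checks; the exact annotation names vary by controller version and should be checked against its documentation:

```yaml
# Illustrative only: verify annotation names against the installed
# HAProxy Ingress Controller version.
apiVersion: v1
kind: Service
metadata:
  name: web-frontend
  annotations:
    haproxy.org/load-balance: "leastconn"   # load balancing algorithm for this service
    haproxy.org/check: "true"               # health-check backend pods before sending traffic
spec:
  selector:
    app: web-frontend
  ports:
    - port: 80
      targetPort: 8080
```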

The Runtime API can add or remove service endpoints (pods) from the HAProxy configuration dynamically, providing the ability to scale Kubernetes services very quickly with zero downtime. HAProxy also offers a rich set of metrics about the traffic flowing into a cluster: statistics for tracking request rates, response times, active connections, success versus error responses, and the volume of data passing through.

Cloud Migration

Prior to migrating to the cloud, M6’s data center traffic was routed through a set of Varnish servers, with NGINX servers forwarding the HTTP, PHP-FPM and Node.js calls. About 30 microservices in all ran across a VMware ESX-managed cluster.

To move to the cloud, M6 deployed Terraform to control resources in either AWS or Google Cloud, as well as Kubernetes itself, which was managed by kops. Terraform could also manage the Fastly content delivery network. In the best infrastructure-as-code tradition, M6’s configuration is kept in a GitHub repository. Each managed service (such as a thumbnail image generator) has its own Terraform file. Jenkins controls the CD pipeline and Docker image testing. Helm provides the instructions to deploy the image inside a Kubernetes cluster. When a new project requires a database or other resources, the developer can simply submit a pull request to the repository.

Running an AWS Elastic Load Balancer for each application proved to be quite expensive. The company actually maxed out the number of ELBs that could be spun up per account. So the company went with the HAProxy Ingress Controller instead.

“We keep an eye on the Kubernetes cluster from the Ingress Controller,” Gallissot said. “We know everything that runs inside the cluster. We are able, with the Prometheus exporter, to watch metrics at the node level, at the service level, and at the container level. So, if something fails, we know where it fails.”
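
A Prometheus scrape job for the controller’s exporter can be as small as the sketch below; the target address, port and metrics path are assumptions and depend on how the controller was deployed:

```yaml
# Illustrative Prometheus scrape job for the HAProxy Ingress Controller's
# exporter. The service name, namespace, port and path are assumptions --
# check the controller's Deployment and Service for the real values.
scrape_configs:
  - job_name: haproxy-ingress
    metrics_path: /metrics
    static_configs:
      - targets: ['haproxy-kubernetes-ingress.haproxy-controller.svc:1024']
```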
