HAProxy Kubernetes Ingress Controller Moves Outside the Cluster

The HAProxy Kubernetes Ingress Controller has been around since the release of HAProxy 2.0 in 2019, with nearly 40 minor updates in the year and a half since. Now the project has its first major update with the release of HAProxy Kubernetes Ingress Controller 1.5.
Alongside new features around authentication, configuration, and the ability to run the controller outside of a Kubernetes cluster, the release marks the start of a new release cadence for the software, said HAProxy director of product Daniel Corbett.
“When we first released the Ingress Controller, we were iterating very quickly. It wasn’t always ideal to introduce new features into a minor release — you may not expect that this change happened, or you would not expect during the upgrade process to encounter new features — and so we decided to switch to how the core software works, where minor releases only get bug fixes, and then what we would consider a major release will get the new features,” said Corbett. “It ensures stability of the overall project so that you’re not caught off guard by a new feature when you’re not ready to introduce those kinds of changes in your environment.”
Moving forward, the project will provide major releases quarterly, on the same general schedule as HAProxy itself.
Currently, the most commonly deployed ingress controller for Kubernetes is based on NGINX, while the HAProxy Kubernetes Ingress Controller, as the name indicates, is based on HAProxy, an open source load balancer and proxy that focuses on speed and high availability.
The HAProxy Ingress Controller offers richer functionality, and one feature introduced with v1.5 taps into it by letting users add annotations to Kubernetes Ingress, Service or ConfigMap resources and apply them with kubectl instead of editing HAProxy’s configuration file by hand. This allows HAProxy Ingress Controller users to more easily access HAProxy features that are not made available by default, explained Corbett.
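As a rough sketch of what that looks like, the manifest below attaches two of the controller’s haproxy.org-prefixed annotations to an ordinary Ingress. The host name, service name and the specific annotation keys (load-balance, ssl-redirect) are illustrative assumptions, not details taken from the release itself:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    # illustrative haproxy.org annotations; exact keys may vary by controller version
    haproxy.org/load-balance: "leastconn"
    haproxy.org/ssl-redirect: "true"
spec:
  rules:
    - host: web.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80

Applying the manifest with kubectl apply -f web-ingress.yaml is all the user does; the controller watches the Kubernetes API for the change and regenerates the underlying HAProxy configuration itself.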
“It’s extremely flexible. It gives you the building blocks to do anything and everything that you want, no matter how small or complex. It’s very difficult to support that in a manageable or reasonable way with an Ingress controller, so we pick and choose annotations and configuration options to expose that we think are most useful,” said Corbett. “Someone may have a very unique environment, and they want to be able to supply something custom that’s not exposed through the Ingress controller, and so now we expose the functionality to provide an HAProxy configuration snippet.”
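A configuration snippet rides along the same way. In this hedged example, a Service carries a backend-config-snippet annotation whose body is passed through as raw HAProxy directives; the annotation key and the directives chosen are assumptions for illustration rather than an excerpt from the release notes:

apiVersion: v1
kind: Service
metadata:
  name: web
  annotations:
    # raw HAProxy directives injected into the generated backend section
    # (annotation key is an assumption; check the controller docs for your version)
    haproxy.org/backend-config-snippet: |
      http-send-name-header x-dst-server
      option httpchk GET /healthz
spec:
  selector:
    app: web
  ports:
    - name: http
      port: 80
      targetPort: 8080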
Another big feature introduced with v1.5 is the ability to run the HAProxy Ingress Controller outside of your Kubernetes cluster. While this deployment method means the ingress controller won’t scale directly with Kubernetes and scaling must instead be managed externally, it reduces overhead for users operating in latency-sensitive environments because it removes the need for an additional external load balancer or proxy layer. Corbett explained that one benefit of the HAProxy Ingress Controller is that it offers high availability by being able to reconfigure itself without causing downtime.
“They don’t want to introduce an extra proxy layer. They want their load balancer to monitor Kubernetes for changes and make configuration changes in the load balancer based on that. Users will be able to run the Ingress controller outside of Kubernetes, it will be able to monitor through the Kubernetes API for changes, and then reconfigure its local load balancer based on those changes,” said Corbett. “It allows customers to achieve zero downtime through the HAProxy Runtime API. When changes are made within the Kubernetes environment, the Ingress controller is able to reconfigure HAProxy for the most part on the fly for many things such as back end application servers or pods, scaling up or scaling down.”
Finally, v1.5 also adds basic authentication, which provides a simple gate between internal services and external access attempts, as well as support for mutual TLS authentication (mTLS) between the ingress controller and the backend servers it routes traffic to.
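To make the new authentication options concrete, here is a hedged sketch of how they might be expressed as annotations on an Ingress: a basic-auth gate backed by a Secret of username and password entries, plus re-encryption and a client certificate toward the backend for mTLS. The annotation keys (auth-type, auth-secret, server-ssl, server-crt) and the secret names are assumptions for illustration; the controller’s documentation has the exact names:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: internal-api
  annotations:
    # basic authentication in front of the routed services (keys are assumptions)
    haproxy.org/auth-type: basic-auth
    haproxy.org/auth-secret: default/api-credentials
    # mTLS toward the backend: re-encrypt and present a client certificate (keys are assumptions)
    haproxy.org/server-ssl: "true"
    haproxy.org/server-crt: default/backend-client-cert
spec:
  rules:
    - host: api.example.internal
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api
                port:
                  number: 443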