The Gateway API, formerly known as the Services API and before that Ingress V2, was first discussed in detail (and in person) at KubeCon 2019 in San Diego. By then, the limitations of Ingress and the other Kubernetes networking APIs were already well known and well documented. The Gateway API was intended as a redo of these APIs, built on lessons from Services, Ingress and the service mesh community.
With a group of Ingress and Service controller implementors assembled, we came up with the properties that we wanted to have in our version 2 of the Kubernetes networking APIs:
- Extensibility: We abused our annotations, we admit it. Complex routing rule structs were never meant to be placed in annotations, but what other choice did we have? The Gateway API is designed with flexible conformance: it mandates 100% support for core features, requires that any extended features an implementation chooses to support behave portably, and, most importantly, adds many more extension points for unique custom features. This makes portability explicit and doesn’t constrain vendor-specific capabilities.
- API Composability: While it may all boil down to a single proxy configuration, numerous users, on both the app and the infra side, must define different portions of the service networking surface area for their roles. Monolithic Ingress resources simply don’t provide the role-oriented design needed for shared infrastructure. A composable API (more API resources that work together vs a single monolithic resource) also allows a mixing and matching of resources that promotes continued and gradual evolution.
- Expressiveness: The simplicity of Ingress (host/path routing and TLS) made portability easy, but it was also a lowest common denominator that limited Ingress. The Gateway API uplevels the core routing capabilities with traffic splitting, traffic mirroring, HTTP header manipulation and much more. These core and extended capabilities make a much larger set of features truly portable across implementations.
- Portability: This is the one thing we did not want to change. The ubiquity of Service LoadBalancer and Ingress implementations is what allowed an ecosystem of networking projects and products to exist, and that simply made users’ lives easier. Above all, the Gateway API aims to keep industry-standard networking semantics portable between implementations.
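To make the expressiveness point concrete, here is a sketch of what core traffic splitting looks like in an HTTPRoute. The service names, weights and route name are illustrative, and the `apiVersion` may differ depending on the release of the API you are running:

```yaml
# Hypothetical HTTPRoute splitting traffic between two versions of a
# service: 90% to the stable backend, 10% to a canary. Traffic
# splitting is a core capability, so it is portable across
# conforming implementations.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: store-route
spec:
  parentRefs:
  - name: shared-gateway   # the Gateway this route binds to
  rules:
  - backendRefs:
    - name: store-v1
      port: 8080
      weight: 90           # 90% of requests
    - name: store-v2
      port: 8080
      weight: 10           # 10% of requests (canary)
```

With Ingress, this kind of weighted routing typically required vendor-specific annotations; here it is part of the core API surface.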
Over a year later, several Gateway controller implementations are underway and available for users to try. This breadth of implementations demonstrates the demand from both vendors and users for a service networking refresh.
Hands-on with the Gateway API
To understand how the Gateway API achieves these goals, let’s introduce two of its resources:
- Gateways represent a load balancer or any generic data plane that listens for traffic that it routes. You can have many gateways, or just a single gateway that might be shared between apps.
- Routes are the routing configuration applied to these gateways. These resources are protocol-specific, so there are HTTPRoutes, TCPRoutes, UDPRoutes, and so on. One or more routes can bind to a gateway; and together they define the routing configuration of the underlying data plane represented by the gateway resource.
Together, a gateway and its routes are roughly equivalent to a single Ingress resource. Splitting them into two resources lets the infrastructure team own the gateway (and attach policy and configuration to it) while app owners own their routes. This requires less direct coordination between the groups and gives developers more autonomy.
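As a sketch of that split, the pair of manifests below shows an infrastructure-owned Gateway and an app-owned HTTPRoute that binds to it. All names, namespaces and the gateway class are placeholders, and the `apiVersion` may vary by release:

```yaml
# Gateway owned by the infrastructure team (in its own namespace).
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: shared-gateway
  namespace: infra
spec:
  gatewayClassName: example-lb   # selects the controller/data plane
  listeners:
  - name: http
    protocol: HTTP
    port: 80
---
# HTTPRoute owned by an app team, binding to the shared gateway
# across namespaces via parentRefs.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: app-route
  namespace: app-team
spec:
  parentRefs:
  - name: shared-gateway
    namespace: infra
  hostnames:
  - "app.example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: app-service
      port: 80
```

Note that the route points at the gateway, not the other way around, so app teams can deploy and change routes without touching the infrastructure team's resources.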
Role-Oriented and Multitenant Design
If this concept is taken further, it also allows many teams to share the same gateway. Gateways provide built-in controls for the way they bind with routes, even across namespace boundaries. This gives admins control of the way apps are exposed to clients. The following diagram shows two different teams in their own respective namespaces using the same load balancer (modeled by a gateway resource).
This arrangement allows app owners to define traffic routing, traffic weighting, redirects or health checks, because these are attributes tied closely to their app. The infrastructure owners may want to define which load balancers the apps can use, which TLS certificates are used or which source IPs are allowed to connect, as these are platform-level attributes independent of the application. The separation of concerns may be different across organizations, and the API model also provides flexibility to match different models of ownership.
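As an example of those admin-side controls, a gateway listener can restrict which namespaces may bind routes to it. The sketch below uses a label selector; the label key, certificate name and other identifiers are illustrative, and the `apiVersion` may vary by release:

```yaml
# Gateway whose HTTPS listener only admits routes from namespaces
# the admin has labeled, and whose TLS certificate is owned by the
# infrastructure team rather than by app teams.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: shared-gateway
  namespace: infra
spec:
  gatewayClassName: example-lb
  listeners:
  - name: https
    protocol: HTTPS
    port: 443
    tls:
      certificateRefs:
      - name: infra-owned-cert   # platform-level attribute
    allowedRoutes:
      namespaces:
        from: Selector
        selector:
          matchLabels:
            gateway-access: allowed   # admin-controlled label
```

This keeps platform-level attributes (certificates, which namespaces can be exposed) with the infrastructure owners while app teams stay in control of their own routing rules.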
Multicluster Networking with Gateway
The extensibility of the Gateway API also enables new use cases that were not possible before. Released last week, the GKE Gateway controller from Google Cloud allows HTTPRoutes to reference services across different clusters. This opens up the world of multicluster networking for things like multicluster high availability or blue-green/multicluster traffic splitting. Google’s Gateway controller is able to do this multicluster load balancing using its global network, making routing decisions before traffic even gets into the cluster.
The Road Ahead
While the Gateway API has already shown promise in unifying cluster ingress, there are already proposals for modeling sidecar-based service mesh and TCP/UDP load balancing using gateway and route resources. This would bring a unification of routing APIs, which might lower the barrier to entry for new service mesh users and provide some convergence between L4 and L7 as well.
It is early in the journey for the Gateway API, and there is still plenty of work to do; but thanks to well-defined conformance and a layered API model, it already shows a lot of promise on the long road ahead.
Try It out and Get Involved
There are many resources to check out to learn more:
- Check out the user guides to see what use cases can be addressed.
- Learn about the Google Kubernetes Engine Gateway controller on the Google Cloud blog.
- Find more episodes about the Gateway API on the Learn Kubernetes with Google video series.
- Try out one of the existing Gateway controllers.
- Or get involved and help design and influence the future of Kubernetes service networking!
The New Stack is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Docker.
KubeCon+CloudNativeCon is a sponsor of The New Stack.
Feature image via Pixabay.