HAProxy sponsored this post.
Whenever a new technology gains traction — whether it’s a framework, language, tool, or practice — software developers must distill its true value from the buzz. Many have honed this skill to a fine edge, ready to cut away the meaningless from the meaningful. And with no shortage of new tech, this type of mental weed whacking has become an essential part of the job.
The term API gateway is catching on now. However, even for the most experienced judge of trends, it’s not easy to tell exactly why and how it should be used — especially given the horde of competing products that have flooded the market. Here, I present five tips that will put API gateways into perspective, help you understand this technology better, and ultimately guide you toward success in delivering highly available, secure and observable APIs.
Tip 1: An API Gateway Is a Design Pattern
Of course, an API gateway is a product in the sense that vendors have products with the term API Gateway stuffed into their names. However, it’s best to ignore the marketing buzz around the term and instead think of it as a design pattern, or a common solution to a well-known problem. As a design pattern, it can be expressed succinctly:
An API gateway consolidates many APIs behind a single endpoint, while providing additional capabilities like SSL termination, load balancing, token-based authorization, retry logic, rate limiting, and monitoring.
An API gateway tries to solve the inherent complexity of calling many backend APIs, as many modern websites are prone to do, by presenting a unified interface that condenses multiple APIs into one. Rather than connecting to each API directly, the frontend code only needs to know the location of the API gateway. That makes frontend code more resilient to change, allowing you to scale API servers (or containers) up or down without affecting clients, change the layout of your internal network, and roll out updates more safely.
Because an API gateway fronts all of your services, it’s often augmented with features that cut across services — like SSL termination, load balancing, retry logic, rate limiting, and monitoring. The interesting thing is that you will find nearly all of these characteristics within a modern software load balancer. In fact, there’s nothing wrong with using a load balancer as an API gateway. For instance, you could easily deploy the open source HAProxy load balancer between your clients and services, which has all of these capabilities and can relay API requests to the correct service based on the path in the URL.
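As a sketch of what that looks like, the HAProxy configuration fragment below routes requests to different backend pools based on the URL path. The hostnames, paths, addresses and certificate location are all hypothetical:

```
frontend api_gateway
    # Terminate SSL at the gateway (certificate path is an example)
    bind :443 ssl crt /etc/haproxy/certs/site.pem
    # Route by URL path to the matching service pool
    use_backend be_users  if { path_beg /api/users }
    use_backend be_orders if { path_beg /api/orders }
    default_backend be_catalog

backend be_users
    balance roundrobin
    server users1 192.168.1.10:8080 check
    server users2 192.168.1.11:8080 check
```

Clients only ever see the single frontend address; the `use_backend` rules decide which pool of servers actually handles each request.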
Tip 2: Plan for Secure High Availability
Let’s say that you deploy an API gateway and position your servers on a network behind it. If you run only a single instance of it, you risk creating a single point of failure that could take down all of your services. Plan ahead and run at least two instances. Then, you can mirror the same configuration between them by using a configuration management tool like Ansible or by running file sync software like rsync.
If you’re using HAProxy, you can enable health checks to continuously ping the backend servers and make sure that they’re accessible. Unhealthy servers will be removed automatically and reintroduced after they’re healthy again. You should also enable retries, which will reroute a failed request to another server in the case of a transient network issue.
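A minimal sketch of both ideas in an HAProxy backend, with a hypothetical `/health` endpoint and example server addresses:

```
backend be_orders
    # Actively probe each server's health endpoint
    option httpchk GET /health
    default-server check inter 2s fall 3 rise 2
    # On a failed connection, retry and allow redispatch to another server
    retries 3
    option redispatch
    server orders1 192.168.1.20:8080
    server orders2 192.168.1.21:8080
```

A server that fails three consecutive checks (`fall 3`) is pulled from rotation and returns after two successful checks (`rise 2`), so clients never see it while it’s down.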
Also consider setting up rate limiting to throttle connections or number of requests per client. That protects APIs from overuse — including intentional denial-of-service attacks — while keeping the service available for other clients. Frontend code can be designed to cope with rate limiting, dialing back requests if needed and retrying after a short period of time. The HAProxy load balancer also supports queueing connections when a backend server has reached a defined connection limit. The key is to think in terms of protecting the uptime of the service as a whole, even at the cost of some clients.
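Both techniques can be sketched in HAProxy configuration. The threshold of 100 requests per 10 seconds and the per-server connection limit are arbitrary example values you would tune for your own traffic:

```
frontend api_gateway
    bind :443 ssl crt /etc/haproxy/certs/site.pem
    # Track request rate per client IP in an in-memory table
    stick-table type ip size 100k expire 30s store http_req_rate(10s)
    http-request track-sc0 src
    # Reject clients exceeding 100 requests per 10 seconds
    http-request deny deny_status 429 if { sc_http_req_rate(0) gt 100 }
    default_backend be_api

backend be_api
    # Excess connections beyond maxconn wait in a queue
    server api1 192.168.1.30:8080 check maxconn 50
```

Returning HTTP 429 gives well-behaved clients a clear signal to back off and retry, while `maxconn` queueing shields the server itself from being overwhelmed.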
Tip 3: Avoid Putting Business Logic into Your Gateway
Be cautious about moving business logic into your API gateway. For example, you will find that some vendors include features like API aggregation in their products, which means that when an API call enters the gateway, it is split into calls to multiple backend services. Those services return their results to the gateway, which then combines them into one big result and returns it to the client. The reasoning behind this approach is to reduce latency for the client by making it possible to invoke one API function instead of many, at least from the client’s point of view.
However, the issue with this is that you would be introducing business logic into your API gateway: the logic of a business process workflow. For example, if a customer buys an item from an e-commerce website, there will be several backend systems that need to be updated: inventory, shipping, billing, customer notifications. The sequence in which these steps happen may be important. Whatever the workflow, it meets the definition of business logic because it encodes the business rules of a process. When you move business logic outside of your service code and into external components, such as an API gateway, it typically becomes more difficult to see, harder to test, and in danger of becoming neglected.
The second issue is that by merging all of the results into one, you are defeating the modern tenets of web development — chief among them being the desire to split the UI and service logic into independent components that can be versioned, tested, and delivered apart from one another. Vue.js, React and Angular all elevate components to a higher degree of prominence; microservices do the same on the server side. By returning a single, large response, components lose their independence and become tightly coupled, forcing development teams to coordinate with one another on data formats and naming schemes, which slows the pace of delivery.
There are other patterns for managing a workflow across a distributed system, such as the Saga pattern, which is better for propagating changes in the correct order and allowing the system to roll back in case of an error. Or, if you desire aggregation in order to gradually replace an obsolete service with a new one, you can use the Strangler pattern. However, patterns like these can be implemented in code, keeping them testable and maintainable.
Tip 4: Monitor Your APIs
Monitoring your APIs with logs and metrics can be even more important than with traditional applications. That’s because you need to keep tabs on who is consuming them, and to what degree. How do you know which clients are using a service? When can you finally retire a deprecated API? Which services receive the most traffic, and is their volume cyclical or bursty? Look for a solution whose logging and metrics can answer these questions.
If you decide to use an HAProxy load balancer for this purpose, it comes with logs that capture nearly every aspect of each request and response, and it has built-in support for Prometheus metrics. The extra detail you get from this layer can pay dividends later, especially as services age and must be retired. An API gateway can track information only available from that point in the network, such as the global number of HTTP requests or errors, authorization failures, number of retries, etc. By instrumenting your API gateway upfront, you’ll have all the information you need to manage the lifetime of your services.
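As a sketch, assuming a recent HAProxy version with the bundled Prometheus exporter, enabling request logging and a metrics endpoint takes only a few lines:

```
global
    # Send logs to stdout (useful in containers; syslog is also common)
    log stdout format raw local0

frontend api_gateway
    bind :80
    log global
    # Detailed per-request HTTP log lines
    option httplog
    # Expose Prometheus metrics on the /metrics path
    http-request use-service prometheus-exporter if { path /metrics }
    default_backend be_api
```

In production you would typically serve `/metrics` from a separate, internal-only frontend rather than the public one shown here.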
Tip 5: Abstract Away the Details
When possible, you should strive to give development teams autonomy by abstracting away the details of working with the API gateway directly: a dev team should be able to register a new service with minimal help from the Ops team. There are several ways to achieve this. You could enable service discovery — for example, by using a tool like Consul — which makes new services discoverable by other software running on the network. Then you can use DNS service discovery or consul-template to generate your API gateway configuration from the service metadata registered with Consul.
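For instance, a consul-template snippet like the one below can render HAProxy `server` lines from Consul’s catalog, so that registering a service in Consul is all a team needs to do. The service name "users" is a hypothetical example:

```
backend be_users
    balance roundrobin{{ range service "users" }}
    server {{ .Node }} {{ .Address }}:{{ .Port }} check{{ end }}
```

When the set of healthy "users" instances changes in Consul, consul-template re-renders the file and can trigger an HAProxy reload, keeping the gateway in sync without manual edits.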
In some cases, your API gateway may have an API itself, which you can use to configure it dynamically. For example, HAProxy offers its Data Plane API, which you can use to programmatically add a frontend route and a pool of backend servers. This can be integrated into a CI/CD deployment. Ultimately, your goal is to create a layer of abstraction between the platform and the developers, so that they can register their services themselves. The power to build and deliver independently will accelerate their work, which can lead to happier, more productive teams.
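As an illustration of the idea, a CI/CD pipeline step might call the Data Plane API with commands along these lines. The credentials, port, backend name and server details are placeholders, and exact endpoint paths vary between Data Plane API versions, so treat this as a sketch rather than a reference:

```
# Fetch the current configuration version (required for change requests)
curl -s -u admin:password \
  http://localhost:5555/v2/services/haproxy/configuration/version

# Register a new server in an existing backend at that version
curl -s -u admin:password -X POST \
  -H "Content-Type: application/json" \
  -d '{"name": "users3", "address": "192.168.1.12", "port": 8080, "check": "enabled"}' \
  "http://localhost:5555/v2/services/haproxy/configuration/servers?backend=be_users&version=1"
```

Because the API validates and versions every change, a deployment script can add or remove servers safely without anyone hand-editing the configuration file.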
An API gateway is a design pattern for consolidating APIs behind a single endpoint. It simplifies how frontend logic communicates with backend services, while providing cross-cutting functionality like SSL termination, load balancing and authorization. There are several ways to ensure success with this pattern:
- Plan ahead and deploy a gateway that factors in high availability.
- Enable health checking on backend servers; consider adding rate-limiting or connection queuing to protect against overuse.
- Be cautious about putting business logic into the gateway layer; prefer techniques that keep workflow decisions in the code itself.
- Make the most out of the logging and metrics that come from this layer; and look for ways to abstract away the details of configuring the gateway, in order to accelerate delivery of new services.
Feature image via Pixabay.