From certain angles, the very things that made API management successful have caused its reach to exceed its initial grasp — and that means the API management industry has had to rethink itself.
The demands on API management today, in a cloud native landscape, are different than they were a few years ago, when API management was chiefly concerned with a small number of APIs in a corporate data center and behind an enterprise firewall. Simply put, a world of hybrid architectures requires hybrid API management.
This is a bigger statement than it might appear. In today’s IT landscape — already overstuffed with hype about hybrid cloud, hybrid IT, and “hybrid this” or “hybrid that” — it may be tempting to dismiss concepts such as hybrid API management as just another entry in the buzzword parade. But make no mistake: we on Google Cloud’s Apigee team do not believe that hybrid architectures can be safely or efficiently created without hybrid API management. After all, the cloud native world means businesses rely on software in many locations — and because APIs are how software talks to other software, enterprises must have visibility into and control over their APIs and how they are used.
So what specifically is changing?
API management was initially concerned with a subset of an enterprise’s APIs — typically, the ones that exposed the most valuable data or business functionality and that partners and developers would find most useful for building their applications. Early API management solutions served as a centralized access point for this set of APIs, generally allowing external requests to interface with internal systems, and were designed specifically for the APIs to which enterprises wanted to manage access.
This approach was fine for a subset of APIs running in the same place but preceded the rise of cloud computing. Now, many businesses have APIs in multiple clouds or distributed through increasingly heterogeneous architectures. Moreover, many enterprise leaders, having seen the benefits of managing their most valuable APIs, increasingly want to handle all APIs the same way, via self-service tools that offer consistent visibility, security, and analytics capabilities.
Handling all APIs the same way means handling APIs across hybrid deployments — which is where the earlier, effective but centralized iterations of API management began to show their limits.
For example, some businesses kept backends and APIs in their own data centers while running API management and analytics services in the cloud, a tactic that could introduce latency — and ruin experiences for end users — as data was round-tripped to the cloud and back. Other businesses experimented with full installations of API management software wherever they had workloads, which solved some problems but occupied a massive footprint — often dozens of servers — and could increase operational complexity. On-prem, cloud, virtual machines, containers, monoliths, microservices — the variety of technologies involving APIs only continued to grow.
These kinds of challenges have spurred innovation. For example, some businesses are now addressing hybrid API management via microgateways: lightweight, federated gateways that keep API runtimes in the data center while pushing analytics data to a management layer in the cloud. Microgateway capabilities are generally not as robust as those of a full API management installation; the footprint needs to be smaller so the microgateway can be deployed more flexibly. But they ensure that the benefits of API management — enterprise-grade security, visibility, analytics, and policy definition and enforcement — are brought to every API in the organization, not just a subset.
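The division of labor behind the microgateway pattern — enforce policies locally on the request path, ship analytics to the cloud asynchronously — can be sketched in a few lines. The following is a minimal illustration, not any vendor's implementation; all names (VALID_KEYS, ANALYTICS_BATCH_SIZE, the functions themselves) are hypothetical.

```python
import queue
import time

# Illustrative stand-ins: in practice, keys and policies would be
# synced down from the cloud management plane, not hard-coded.
VALID_KEYS = {"partner-key-123"}
ANALYTICS_BATCH_SIZE = 2  # kept tiny here for demonstration

# Thread-safe buffer for analytics events awaiting upload.
analytics_queue = queue.Queue()

def enforce_policy(api_key):
    """Local policy check: no round trip to the cloud on the hot path."""
    return api_key in VALID_KEYS

def record_call(path, api_key, allowed):
    """Queue an analytics event; a background worker would upload batches."""
    analytics_queue.put({
        "path": path,
        "key": api_key,
        "allowed": allowed,
        "ts": time.time(),
    })

def drain_batch():
    """Pull up to one batch of queued events, as an uploader thread might."""
    batch = []
    while len(batch) < ANALYTICS_BATCH_SIZE and not analytics_queue.empty():
        batch.append(analytics_queue.get())
    return batch

def handle_request(path, api_key):
    """The gateway's request path: enforce locally, log asynchronously."""
    allowed = enforce_policy(api_key)
    record_call(path, api_key, allowed)
    return 200 if allowed else 401
```

The key design point is that `handle_request` never waits on the cloud: enforcement uses locally synced state, and analytics events only leave the data center in batches, off the request path — which is how microgateways avoid the latency penalty described above.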
API management is not unique in these trends. Whether for technical, business, or regulatory reasons (or all three), enterprise leaders want deployment flexibility and streamlined operational models across their IT portfolios — a desire evidenced in the enthusiasm around containers, microservices, and other building blocks of the cloud native world. But even if API management’s evolution mirrors trends elsewhere in IT, hybrid API management arguably presents more complex challenges than many other hybrid technologies, because it shouldn’t apply just to some APIs in some clouds but to all APIs, everywhere — from APIs exposing legacy mainframes to APIs used to manage microservices in the cloud.
This complexity may be daunting, but more than any single new feature, use case, or approach, the shift to hybrid architectures has set a clear trajectory for the API management industry: API management needs to get small, moving from a big data-center footprint to the lightweight portability, simple deployment, and easy scalability of containers.
That’s not to say enterprises aren’t still deploying centralized API management installations in their corporate data centers — they are, and they’re generating value from them. But API management built for the future must enable distributed deployments because the APIs that need to be managed are increasingly decentralized.
This means that someday very soon, we expect all API management capabilities will be deployable in a container, as flexible and portable as anything else in the cloud native world. Why? Because APIs are everywhere, all APIs need to be managed, and, to maximize security and performance, APIs should be managed next to the code that produces them. That is the business need, and that is the vision of the API management industry.
These are major transformations — but to be clear, there is no specific destination on this journey. Embracing containers alone introduces enormous variety, including opportunities to integrate API management capabilities more closely with popular open-source projects such as Kubernetes and Istio. The journey will continue — and there’s no going back to the old world in which API management merely acted as a gatekeeper to a handful of centralized APIs.