Is Serverless Just a Stopover for Event-Driven Architecture?
I recently reviewed the State of Cloud Native Development report by SlashData, supported by the Cloud Native Computing Foundation, which shows a decline in the use of cloud native technologies, serverless among them, from the first quarter of 2020 to the first quarter of 2021. If I were to venture a guess, should they conduct a similar poll in the first quarter of 2022, serverless use will at best hold steady and will more likely fall as a percentage of overall developers.
This makes me think that the hype around serverless has died down, though that doesn’t mean that serverless isn’t going to be around for a long time.
Perhaps it’s as CNCF CTO Chris Aniszczyk told SDX Central: “The trend reflects growing concern that serverless technologies lack the flexibility needed for widespread adoption and a reluctance among organizations to commit to specific technologies or providers.”
This got me to reflect on another cloud trend from a few years back, platform as a service (PaaS). For a while, AWS Elastic Beanstalk, Heroku and Cloud Foundry were touted as the future of application deployment, but they too suffered from a templated approach that lacked the flexibility to address the broader needs of enterprises.
The situation is very similar today: serverless itself may be a more limited subpattern of a much bigger trend, event-driven architecture.
Serverless Isn’t a Failure, It’s an Implementation Detail
Serverless adoption grew very quickly early on, with massive adoption among Node.js users who wanted to deploy their applications quickly in the cloud, primarily on AWS Lambda. Soon thereafter, Lambda and other function-as-a-service providers offered more runtimes and virtually any modern code or even some long-deployed legacy code could be executed in serverless frameworks.
Serverless does illustrate many desirable traits. It is easy to scale up and scale down. It’s triggered by events that are pushed rather than retrieved via a polling mechanism. Functions consume only the resources a job needs, then exit and free up those resources for other workloads. Developers benefit from the abstraction of infrastructure and can deploy code easily via their CI/CD pipelines without concern for how to provision resources.
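These traits can be seen in the shape of a function-as-a-service handler: the platform pushes an event in, the function does its work statelessly and releases its resources on return. The sketch below follows the AWS Lambda Python handler convention, but the event payload (an order with line items) is purely hypothetical, for illustration.

```python
import json

def handler(event, context=None):
    """A stateless, event-triggered function in the AWS Lambda style.

    The platform pushes the event in; the function holds no state
    between invocations and frees its resources when it returns.
    """
    # Hypothetical payload: an order event whose line items we total up.
    items = event.get("detail", {}).get("items", [])
    total = sum(item["price"] * item["quantity"] for item in items)
    return {
        "statusCode": 200,
        "body": json.dumps({"order_total": total}),
    }

# Locally we can simulate the platform's push by invoking the handler directly.
sample_event = {"detail": {"items": [{"price": 4.5, "quantity": 2},
                                     {"price": 10.0, "quantity": 1}]}}
print(handler(sample_event))
```

In a real deployment the CI/CD pipeline would package this function and the cloud provider would wire the event source to it; nothing about provisioning appears in the code itself, which is the appeal.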
However, the point that Aniszczyk alludes to is that serverless isn’t designed for many situations, including long-running applications, which can actually be more expensive for the end user than running a dedicated application in containers, in a VM or on bare metal. As an opinionated solution, it forces developers into the model facilitated by the vendor. In addition, serverless doesn’t have an easy way to handle state.
Finally, though serverless workloads largely run in the cloud, they aren’t easily deployed across cloud providers. The tooling and mechanisms for managing serverless are very much specific to each cloud, though with the donation of Knative to the CNCF, a serverless platform could perhaps be developed and deployed with the support of the industry, much as Kubernetes has been.
The point is that many of the things that have made serverless successful are traits that actually apply to a much more interesting and bigger trend in cloud native computing: event-driven architecture (EDA).
Event-Driven Architecture in Cloud Native Computing
I believe that the success of serverless had more to do with benefits that apply to a broad set of use cases transcending serverless. First, the idea that systems can act on data as it changes in real time is a big advantage over legacy batch-processing applications. The consumption of asynchronously generated events is nonblocking, allowing applications to continue working without waiting on a response. Nor is it necessary to poll applications: consumers subscribe to the data, and that lack of chattiness reduces network I/O.
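The nonblocking, push-style consumption described above can be illustrated with a small asyncio sketch: the consumer simply awaits events as they arrive rather than polling an API, and unrelated work keeps running in the meantime. All names here are illustrative, not part of any particular framework.

```python
import asyncio

async def producer(queue: asyncio.Queue) -> None:
    # Emit a few events asynchronously, as an upstream system would.
    for i in range(3):
        await asyncio.sleep(0.01)   # events arrive at their own pace
        await queue.put({"event_id": i})
    await queue.put(None)           # sentinel: no more events

async def consumer(queue: asyncio.Queue, handled: list) -> None:
    # Await pushed events; no polling loop generating chatty network I/O.
    while (event := await queue.get()) is not None:
        handled.append(event["event_id"])

async def other_work(log: list) -> None:
    # The rest of the application is never blocked by event consumption.
    log.append("did unrelated work")

async def main() -> list:
    queue: asyncio.Queue = asyncio.Queue()
    handled: list = []
    await asyncio.gather(producer(queue), consumer(queue, handled),
                         other_work(handled))
    return handled

results = asyncio.run(main())
print(results)
```

Note that the unrelated work finishes before the first event even arrives, which is exactly the point: a subscriber waits cheaply, while a poller would have burned round trips the whole time.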
I have spent a considerable amount of time thinking about this topic, and at TriggerMesh, we were early adopters of serverless. Our first thought was that a lack of consistent tooling across cloud providers would cause lock-in. This is happening today. Not only that, but not all serverless runtimes are created equal, preventing migration of functions from one provider to another. Additionally, very few of these cloud providers have a mechanism to trigger services from one cloud or on-premises environment to another. We originally looked at how we could publish serverless functions to different clouds, but this was cumbersome and required runtimes from one cloud to be ported to another. What we found was that the real need of end users was the ability to create data pipelines that make data actionable, rather than simply putting it in motion.
Beyond portability, we saw that triggering these functions became a nightmare the second you wanted to use inputs from a system outside your serverless provider. We realized that tools like Google Eventarc and AWS EventBridge lacked the flexibility needed to build broader event-driven applications. So we developed an open source alternative to EventBridge that could not only send data streams to any serverless platform but could also be used to more easily build data pipelines to feed data lakes and perform other types of data synchronization.
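The pattern behind such an event router reduces to a broker that matches incoming events against trigger filters and dispatches them to targets. The sketch below is a toy illustration of that idea, not TriggerMesh's or EventBridge's actual implementation; the event attributes and targets are hypothetical.

```python
from typing import Callable

class Broker:
    """Toy event broker: each trigger pairs a filter with a target callable."""

    def __init__(self) -> None:
        self._triggers: list[tuple[dict, Callable[[dict], None]]] = []

    def add_trigger(self, event_filter: dict, target: Callable[[dict], None]) -> None:
        # A trigger fires when every key/value in its filter matches the event.
        self._triggers.append((event_filter, target))

    def send(self, event: dict) -> None:
        # Every matching trigger receives the event; the sender names no targets.
        for event_filter, target in self._triggers:
            if all(event.get(k) == v for k, v in event_filter.items()):
                target(event)

# Route creation events to a function target and all storage events to a data lake.
received: dict = {"function": [], "lake": []}
broker = Broker()
broker.add_trigger({"type": "object.created"}, received["function"].append)
broker.add_trigger({"source": "storage"}, received["lake"].append)

broker.send({"source": "storage", "type": "object.created", "key": "report.csv"})
broker.send({"source": "storage", "type": "object.deleted", "key": "old.csv"})
```

Because producers only describe events and triggers only describe interests, the same broker can feed a serverless function on one cloud and a data lake on another, which is the flexibility the hosted routers constrain.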
Event-Driven Data Syncs and Workflows
I believe that we are moving toward an event- and data-driven future, where the ability to act in real time on data is becoming a requirement for doing digital business. The first part of the equation requires data streaming technologies that are similar to AWS Kinesis but not specific to a single vendor. Apache Kafka and Apache Pulsar fit the bill as open source, cloud-agnostic ways to put data in motion. Then the next step is to adopt publish-subscribe communication across microservices rather than making REST calls to APIs.
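The shift from REST calls to publish-subscribe can be sketched in a few lines: instead of a service calling each downstream API in turn, it publishes once to a topic and any number of subscribers react independently. This is a toy in-process illustration; in practice the topic would live in Kafka or Pulsar, and the subscriber names are hypothetical.

```python
from typing import Callable

class Topic:
    """Toy in-process topic; in production this role is played by Kafka or Pulsar."""

    def __init__(self, name: str) -> None:
        self.name = name
        self._subscribers: list[Callable[[dict], None]] = []

    def subscribe(self, handler: Callable[[dict], None]) -> None:
        self._subscribers.append(handler)

    def publish(self, event: dict) -> None:
        # One publish fans out to every subscriber; the producer knows none of them.
        for handler in self._subscribers:
            handler(event)

# Three downstream services subscribe instead of exposing REST endpoints to be called.
audit_log, billing, analytics = [], [], []
orders = Topic("orders")
orders.subscribe(audit_log.append)
orders.subscribe(lambda e: billing.append(e["amount"]))
orders.subscribe(lambda e: analytics.append(e["order_id"]))

orders.publish({"order_id": "o-1", "amount": 42.0})
```

Adding a fourth consumer requires no change to the publisher, whereas the REST equivalent would mean another synchronous call, another timeout to handle and another deployment coupling.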
The future of the cloud is not necessarily all-in-one vendors. We’ve been down that road before, when users sacrificed the freedom to choose best-in-class solutions for the convenience of a preassembled stack from one vendor who provided one “throat to choke.” The future is composable systems of best-of-breed technologies rather than stacks from a single vendor. The new design pattern for cloud native users is composable infrastructure and, consequently, composable applications: an amalgamation of components from various vendors, connected via event streams that are used to create automated workflows.
In the survey conducted by Coleman Parkes called “The Great EDA Migration,” the majority of organizations, 85%, recognize the critical business value in adopting event-driven architecture, but adoption is still in its early days, as only 13% claim to have achieved full EDA maturity. According to this study, the three most common applications of real-time data are application integration (the serverless use case falls under this), data sharing across applications, and connecting IoT devices for data ingestion and/or analytics.
For many reasons, the time to consider building event-driven architecture is now. If you are looking to increase the freshness of your data and improve your digital interactions, an EDA can yield improvements, and the tooling to build one is better than ever. First, there is a growing number of free and open source tools that allow enterprises to create data streams; Apache Kafka and Apache Pulsar are at the top of the list. In addition, applications that are developed cloud natively can be ported from on-premises data centers to virtually any cloud offering Kubernetes. Finally, there are additional tools, like TriggerMesh’s open source cloud native integration platform, that provide multicloud capabilities to create and manage data pipelines, replicating that “EventBridge” capability from AWS. I believe that as enterprises grow hungrier for real-time information and the number of systems that need to consume that data grows, migration to the event-driven architecture pattern will grow as well. Luckily, there is a growing number of alternatives that let them do so without being locked into a proprietary cloud service or software vendor.