Microsoft and Red Hat’s KEDA Brings Event-Driven Autoscaling to Kubernetes

As event-driven architectures become more common, a number of projects have set out to bring serverless to Kubernetes. Azure Functions already runs on Kubernetes, but like other functions runtimes it has relied on the standard horizontal scale up and scale down of containers, based on the CPU and memory consumption of a container. That doesn’t fit the event-driven nature of serverless.
“When Kubernetes is trying to decide how many pods to run on each machine it’s just looking at how much CPU and memory is being consumed,” explained Jeff Hollan, senior program manager for Azure Functions. “It’s very reactive; it’s trying to guess how many instances will be needed.”
That’s too slow for scaling serverless functions, which need to react to signals coming from the event source rather than waiting until system resources are struggling, and it doesn’t reflect how you might choose to do load balancing. Azure Functions, for example, scales by watching the event queue: if a queue has a hundred messages waiting, it spins up four or ten instances rather than just one, no matter what the CPU load looks like.
KEDA (Kubernetes-based event-driven autoscaling) is a new open source project from Microsoft and Red Hat aimed at bringing that kind of scaling to Kubernetes, enabling containers to scale from zero to a thousand instances based on event metrics like stream lag or queue length.
KEDA acts both as an agent that activates and deactivates deployments and as a Kubernetes metrics server exposing event data to the Horizontal Pod Autoscaler. “It’s an additional component monitoring your event source and feeding that data back to Kubernetes system so Kubernetes knows about your queue and your event hub,” Hollan explains. “You do a one-time install of KEDA on your cluster and KEDA monitors new containers that are KEDA enabled and scales them like a function.”
You can set how often KEDA polls for new messages and how it should scale up and cool down, and the deployments then handle the events directly, so events don’t have to be converted to HTTP requests, a conversion that can lose content and prevents the code from talking directly to the event source. Similarly, using KEDA doesn’t require any changes to the deployed code (although the code does need to be packaged in a Docker container).
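To make that concrete, here is a minimal sketch of registering a KEDA ScaledObject (the custom resource KEDA watches to know which deployment to activate, deactivate and scale, and which event source to poll) using the Kubernetes Python client. The deployment name, queue name and trigger metadata are invented for the example, and the exact API group, version and field names vary between KEDA releases, so treat this as a sketch rather than a canonical manifest.

```python
# A minimal sketch, assuming a cluster with KEDA installed and an existing
# "orders-processor" deployment. Names, queue and metadata keys here are
# illustrative; field names and the API group/version differ between
# KEDA releases.
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running in a pod

scaled_object = {
    "apiVersion": "keda.sh/v1alpha1",
    "kind": "ScaledObject",
    "metadata": {"name": "orders-scaler", "namespace": "default"},
    "spec": {
        # The deployment KEDA should activate, deactivate and scale.
        "scaleTargetRef": {"name": "orders-processor"},
        "pollingInterval": 15,   # seconds between checks of the event source
        "cooldownPeriod": 120,   # idle seconds before scaling back down
        "minReplicaCount": 0,    # scale all the way to zero when idle
        "maxReplicaCount": 100,
        "triggers": [
            {
                # Watch an Azure Storage queue; connection details would be
                # supplied separately (for example via a TriggerAuthentication
                # resource or an environment variable on the deployment).
                "type": "azure-queue",
                "metadata": {
                    "queueName": "orders",
                    "queueLength": "5",  # target messages per replica
                },
            }
        ],
    },
}

# KEDA exposes these numbers to the Horizontal Pod Autoscaler as external
# metrics, so the autoscaler scales on queue depth instead of CPU and memory.
client.CustomObjectsApi().create_namespaced_custom_object(
    group="keda.sh",
    version="v1alpha1",
    namespace="default",
    plural="scaledobjects",
    body=scaled_object,
)
```

Once the ScaledObject exists, KEDA itself handles activation from zero to one replica, and hands the event metrics to the Horizontal Pod Autoscaler for scaling beyond that.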
KEDA already works with Kafka, Azure Queues, Azure Service Bus, RabbitMQ, Azure Event Grid and CloudEvents (which connects it to many more services). Microsoft will add triggers for Azure Event Hubs, Storage, Cosmos DB and Durable Functions, and the set of event sources can also be extended by the community.
KEDA can scale any container or deployment, and other serverless projects could take advantage of it; the open source Azure Functions runtime already integrates with it.
Red Hat and Microsoft have both been contributing to the upstream KEDA project, and Red Hat is also using KEDA to scale Azure Functions on the OpenShift Container Platform as a developer preview. That integration is built with Red Hat’s Operator Framework toolkit and will be available in the OperatorHub.io gallery later this year. “It’s designed to behave the same way it does when running on Azure as a managed service, but now running anywhere OpenShift runs, which means on the hybrid cloud and on-premises,” explained Red Hat’s William Markito Oliveira. “Users of Azure services and other cloud providers can send events through their services and process those events with Azure Functions in a portable way, reducing lock-in concerns.”
“In practical terms KEDA enables applications to scale based on demand by polling a queue or topic from an event source that lives on the cloud, such as Azure Queues, or on prem, such as Kafka,” Oliveira noted in a follow-up email. “As an example, when a user adds an item to a shopping cart, KEDA will be actively monitoring the shopping cart and trigger a container. This is a very simple example but it gets really interesting for systems with bursty and unpredictable characteristics. When a shopping cart system has an increase in demand and 1000 people add multiple items to it, KEDA will auto-scale your application to match such demand. When idle after processing those events, it scales down, even going back to zero containers, which is one key benefit of serverless solutions.”
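A correspondingly hedged sketch of what Oliveira’s shopping-cart scenario might look like as a ScaledObject spec, this time driven by a Kafka topic: the deployment name, topic, consumer group and lag threshold are invented for the example, and the metadata keys depend on the KEDA version and Kafka scaler in use.

```python
# Illustrative spec fragment for the shopping-cart example: scale the
# hypothetical "cart-processor" deployment on Kafka consumer lag, bursting
# up under load and dropping back to zero replicas when the topic is idle.
cart_scaler_spec = {
    "scaleTargetRef": {"name": "cart-processor"},
    "minReplicaCount": 0,      # idle: no containers running at all
    "maxReplicaCount": 1000,   # burst: absorb a sudden spike in shoppers
    "triggers": [
        {
            "type": "kafka",
            "metadata": {
                "topic": "cart-events",
                "consumerGroup": "cart-processor",
                "lagThreshold": "50",  # unprocessed events per replica
            },
        }
    ],
}
```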
“We know a lot of people are anxious about lock-in and thinking about how to run in different environments,” Hollan noted. “KEDA is about taking the Functions programming model and all the productivity it brings and you can run it wherever makes most sense — and that might be Kubernetes. If you want to take your toys and run in a different cloud, you can do that and we’ll give you the tools to do it.”
Beyond HTTP
“A lot of the time we think about autoscaling and scale to zero functions as HTTP centric,” Gabe Monroy, head of Microsoft’s cloud native compute team, told The New Stack. “It turns out in Azure Functions that 60 to 70% of executions are event-driven, not HTTP-driven. Things like Event Grid, a service bus and those different queuing systems that are providing events; that’s a very common pattern.”
For triggers like HTTP that push requests to the container (which make up the other 30% of the workloads Functions sees), you can use Knative Serving or Osiris, the open source project Microsoft announced at KubeCon last year, alongside KEDA. “KEDA is event-driven scaling for Functions, Osiris is HTTP-driven scaling for Functions. Osiris is the HTTP version of scale-to-zero, KEDA is the event-driven, queue-driven, custom metric-driven version,” explained Monroy.
That’s a deliberate strategy of building open source components that are “small, scoped and do one thing well” to give the Kubernetes community options they can mix and match to get what they need, an approach that he suggests hearkens back to Docker’s “batteries included but removable” promise.
“Allowing these things to be assembled and composed separately is key to making sure that we get the open source ecosystem to maximize this opportunity, because if you build stuff that’s too big it ends up being hard to treat it as a library,” he points out.
Serverless Kubernetes: A Cross-Industry Project
Red Hat wanted to have the Azure Functions experience brought to OpenShift, Monroy told us. “Red Hat is looking at this whole pool of customers who are betting on Kubernetes who want to be building Functions-style apps. Things like Knative and some other technologies are certain cuts at this, but none of that stuff has the level of polish and the level of production usage that Functions brings to the table. Functions has been container native since almost day one.”
The combination of KEDA, OpenShift and Functions gives you the choice of running serverless as a managed service in the cloud, as an integrated option in OpenShift or DIY on any Kubernetes cluster, wherever it is.
Last year, Kubernetes co-founder Brendan Burns said that the future of Kubernetes is serverless. Enterprise customers are already asking for that, says Monroy. “They want Kubernetes services that don’t have the virtual machine overhead, that are pay per second, that have the fast scaling capabilities.”
If that turns out to be what a majority of Kubernetes adopters want, then it’s important for it to be based on a project with multiple organizations involved, suggests John Montgomery, director of Microsoft’s developer division. “It can’t just be one vendor doing this.”
“Serverless is a game changer in deep, deep ways,” Montgomery told The New Stack. “Serverless capable of running in a container with the developer experience we’ve been working on for the past 5 years gives you developer productivity with the power of serverless. And then you run on top of Kubernetes and all of a sudden you get this cloud scale capability to spin up and destroy containers. KEDA can also leverage all the other things we’ve been doing with AKS like the serverless containers work. You get transportability, you get the spin-up-spin-down of containers plus the spin-up-spin-down of Functions plus the developer productivity. Put all that together and that technology is a game changer for a class of applications — but I think we’ve just begun to figure out what that class of application is.”
Red Hat is a sponsor of The New Stack.