KEDA Brings Event-Driven Autoscaling to Kubernetes
The Kubernetes Event-Driven Autoscaling (KEDA) open source project has come a long way since its debut last May at the Red Hat Summit, when joint creators Microsoft and Red Hat first open sourced the project. Last November saw the release of KEDA 1.0 at KubeCon+CloudNativeCon North America, and now KEDA is taking its first steps toward donation to the Cloud Native Computing Foundation (CNCF), having proposed joining as a Sandbox-level project in January.
Jeff Hollan, a KEDA contributor who is also a principal program manager for serverless Azure Functions at Microsoft, called the 1.0 release a “ready for production milestone” that he said has led to an exciting several months of growth and adoption, as evidenced by several factors.
“We had some good contributions along the way before we hit 1.0, but since then it’s just been a number of different customers and partners. Our community calls are constantly getting filled up with about a dozen folks who are using KEDA, asking questions on KEDA, and, what’s really exciting to me, contributing to KEDA. We’re getting so many pull requests a week adding some innovative new features that it’s really exciting to see,” Hollan told The New Stack. “In many ways, it seems to be accelerating. We’re really looking forward to the CNCF donation to keep the momentum going.”
KEDA offers an alternative to Kubernetes’ standard scaling method, which looks at indicators such as the CPU load and memory consumption of a container. From the KEDA perspective, this method is reactive rather than proactive. KEDA attempts proactivity by instead scaling up according to indicators such as message queue length in event sources like Kafka, Azure Service Bus, or RabbitMQ, much as serverless platforms do.
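In practice, this queue-driven scaling is configured declaratively with a KEDA `ScaledObject`. The sketch below is illustrative only — the deployment name, queue name, and connection string are hypothetical, and KEDA’s API group and field names have changed across releases (early 1.x versions used `keda.k8s.io/v1alpha1` and `deploymentName`) — but it shows the general shape of scaling a consumer on RabbitMQ queue length:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: order-consumer-scaler       # hypothetical name
spec:
  scaleTargetRef:
    name: order-consumer            # the Deployment to scale (hypothetical)
  minReplicaCount: 0                # scale to zero when the queue is empty
  maxReplicaCount: 30
  triggers:
    - type: rabbitmq
      metadata:
        queueName: orders           # hypothetical queue
        mode: QueueLength
        value: "20"                 # target messages per replica
        hostFromEnv: RABBITMQ_HOST  # connection string supplied via env var
```

KEDA watches the queue directly and drives the replica count from its length, rather than waiting for CPU or memory pressure to build up inside the pods.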
Since hitting 1.0, Hollan says, KEDA has seen the number of available event sources increase to nearly 20, with many contributed by the community, and the list continues to grow at a rate of two or three a month. Another feature contributed by community members centers on scaling down, rather than scaling up.
“There’s been common consistent problems that folks in Kubernetes, in general, have faced — it’s not unique to KEDA necessarily — but one of the big questions we heard within the first few months of development was, KEDA works great to give me this really fast serverless scale within Kubernetes, but there’s a downside sometimes to when you scale super quickly and then you potentially scale back in really quickly. What if I don’t want you to spin it down that quickly?” said Hollan. “What if I am doing something like transcoding some audio that might take a few hours to complete?”
The new feature, he says, was started by the community and uses something called a “Kubernetes Job” to identify workloads that should not be scaled down until they are complete. Previously, KEDA was only watching the incoming event indicators, and once they were gone, it would start to scale down. Now, that behavior can be toggled on or off. One feature the project is looking at developing in the future, said Hollan, is to look earlier than event sources for scale indicators by using predictive technologies.
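The Jobs-based behavior Hollan describes can also be sketched as a manifest. This example is an assumption-laden illustration — the resource is a `ScaledJob` as in later KEDA releases (in KEDA 1.x this shipped as a `ScaledObject` with a job scale type), and the image, queue, and environment variable names are made up — but it conveys the idea of long-running work that must not be killed mid-task:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledJob
metadata:
  name: audio-transcoder            # hypothetical name
spec:
  jobTargetRef:
    template:
      spec:
        containers:
          - name: transcoder
            image: example/transcoder:latest   # hypothetical image
        restartPolicy: Never        # each Job runs once, to completion
  maxReplicaCount: 10
  triggers:
    - type: azure-servicebus
      metadata:
        queueName: transcode-requests          # hypothetical queue
        connectionFromEnv: SERVICEBUS_CONNECTION
```

Each queued message spawns a Job that runs until it finishes — hours of transcoding included — instead of a Deployment replica that KEDA might scale away as soon as the queue drains.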
“What if every Friday at 5 p.m. the time card system is going to drop a bunch of time cards that need to get processed? Today, KEDA, as soon as those time cards start coming in, it’s going to preemptively start scaling Kubernetes. But what if even before that you had some machine learning or artificial intelligence running inside that’s like, maybe we could proactively scale,” said Hollan. “That’s something that’s in the realm of possibility and we’ve been working with some teams to start to explore what that might look like and how that could become a reality.”
As for the donation to the CNCF, Hollan says it is something that has been in discussion since last October, although it had been in consideration since the beginning of the project, and he sees it as a commitment to the community.
“In donating to the CNCF, that’s Microsoft and Red Hat and everyone saying, we have no ownership of this code, we don’t own the trademark, we don’t own the direction of this project,” said Hollan. “This is something that we really want to have the community help lead and to help drive, which is a vote of confidence in the community, but also we feel is the best way to write this type of software.”
Currently, KEDA is scheduled to present to the Runtime Special Interest Group (SIG) at its Feb. 20 meeting, after which the project hopes to go before the Technical Oversight Committee (TOC) for a vote. The process for acceptance at the Sandbox level of the CNCF is a new one. Previously, Hollan explained, a project would simply present directly to the TOC, which required two votes for acceptance; projects now need to present to a SIG before being considered by the TOC.
Looking ahead, Hollan said that there are a number of CNCF projects KEDA hopes to work closely with, including Virtual Kubelet and the Service Mesh Interface (SMI), which he said was also in the process of being donated to the CNCF, as well as Kubernetes itself. Perhaps, Hollan suggested, some scaling patterns that today require installing KEDA could eventually become part of the Kubernetes core project.