
Kubernetes-Autoscaling KEDA Moves into CNCF Incubation

24 Aug 2021 5:00am

KEDA, the Kubernetes Event-Driven Autoscaler project, has moved on from the sandbox tier at the Cloud Native Computing Foundation (CNCF) this week, joining the 21 other projects in incubation, such as Argo, Falco, gRPC and Rook.

First created in 2019 by Microsoft and Red Hat, KEDA joined the CNCF in March 2020, and since then has seen the release of KEDA 2.0 and been adopted by companies such as Alibaba, CastAI, KPMG, Meltwater, Microsoft and others.

KEDA consists of two primary components: the KEDA agent, which activates and deactivates Kubernetes deployments to scale them to and from zero, and the metrics server, which exposes event data to the Horizontal Pod Autoscaler for scaling out. KEDA can be added to any Kubernetes cluster, providing event-driven autoscaling based on data supplied by a scaler, which serves as an integration between KEDA and a variety of databases, messaging systems, telemetry systems, CI/CD tools and more.

During its time in the sandbox, KEDA increased the number of available scalers from 15 to 37, and KEDA maintainer Tom Kerkhove says more are on the way. Currently, applications can scale not only on basic metrics such as CPU or memory usage, but also on information provided by an Apache Kafka topic or by Prometheus metrics, for example, and Kerkhove says an HTTP-based autoscaler is also in progress.
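As a rough illustration of how this works in practice, KEDA is configured through a ScaledObject custom resource that ties a workload to one or more scalers. The deployment name, Prometheus address and query below are hypothetical placeholders, not taken from the article:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-app-scaler
spec:
  scaleTargetRef:
    name: my-app            # the Deployment to scale (hypothetical)
  minReplicaCount: 0        # KEDA can scale the workload down to zero
  maxReplicaCount: 10
  triggers:
    - type: prometheus      # one of KEDA's 37 scalers
      metadata:
        serverAddress: http://prometheus.monitoring:9090
        threshold: "100"
        query: sum(rate(http_requests_total{app="my-app"}[2m]))
```

With a resource like this applied, the KEDA agent handles activation from zero, while the metrics server feeds the scaler's data to the Horizontal Pod Autoscaler to scale out beyond one replica.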

Beyond adding to the number of scalers, KEDA also spent its time in the CNCF sandbox rearchitecting its security model to separate authentication from scaling configuration, adding the TriggerAuthentication and ClusterTriggerAuthentication custom resources.

“For example, if you want to reuse that identity across multiple applications, if you want to have the separation between dev and ops, or if you want to use secrets from a HashiCorp Vault, by using TriggerAuthentication, you can do that,” said Kerkhove.

Similarly, ClusterTriggerAuthentication means that “one person can define how to authenticate with, let’s say Microsoft Azure, and then everybody inside the cluster can use that identity if they want to,” explained Kerkhove.
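A minimal sketch of what that separation looks like (the names below are illustrative, not from the article): a TriggerAuthentication pulls credentials from a Kubernetes Secret, and a ClusterTriggerAuthentication is defined the same way but can be referenced from any namespace in the cluster.

```yaml
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: kafka-auth              # hypothetical name
spec:
  secretTargetRef:              # credentials live in a Secret, not in the ScaledObject
    - parameter: username
      name: kafka-credentials   # hypothetical Secret
      key: username
    - parameter: password
      name: kafka-credentials
      key: password
```

A trigger in a ScaledObject then points at this resource through its `authenticationRef` field, so multiple applications can reuse the same identity without embedding credentials in their scaling configuration.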

In its move to the incubation tier of the CNCF, Kerkhove said that the due diligence process helped the project iron out some governance issues. For example, maintainers from the same company now share a single vote, which helps prevent any one company from holding a majority.

Looking ahead, KEDA has many plans, including a potential adoption into the Kubernetes project itself, but Kerkhove said that this is still off in the distance.

“Eventually we want to do this and more, but making a change in Kubernetes is hard for a good reason, because it’s a sustainable product. You need to make sure that if you change it, you will not break anybody,” said Kerkhove.

One thing holding KEDA back from this end goal, explained Kerkhove, is that KEDA can only run as a single instance on Kubernetes, which means it cannot be highly available. This is because of a Kubernetes limitation, and Kerkhove said that “what we’re trying to do is look at that Kubernetes limitation and see if we can fix that, so that both Kubernetes and KEDA now benefit from it.” For now, the project will continue iterating on its own while considering making upstream contributions of parts of the project.

Other potential plans, said Kerkhove, include separating out part of the Service Mesh Interface (SMI) spec, a fellow CNCF project, and broadening its use beyond service meshes.

“We’re trying to see if there’s a place in the community to introduce a new standard for traffic metrics so that KEDA can rely on one specification and basically serve the full customer base and with all the scenarios,” said Kerkhove. “We want to take that traffic metric API, take it out of the SMI spec and create a traffic metrics spec.”

One final near-term goal, said Kerkhove, is predictive autoscaling.

“There’s nothing started there yet, because we only came up with the plans recently, but it’s certainly something we want to do as the next major feature,” he said. “This comes back to using data to be cost efficient and saving the environment by doing so.”

On saving the environment, Kerkhove noted a panel of interest at the upcoming KubeCon+CloudNativeCon North America 2021 in October: “How event-driven autoscaling in Kubernetes can combat climate change.”
