IBM Open Sources Razee, a Pull-Based Kubernetes Continuous Delivery Tool
When IBM set out to build its IBM Kubernetes Service (IKS) a couple of years back, it found itself lacking a primary component: a continuous delivery system that could provide the scale, speed, and visibility the service required. As Daniel Berg, an IBM distinguished engineer with IBM Cloud and Istio, explained in an interview with The New Stack, the team found itself in a conundrum that could only be solved by building its own continuous delivery (CD) software. IBM open sourced the project, now called Razee, during last month’s KubeCon + CloudNativeCon in Barcelona.
“We fell into the pit, if you will, of building a set of microservices that we ended up delivering as a monolith. We fell into that trap,” said Berg. “Every team would build their individual microservices and develop them on a daily basis, but then when we would go to do our continuous integration and continuous delivery process, we tried to do heavy testing of all the components working together and then roll it out into our environment as a tested unit. That became extremely difficult and error-prone and complicated using our traditional automation systems.”
Berg says they set out to build Razee with the core tenets of speed, visibility, and scalability in mind. One primary difference is that Razee is pull-based rather than push-based, and therefore provides self-updating Kubernetes clusters. This is done by inserting an agent into each cluster that checks back for rule updates and then updates the cluster as needed using kubectl and the Kubernetes API. At the same time, this agent provides visibility into what’s running where whenever it checks back with the centralized system for updates. Berg explains that this is made possible through Razee’s bootstrap process, and that through the use of feature flags, large swaths of clusters can be configured at once.
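To make the pull-based model concrete, here is a minimal Python sketch of the idea, not Razee’s actual code or API: an in-cluster agent fetches the desired resource set from a central control plane, works out which resources differ from what the cluster is running, applies only those (in Razee’s case via kubectl or the Kubernetes API), and reports an updated inventory back for visibility. The function names and data shapes below are invented for illustration.

```python
# Hypothetical sketch of a pull-based agent's decision logic.
# Resources are modeled as {"name": ..., "version": ...} dicts, and the
# cluster's current state as a {name: version} inventory mapping.

def resources_to_apply(desired, current):
    """Return the subset of desired resources whose version differs
    from, or is absent in, the cluster's current inventory."""
    return [
        res for res in desired
        if current.get(res["name"]) != res["version"]
    ]

def build_inventory(applied, current):
    """Merge newly applied resources into the inventory the agent
    reports back, giving the control plane a view of each cluster."""
    inventory = dict(current)
    for res in applied:
        inventory[res["name"]] = res["version"]
    return inventory

# Example pull cycle: only the out-of-date deployment is applied.
desired = [
    {"name": "frontend-deployment", "version": "v2"},
    {"name": "api-service", "version": "v1"},
]
current = {"frontend-deployment": "v1", "api-service": "v1"}

to_apply = resources_to_apply(desired, current)
# to_apply == [{"name": "frontend-deployment", "version": "v2"}]
inventory = build_inventory(to_apply, current)
# inventory == {"frontend-deployment": "v2", "api-service": "v1"}
```

A real agent would run this loop on a schedule and shell out to kubectl (or call the Kubernetes API) for each resource in `to_apply`; the sketch keeps only the diff-and-report logic that distinguishes pull from push.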
“With Razee, the rules engine is completely pluggable. So you can have a very simplistic rules engine, which is all statically defined using YAML files, or you can have something more sophisticated like integration with LaunchDarkly, which gives you feature flagging and dynamic rule evaluation,” said Berg.
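As a purely illustrative sketch (the keys and structure below are invented and do not reflect Razee’s real schema), a statically defined rules file of the kind Berg describes might map a group of clusters, selected by label, to the resource versions they should pull:

```yaml
# Hypothetical static rules file -- names and fields are illustrative only.
clusterGroups:
  - name: us-south-production
    selector:
      labels:
        region: us-south
        env: production
    resources:
      - name: frontend-deployment
        version: v2
      - name: api-service
        version: v1
```

Swapping this static file for a LaunchDarkly integration is what turns the same mechanism into dynamic, feature-flag-driven rollout across large swaths of clusters.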
Razee’s origins may account for the fact that it does not operate as other CD systems might. Instead, Berg describes a tool that is tightly coupled with Kubernetes, which he says works with the various managed and bootstrapped Kubernetes systems available.
“We decouple CI [continuous integration] from CD explicitly. Razee is a distribution system. You could call it a continuous delivery system, but ultimately it is a Kubernetes resource distribution system,” said Berg. “It inventories what is distributed into those clusters. So you get a view of what it’s pushed out into those clusters and it’s a mechanism for driving the distribution of resources into the clusters, which means it can manage any resource in a Kubernetes cluster, including CRD, including controllers, including roles, role binding, deployment, services, you name it. If it’s a resource in Kubernetes, Razee can manage it.”
Diving a bit deeper, Berg said that, while the pull-based model isn’t entirely unique to Razee, the intelligence built into it is.
“There are others that do pull-based, but they’re generic automation utilities that you’d have to build up the smarts around. Razee is tuned specifically for Kubernetes. It’s not a general purpose distribution system. It’s not going to distribute code out to a VM as an example. It’s not designed for that,” said Berg. “It’s designed for Kubernetes. It’s not going to solve all deployment and distribution problems for people in hybrid cloud because it’s not designed for that.”
Indeed, the GitHub repository for the project describes the tool in specific terms as “an open-source project that was developed by IBM to automate and manage the deployment of Kubernetes resources across clusters, environments, and cloud providers, and to visualize deployment information for your resources so that you can monitor the rollout process and find deployment issues more quickly.”
Nonetheless, Berg said that the company has released it now with the hopes of finding out if there are other use cases that it may have overlooked along the way.
“We built this code internally for ourselves. We thought, let’s open it up so we could start having the discussion. We used it for a very specific use case. There’s probably others. And in order to learn about those other use cases, people need access to it,” said Berg. “So that’s why we made it open source, so we could talk about it in the open and get feedback. If there’s interest in contributing it to Kubernetes through a SIG [special interest group] or possibly the CNCF [Cloud Native Computing Foundation], we’re open to that conversation, but that’s not the main premise for why we pushed it out there. It’s to get it in the hands of end users, get some feedback, maybe evolve it in a direction that it wasn’t initially intended for.”