Building on the “pets vs. cattle” analogy of software management, Dave Stanke, developer advocate for Google Cloud, speaking at DevOps World|Jenkins World 2019 in San Francisco earlier this week, added a few more animals to the menagerie in describing how to modernize continuous integration and continuous delivery (CI/CD).
Pets, of course, were servers pre-cloud. We cared for them individually. In the cloud, they’re livestock, known only in bulk, and the challenge is managing the herd. With Kubernetes, they’re more like a swarm of butterflies, which he pointed out is known as a kaleidoscope.
He likened emerging platforms to beams of pure energy on “Star Trek.”
“When I talk to people about their CI/CD systems, they say they lag behind their production systems. They’re not as highly evolved,” he said.
He posed this scenario:
Even though you’ve moved to the cloud, you have Jenkins on an old prod box that lives under your desk. Or it used to live under your desk, but you’ve moved several times and it’s still there, somewhere.
“We’re using legacy DevOps systems on rapidly evolving modern production systems. These are sheepdogs. Sheepdogs are pets that manage livestock. And while we love them dearly, they’re holding us back. They’re fragile, they’re slow, they’re overburdened,” he said.
The build is broken and you’re not sure whether the culprit is a bad commit or the CI system itself. What about long build queues and overflowing hard drives? Or the CI system that only one person knows how to run, and they’re on vacation?
CI/CD plays an important role in achieving the key metrics that separate top-performing organizations from the rest in the DevOps Research and Assessment (DORA) report: deployment frequency, lead time for changes, change failure rate and time to restore service.
“With legacy systems, you spend a lot of time waiting for CI tasks to complete. They’re slow. They’re flaky. So we need CI that’s fast and scalable,” he said.
“Deployment also needs to be fast, but it also needs scalability. We need CI/CD that evolves along with the codebase. Fast, scalable, reliable, adaptable, secure. These are the things we want in our production systems. To achieve these, we need these qualities in CI/CD as well.
“We can get modern CI/CD, but that means we need to retire the sheepdog. We need something more like a robot cat.”
It doesn’t mean throwing out everything you already have. You can use incremental steps, he said, referring to the collaboration between Google Cloud and CloudBees.
Google Cloud’s Graphite team creates integrations with popular open source tools to make them simple to use on GCP. The team works on the Terraform GCP provider, Ansible modules for GCP, Cloud Foundry on GCP, and a Logstash plug-in.
One of its priorities is making first-rate Jenkins plug-ins to connect to Google Cloud, he said. These include an OAuth plug-in for authentication, a Cloud Storage plug-in for build artifacts, a Compute Engine plug-in for provisioning workers, and a Kubernetes Engine plug-in for deploying to GKE.
A big challenge for CI/CD is scalability. The system becomes overburdened during the day when engineers are working, but sits idle at night when they’re not. This is expensive and inefficient.
In the cloud, you can provision on demand from a build template. Whenever a build kicks off in the queue, Jenkins spins up a worker just in time. You can run as many instances in parallel as needed, and you only pay for instances while they’re actually doing work.
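The economics behind that pay-per-use argument can be sketched with simple arithmetic. Here is a minimal Python sketch; the hourly rate, worker count and busy hours are hypothetical numbers for illustration, not actual Google Cloud pricing:

```python
# Back-of-envelope comparison: always-on build agents vs. on-demand agents.
# All figures below are illustrative assumptions, not real cloud pricing.
HOURLY_RATE = 0.20   # hypothetical cost of one worker VM per hour
BUSY_HOURS = 8       # hours per day the build queue is actually active

def monthly_cost(hours_per_day, workers, rate=HOURLY_RATE, days=30):
    """Cost of running `workers` VMs for `hours_per_day` each day for a month."""
    return hours_per_day * workers * rate * days

always_on = monthly_cost(24, workers=4)         # four agents running around the clock
on_demand = monthly_cost(BUSY_HOURS, workers=4) # agents exist only while building

print(f"always-on: ${always_on:.2f}/mo, on-demand: ${on_demand:.2f}/mo")
```

Under these assumed numbers, agents that exist only while builds run cost a third of what an always-on fleet does, which is the inefficiency the talk points at: capacity sized for the daytime spike sits idle all night.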
The Graphite team provides a plug-in for Compute Engine that automatically provisions and terminates compute instances. On-demand agents are also easy to keep up to date: if there’s a security vulnerability, you don’t have to patch machines one by one. Just update the template, and every job gets a clean, patched machine.
This way, the Jenkins master can still live under your desk, yet the system becomes more scalable and more secure.
What about Kubernetes?
Lack of consistency is one thing organizations have in common. They have lots of apps, built at different times, lots of technologies. Different languages, frameworks, tons of dependencies.
“This is hard on CI. Every app has a different toolchain, so we load up workers with all sorts of tools: Java, Python, .NET, multiple versions of each, all crashing into each other. We overfeed these pets,” he said.
With Jenkins on Kubernetes, you get customized, containerized workers defined as code. Rather than a pre-baked template that lives in cloud-config, your worker definitions are stored in version control right alongside the code they build. Each build gets just the right tools for the job. No more fat pets, he said. If you want to use some special language, just add a line to the Dockerfile; that tool appears when it’s needed and disappears when the build is done.
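The “workers as code” idea can be sketched in a few lines of Python. The base image and package names below are hypothetical examples; a real Jenkins-on-Kubernetes setup would express the same thing through the Kubernetes plug-in’s pod templates or a checked-in Dockerfile:

```python
# Sketch: per-job build agents defined as code and versioned with the app.
# Each job declares only the tools it needs, so no shared worker accumulates
# every team's toolchain. Base image and packages are illustrative assumptions.

def agent_dockerfile(base: str, tools: list[str]) -> str:
    """Render a minimal Dockerfile for a build agent with just these tools."""
    lines = [f"FROM {base}"]
    if tools:
        lines.append("RUN apt-get update && apt-get install -y " + " ".join(tools))
    return "\n".join(lines)

# A Java service's agent image carries a JDK and nothing else; a Python
# service's would declare Python instead. Adding a tool is adding a line.
print(agent_dockerfile("debian:12-slim", ["openjdk-17-jdk-headless"]))
```

The point of the sketch is the shape of the workflow: the agent definition lives next to the application code, so changing the toolchain is a reviewed commit, not a mutation of a long-lived shared worker.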
In hybrid environments, Google’s answer is Anthos, a managed infrastructure stack built on open source platforms like Kubernetes and Istio. Google Cloud and CloudBees have just embarked on a collaboration to integrate Jenkins with Anthos.
Anthos provides a consistent substrate across heterogeneous environments, so the same software runs on GCP, in your data center, even on AWS, all with a consistent control plane, he said.
“This is really interesting for CI/CD. You could run your build in the cloud, take advantage of that scalable worker node for those spiky workloads, then deploy to production servers on-prem,” he said.
He went on to advocate for the Continuous Delivery Foundation, an organization focused on reducing fragmentation and developing durable standards in the industry. Its four projects are Jenkins, Jenkins X, Spinnaker and Tekton, which he called the robot cat we’re looking for.
Tekton is an open source framework for CI/CD that originated at Google. It runs on Kubernetes so it gets the cloud native benefits like scalability, encapsulation and declarative configuration, he said.
Jenkins X is CloudBees’ implementation of Tekton. It runs natively on Tekton, so workloads are portable to any system that can run Tekton, which is any Kubernetes cluster.
With modern CI/CD, he said, it’s time to retire the sheepdogs.
CloudBees sponsored this story, written independently by The New Stack.