We tend to talk about “CI/CD” (continuous integration / continuous delivery) as though it were one thing, and there wasn’t a slash between them. Even when Jez Humble and David Farley published a book on the latter to distinguish it as a goal unto itself, the book presents it as building on the “foundation” of CI, or a “natural extension” of CI. You don’t have to look much further to find CD presented as the “logical evolution” of CI. More than once, Microsoft has presented CD as the magic button you push when you’re done with CI. And last and/or least, IBM has presented CD as CI except with a “D.”
When it’s explained properly, continuous delivery is not at all the rapidly repeated process of pushing software out the “Exit” chute. It is actually a complex chain of events ensuring the distribution of properly tested, working code into production, and then managing its lifecycle. In a highly continuous environment, old code dies quickly, but it’s not the product that’s dying so much as shedding its old skin.
Incredibly, it’s here that we come to realize that Jenkins, the most visible face of CI/CD principles in organizations — so synonymous with the topic that I’ve heard more than one CIO call CI/CD itself, “you know, the one with the butler” — has never officially maintained one preferred process for automating continuous delivery pipelines. Even though Martin Fowler, credited as the originator of CI, defines CD as requiring a “deployment pipeline,” implementing such a pipeline with Jenkins has involved following one of a plethora of recipes.
“The benefit of Jenkins’ near-infinite extensibility, as it’s been used in different places in the industry over the past seven to eight years, is that people have actually been implementing continuous delivery pipelines with Jenkins,” said R. Tyler Croy, a veteran contributor to the Jenkins project and a co-implementer of Jenkins with Puppet, in an interview with The New Stack. “They’ve been sort of hacking it together with what was already there.”
One Delivery, Not Twelve
On Tuesday, CloudBees — the steward of commercial Jenkins — announced general availability of Jenkins 2.0 to the community. With this release comes the first officially supported implementation of one domain-specific language (DSL) for the coding of pipelines for continuous delivery — for “pipeline-as-code.”
Put another way, the name that has become synonymous with the very concept of continuous delivery is now implementing it for real.
“Instead of having to cobble together these different pieces as you might have before,” said Croy, “now Jenkins knows and speaks the language of a delivery pipeline natively. It didn’t before.”
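Pipeline-as-code means the delivery process itself lives in a file — conventionally named Jenkinsfile — checked into the project’s own repository. A minimal sketch of what the scripted Groovy DSL looks like, assuming a hypothetical Gradle-based project (the `./gradlew` commands are illustrative, not from any real pipeline):

```groovy
// Jenkinsfile — a minimal scripted-pipeline sketch for a hypothetical project
node {
    stage('Checkout') {
        checkout scm               // pull the same revision that triggered this run
    }
    stage('Build') {
        sh './gradlew assemble'    // assumed build command for this example
    }
    stage('Test') {
        sh './gradlew check'       // assumed test command for this example
    }
}
```

Because this file is versioned alongside the source it builds, a change to the delivery process is reviewed, tracked, and audited like any other code change.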
Croy himself has been one of the many knowledgeable demonstrators of the principle of delivery pipelines in Jenkins (not to be confused with “deployment pipelines,” as so many often do). But this principle has been demonstrated using a variety of scripting languages: CloudBees has shown off CD with Groovy; others have advised using the XML-based build tool Ant; still others have demonstrated Jenkins with the build tool Gradle. Some of these build tools and scripting languages have relied upon Jenkins’ Build Pipeline plug-in, while others have relied upon the completely different Delivery Pipeline plug-in.
So it was continuous delivery, in the sense that a great many people were delivering a cornucopia of methodologies to accomplish much the same thing.
“The nice thing about this as an implementer — as someone who is a practitioner of continuous delivery with Jenkins,” said Croy, “[is] that it also allows me to take what was previously an implicit model of how my software gets delivered and define it very explicitly in source code that I can check in, and I can track, and I can audit, the same way I can [with] every other piece of my software.”
The Value of Process
Croy’s prominence in the Jenkins community is a testament to his familiarity with the processes and best practices that members of that community have shared with him. By his count, he told us, there are about four or five de facto best practices for how continuous delivery processes are being chained together without the use of Jenkins’ pipeline metaphor. Those groups are separate from the organizations that do use Jenkins 1.x, and have attached a plug-in and a scripting language, yet still omit the pipeline metaphor. Both classes clearly perceive continuous delivery as something greater than continuous integration, this time for real.
But Croy said he believes these organizations may already have adopted the models best suited to their purposes. Indeed, when I asked him whether some of those organizations would be likely to guard and even covet these processes as their intellectual property, Croy said, in some cases, yes.
Netflix has already demonstrated that a delivery process can be a company’s value-add, he said, even if it has done so very publicly. But that value is quantifiable.
“It takes a lot of effort to get from point A to B,” he said, “and every delivery process is going to be different depending on your product, how your software is built, how it comes together. It’s not like someone is going to take what Netflix has done and suddenly be a competitor to Netflix. But unlike five or ten years ago where the delivery pipeline and process was viewed as something that only operations people or developers cared about, Netflix has shown that it can actually, dramatically change the stature and position of the business, by allowing them to move faster and faster — which is a definite competitive advantage.”
It’s what makes pipelining (and the processes that have substituted for it, up to now) more and more like business process management — a way of not only encoding how things get done in business but also protecting it like a trade secret. Croy acknowledged that Jenkins 2.0 won’t push these businesses into conforming to some single, self-declared standard for defining these processes. Instead, adopting Jenkins 2.0’s new pipeline DSL may compel these businesses to reconsider their approaches, and quite possibly simplify them.
“That is, to me personally, why getting continuous delivery into Jenkins is so important,” said Croy. “It really does change the nature of how you deliver software, and how you think about it, for the better.”
But I also shared with Croy the view of practitioners in the field who openly profess that “delivery” is an end stage of the development process, or what many call “dev complete.” While proponents argue that the hallmark of CD is maintaining a state of perpetual readiness, skeptics counter that if software is always ready, then it’s never really ready.
“‘Is software ready?’ means very different things to your individual contributor, your project manager, your product manager, your QA manager, and your operations team,” Croy explained. “The readiness state of software means very different things to every person in the line who’s going to touch, and be involved with delivering software. Bringing that to a more explicit model, like with pipeline-as-code in Jenkins 2.0, gives you collaboration points that didn’t previously exist.”
Those collaboration points are where a project is ready to make a transition from one environment to another: for example, moving from the developer to the QA engineer, who may need to run a smoke test. Then from the QA perspective, software may be “ready,” but only insofar as moving to the next stage. Eventually, a product manager may be at a point to declare the product “ready” to appear before a customer. Even then, in the context of CD, software may never be “done” — just ready to evolve to the next stage.
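Those hand-off points can be modeled directly in the pipeline itself. A hedged sketch using Jenkins’ `input` step to pause for human sign-off between stages — the `make` targets and deploy script here are hypothetical stand-ins, not from any real project:

```groovy
// Jenkinsfile sketch: explicit hand-offs between roles (hypothetical commands)
node {
    stage('Build') {
        sh 'make build'            // developer's stage: produce the artifact
    }
    stage('Smoke Test') {
        sh 'make smoke-test'       // QA's automated gate
    }
    stage('Promote') {
        // The pipeline pauses here until a person approves in the Jenkins UI —
        // an explicit collaboration point between QA and product
        input message: 'Smoke tests passed. Promote this build?'
        sh './deploy.sh staging'   // hypothetical deploy script
    }
}
```

Each `stage` marks a transition of ownership, and the `input` step makes the “ready for the next stage” decision a recorded, auditable event rather than a hallway conversation.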
“Pipelines allow you to express all of that, which is something you don’t really have in a lot of tools right now,” said Croy. “But as you start to move more rapidly as a software business, it becomes a lot more important that you have a grasp on what ‘ready’ means.”
Feature image in the public domain from Pixabay.