
The Many Problems with Jenkins and Continuous Delivery

5 Jun 2017 7:00am, by Micha “mies” Hernandez van Leuffen
Micha “mies” Hernandez van Leuffen is a hacker entrepreneur, and the founder and CEO of Wercker, where he’s building the next generation of developer automation for the Modern Cloud.

If you work with software, you may already have realized that the practice of software delivery today is far from perfect. Check out our article from last month on the evolution of software development since the turn of the century for some reasons why. In that article, we traced the paths that led us to the software world we live in today, and explained why that world may fall short of our expectations.

This post is the second in our series about moving into a post-Jenkins world. Here, we take a more detailed look at the popular tools and processes, including but not limited to Jenkins, that are holding us back from software nirvana.

We Like Jenkins!

To be clear, we don’t see Jenkins as the sole source of trouble in the software delivery world today. We actually think Jenkins is a great tool. However, Jenkins and other continuous integration (CI) servers are not always used properly. Software delivery teams tend to make mistakes in how they deploy Jenkins and tools like it. As a result, they adopt inefficient practices, undercut their ability to attain or retain agility, and lose the flexibility they need to adopt the newest technological innovations.

Jenkins’ Problems

Ultimately, the root of these problems stems not from specific tools, but from cultural mistakes.

Problem 1: Jenkins has too many plugins

Plugins are not necessarily bad things. In fact, when they are used properly (that is, to extend functionality beyond a platform’s required core features), plugins are great resources. They give users the choice of adding extra features to the tools they use, without requiring them to dedicate resources to features they don’t wish to use.

But in Jenkins, plugins are not optional extras layered on top of a complete core. Instead, Jenkins requires teams to use plugins to accomplish tasks that, in many cases, are really quite basic.

For example, if you want to build for a Docker environment — which is a pretty common use case these days — you need a plugin. If you want to pull from GitHub (another pretty common task), then you need a plugin. If you want PAM support, you need a plugin.

To be sure, many of Jenkins’ 1,500 plugins provide functionality that not everyone needs. It makes perfect sense to offer PagerDuty or Azure Storage compatibility via plugins, for example, because many users have no need for those functions.

But the fact that you need plugins in Jenkins to do just about anything is problematic — and not only because it means software delivery teams have to spend time installing and configuring them before they can start working. The bigger issue at play here is that most of Jenkins’ plugins are written by third parties, vary in quality, and may lose support without notice.

Building a software delivery chain based on third-party plugins is not a good way to ensure availability or stability.

Problem 2: Jenkins was not designed for the Docker age

Although CI servers are often part of the modern DevOps conversation (and are indeed one of many important tools for DevOps engineers), they are actually a relatively old technology, dating back to the early-to-mid 2000s — long before anyone was envisioning containers and microservices as the infrastructure of choice for software deployment.

As a result, traditional CI servers don’t do much to help teams take full advantage of next-generation infrastructure like Docker containers. They integrate with Docker rather awkwardly, via multiple plugins: Jenkins has no fewer than 14 different plugins with Docker in their names. Many are for Docker-related platforms from specific vendors, but six of them target the core Docker platform itself.

In many senses, Jenkins, like most other CI servers, was built in the age of bare-metal servers and virtual machines; Docker support was tacked on after the fact. In an increasingly Docker-native world, that is not a good way for a CI server to operate.

Problem 3: Jenkins does not support microservices well

Just as Jenkins and most other CI servers were born in the pre-Docker age, they also emerged long before microservices became popular.

Sure, some teams were working with service-oriented architecture (SOA) in the 2000s, around the time Jenkins was first used. And concepts like microkernels have been around since the 1980s. But until Docker came along and made microservices easy to implement, very few microservices platforms had actually been deployed.

So you might not expect Jenkins to do a good job of supporting microservices — and indeed, it doesn’t. Jenkins lacks support for integrating and testing multiple services at once. That’s essential functionality for a microservices environment.

Unless you plan to invest in the overhead of multiple pipelines (with one for each microservice), Jenkins does a poor job of helping you develop next-generation microservices apps.
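To make the missing piece concrete, here is a minimal, hypothetical sketch of the kind of cross-service integration stage a microservices pipeline needs. It is illustrative Python with in-process stubs standing in for real containers; the service names and stage logic are invented for this example, not taken from Jenkins or any other product.

```python
# Hypothetical sketch of a "fan-in" integration stage for microservices,
# modeled with in-process stubs instead of real containers. Service names
# and checks are invented for illustration.

from dataclasses import dataclass

@dataclass
class Service:
    name: str
    version: str

def unit_stage(service: Service) -> bool:
    """Per-service CI: build and unit-test one service in isolation."""
    # Stand-in for a real build-and-test step.
    return service.version != ""

def integration_stage(services: list[Service]) -> bool:
    """Cross-service stage: bring up *all* services together and test the
    composed system. This is the step that one-pipeline-per-service
    setups do not give you."""
    if not all(unit_stage(s) for s in services):
        return False
    # Stand-in for spinning up the full system (e.g. via Docker Compose)
    # and exercising a request path that crosses service boundaries.
    deployed = {s.name: s.version for s in services}
    return {"users", "orders"} <= deployed.keys()

services = [Service("users", "1.4.2"), Service("orders", "2.0.1")]
print(integration_stage(services))  # True: both services exercised together
```

In a real environment, the stand-in comments would be replaced by container orchestration and end-to-end test suites; the point is that the integration stage operates on the set of services as a whole, not on each one separately.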

Problem 4: CI != CD

Most likely the biggest problem with Jenkins, and with CI servers in general, is that software delivery teams sometimes conflate continuous integration with continuous delivery (CD).

In fact, CI and CD are different things. CI is a part of the CD process, but to achieve full CD — which should be the goal of any software delivery team aiming to optimize its workflows — you need more than just a CI server.

CD also requires release automation into whatever environment you happen to be working with. It requires tools, such as Steps, that can automate software delivery tasks falling outside the purview of CI servers. And it requires communication tools and channels that let the software delivery team collaborate seamlessly.

When organizations set up a CI server, and immediately consider their software delivery modernization work done, they are making a big mistake.
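The relationship between CI and CD described above can be sketched as a toy pipeline model. This is purely illustrative Python; the stage names are generic placeholders, not the stages of any particular tool.

```python
# Toy model of the CI-versus-CD distinction: the CI stages are a strict
# subset of the stages a full continuous delivery pipeline runs.
# Stage names are generic placeholders, not tied to any real product.

CI_STAGES = ["checkout", "build", "unit_test", "integration_test"]

# Full CD adds everything needed to get the artifact safely into a
# target environment, plus the team communication around it.
CD_STAGES = CI_STAGES + ["package", "release", "deploy", "smoke_test", "notify_team"]

def run_pipeline(stages):
    """Run each stage in order. In a real pipeline every entry would be
    an executable step, and a failing step would halt the run."""
    completed = []
    for stage in stages:
        completed.append(stage)  # stand-in for actually executing the stage
    return completed

print(run_pipeline(CI_STAGES))  # a CI server stops after integration tests
print(run_pipeline(CD_STAGES))  # CD continues through release and deploy
```

Seen this way, a CI server that stops at the integration-test stage has done its job; it is the remaining stages, and the tooling and culture around them, that turn CI into CD.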

Changing the Culture of the Jenkins World

Why do skilled software engineering teams make mistakes like these? It’s not because they’re unintelligent or failing to keep up to date with the latest innovations.

Instead, the problem lies in misguided attempts to emulate the biggest, most efficient software delivery operations, like those of Google and Netflix. These organizations famously leverage open source toolchains and massive infrastructure to build incredibly agile software delivery pipelines.

What enables those companies to build those pipelines is not just the tools they deploy, but also their culture. You can’t become as efficient as Google simply by using the same tools as Google.

Smaller organizations don’t always realize this. Only when they have the right cultural philosophies and processes in place can they overcome the limitations of a tool like Jenkins, and optimize their software delivery pipelines.

No toolchain is perfect, but you can achieve software delivery perfection (or something close to it, at least) when you implement the right culture.

If your approach to software delivery is still built around Jenkins alone, you’re undoubtedly missing out on opportunities to do much better. Enabling those opportunities requires cultural change. In the next post in this series, we’ll examine how forward-thinking companies are combining new tools with a new culture of software delivery to move past the inefficiencies of the Jenkins-centric world we have grappled with for so long.

Any questions? We’re always interested in feedback; email us.

Wercker is a sponsor of The New Stack.

Title image: Soviet women participate in the construction of the Saratov-Moscow gas pipeline, approximately 1944. Licensed under Creative Commons 4.0 by WarAlbum.ru.

