CloudBees sponsored this post.
Jenkins has emerged as a powerful — and the most popular — tool for achieving continuous integration and continuous delivery (CI/CD). According to a CNCF survey, Jenkins holds 58% of the CI/CD software platform market.
The platform’s popularity is due largely to its stable architecture and, especially, the depth of CI/CD capabilities DevOps teams can rely on for their production pipelines. CloudBees and the open source community also continue to improve upon and expand the plugins and processes on offer, including a Configuration as Code plugin that can greatly improve a GitOps workflow.
However, Jenkins can be a challenge to set up, configure and manage if you don’t use best practices. There are ways to avoid some of the common struggles and headaches when you build and use Jenkins pipelines. Many users also may not realize how much more manageable Jenkins has become, thanks largely to the support of the open source community and its committers. Some users may, for example, focus on perceived Jenkins shortcomings without realizing that solutions to their problems now exist. In many cases, adopting more recently introduced processes and tools not only makes Jenkins easier to install and manage, but also enables DevOps teams to take advantage of its more powerful CI/CD capabilities.
Users who first encountered Jenkins as long as a decade ago commonly continue to rely on the processes they used at the beginning of their Jenkins tenure, said R. Tyler Croy, a Jenkins project board member.
“When people come back to Jenkins at a new job, they say, this is how I did it five years ago when I used Jenkins, so I’m just going to do that again — and they get themselves into trouble,” said Croy. “I think it’s a common problem with any mainstream or common piece of software — but a lot has changed over the past few years.”
In this post, we describe how to better take advantage of Jenkins as an essential tool for developing and delivering software.
You Really Have to Try Configuration as Code
The Jenkins Configuration as Code plugin has been available for several months, but it is often overlooked as a way to solve a number of CI challenges, specifically when defining a Jenkins configuration, as the plugin’s name implies. Those challenges include automating and simplifying Jenkins pipeline configurations and managing the plugins associated with them. The beauty of Configuration as Code is its simplicity: the configuration is defined in a plain YAML file, so making a change involves only submitting a pull request, which is reviewed before it is deployed.
“This is by far just the coolest — I can’t shout from enough rooftops, about Configuration as Code,” said Croy. “I was extremely skeptical at the beginning that this was going to work, but it worked.”
Once the Configuration as Code plugin is added, a code representation of the Jenkins controller configuration is automatically generated.
“If you’re trying to automate the process, this is exactly where you start, because all of your configuration is now as code, so you can put this into a repository, start building your Docker containers and go from there,” Croy said. “The process just did not exist before Configuration as Code became available.”
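As an illustration, the generated controller configuration is plain YAML along these lines (every value below is a placeholder, not a recommended setting):

```yaml
# jenkins.yaml: a minimal Configuration as Code sketch (illustrative values)
jenkins:
  systemMessage: "Configured by JCasC; changes go through pull requests"
  numExecutors: 2
  securityRealm:
    local:
      allowsSignup: false
      users:
        - id: "admin"
          password: "${ADMIN_PASSWORD}"   # resolved from the environment, never hard-coded
  authorizationStrategy:
    loggedInUsersCanDoAnything:
      allowAnonymousRead: false
unclassified:
  location:
    url: "https://jenkins.example.com/"
```

Pointing Jenkins at this file, for instance through the CASC_JENKINS_CONFIG environment variable, applies the configuration at startup, so the YAML in version control remains the single source of truth.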
When a Plugin Makes Sense and When It Does Not
The choice of plugins can certainly have an effect on the functioning of a Jenkins pipeline, either in a positive or negative way.
In the Jenkins world, a vast number of plugins exist. Depending on how you tally the total, there are more than 1,500 plugins that are actively updated. Each time a plugin is installed, it is necessary to consider how it will fit into the Jenkins server system, and how it will be used across multiple products, projects and, in some cases, teams sharing space on the server.
“So, if you update the plugin, there is some sort of change in behavior — or why else would you have updated it?” Glick said. “You need to make sure changing behavior is not going to suddenly break something that was working before for another team.”
A rule of thumb is that a plugin is best avoided if what it does really belongs to the project’s own build.
“The server build is something that should be part of your project or your build scripts,” Glick said. “That’s not the concern of the CI system, but the concern of the developer.”
Additionally, plugins typically have different release schedules and dependencies, while they also interact with or depend on other plugins, said Shawn Smith, DevOps engineer at nVisium. “As you add more to your setup, the amount of time you’ll spend testing and performing updates will grow rapidly,” Smith said.
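One way to keep that testing burden manageable is to pin plugin versions in a file checked into version control and bake them into the controller image. A sketch using the official jenkins/jenkins Docker image and its bundled jenkins-plugin-cli tool (the plugin names and versions are illustrative):

```dockerfile
# Dockerfile: build a controller image with a pinned, reviewable plugin set
FROM jenkins/jenkins:lts
COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
RUN jenkins-plugin-cli --plugin-file /usr/share/jenkins/ref/plugins.txt
```

Here plugins.txt lists one plugin per line, such as `configuration-as-code:1.55` or `git:4.11.3`, so a plugin upgrade becomes a diff that can be tested and rolled back like any other code change.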
Don’t Be Too ‘GUI’
Opting for a declarative syntax, or a GUI, can make editing and improving a Jenkins pipeline much easier than writing it as a script. However, the declarative option can sometimes serve as a too-easy answer to problems or fixes that are best written as scripts. Some developers and administrators might even opt for a GUI because they lack the programming skills to write their own scripts.
However, taking advantage of Jenkins’ programmable pipelines — also available in CloudBees CI — by writing the actual pipeline code can help build stability into the pipeline, with fewer problems once it is in use.
“The biggest benefit you get when coding your own pipelines includes a single source of truth, allowing you to know how things are built, and that acts as a backup when needed,” Haimovitch said. “Pipeline components can be reused, keeping things uniform and reducing boilerplate.”
Applying a declarative syntax as a way to edit or configure a Jenkins pipeline “is suitable for most projects that you would encounter,” Glick said. “A scripted syntax allows you to define explicitly what it is you would like to do.”
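For most projects, a declarative Jenkinsfile of roughly this shape is enough (the stage names and build commands below are placeholders for your own steps):

```groovy
// Jenkinsfile: a minimal declarative pipeline (commands are illustrative)
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh './gradlew assemble'
            }
        }
        stage('Test') {
            steps {
                sh './gradlew test'
            }
        }
    }
    post {
        // Publish JUnit results whether or not the build succeeded
        always {
            junit '**/build/test-results/**/*.xml'
        }
    }
}
```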
A scripted syntax can be useful for allocating computing resources and selecting different agents for Jenkins pipelines, for example. “You are able to pre-select certain things and accomplish results,” Glick said. “You are literally writing a program” for the specific needs of your Jenkins server. This involves “a steeper learning curve for sure, but it also lets you do things that are a lot more dynamic.”
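As a sketch of that dynamism, a scripted pipeline can compute which agent to run on with ordinary Groovy before any node is allocated (the labels and commands here are assumptions, not fixed Jenkins names):

```groovy
// Jenkinsfile (scripted): choose an agent label at runtime
def label = (env.BRANCH_NAME == 'main') ? 'large-linux' : 'small-linux'

node(label) {
    stage('Checkout') {
        checkout scm
    }
    stage('Build') {
        // Plain Groovy control flow decides how to build
        if (fileExists('Makefile')) {
            sh 'make'
        } else {
            sh './build.sh'
        }
    }
}
```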
Jenkins pipelines also have built-in programmable support for containers, including Docker and Kubernetes, which DevOps teams should take advantage of, Haimovitch said.
“When it comes to more complex workloads, such as integration tests, you can even execute multiple containers simultaneously,” Haimovitch said. “While it takes some extra work, you’ll benefit from unparalleled flexibility and performance.”
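Running an integration test against a sidecar service is one such workload. A sketch using the Kubernetes plugin, where the build container and a database container share one pod (image names and the Maven flag are illustrative):

```groovy
// Jenkinsfile: build steps running alongside a service container on Kubernetes
// (requires the Kubernetes plugin)
pipeline {
    agent {
        kubernetes {
            yaml '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: maven
    image: maven:3.8-openjdk-11
    command: ["sleep"]
    args: ["infinity"]
  - name: postgres
    image: postgres:14
    env:
    - name: POSTGRES_PASSWORD
      value: test
'''
        }
    }
    stages {
        stage('Integration tests') {
            steps {
                container('maven') {
                    // The database container is reachable on localhost within the pod
                    sh 'mvn verify -Ddb.url=jdbc:postgresql://localhost:5432/postgres'
                }
            }
        }
    }
}
```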
Take Advantage of Those Multi-Branches
There is certainly a lot of overlap between Jenkins and Git repositories, especially as Git functionality continues to extend well beyond just serving as a repository. Git options, as well as GitOps, can also help solve the complexities and mismatches plugins can pose for Jenkins pipelines.
The use of multi-branch pipelines, such as those that Git repositories can offer, can serve to move “important state” out of Jenkins and into version control, said Bryce Larson, a senior SRE at SaltStack.
“If you want to make a new job, it’s as simple as pushing a new branch to GitHub and that branch is up and running how you want it,” Larson said. “If you have a problem with people not rebasing or keeping up with the master branch where your Jenkins pipelines are correct, use pipeline libraries. It makes it so you can update the pipeline definition separately from updating the GitHub branch.”
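A pipeline library keeps the real definition in one place while each branch’s Jenkinsfile stays a one-liner. A sketch, assuming a library registered under the name my-shared-library with a custom step in its vars/ directory:

```groovy
// vars/standardBuild.groovy in the shared library repository
def call(Map config = [:]) {
    pipeline {
        agent any
        stages {
            stage('Build') {
                steps {
                    // Fall back to `make` if the caller passes no command
                    sh(config.buildCommand ?: 'make')
                }
            }
        }
    }
}
```

Each branch then needs only:

```groovy
// Jenkinsfile in every branch
@Library('my-shared-library') _
standardBuild(buildCommand: './gradlew build')
```

so fixing the pipeline definition in the library updates every branch without rebasing each one.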
Bring It All Together with GitOps
Jenkins’ complexity, with the flexibility it offers in return, accounts for why many organizations have increasingly integrated their CI/CD processes and associated Jenkins pipelines with GitOps, Torsten Volk, an analyst for Enterprise Management Associates (EMA), told The New Stack.
This integration means managing Jenkins plugins, scripts and workflows in the same way as standard application code — in other words, “DevOps-as-Code” under a GitOps management umbrella.
“It is key to funnel all changes to any aspect of the Jenkins setup through pull requests. This enables you to enforce standards for compliance, reusability, resource consumption, cost and risk,” Volk said. “At the same time, adopting GitOps for Jenkins makes automation workflows available across the enterprise and enables anyone to contribute enhancements, instead of just doing their own thing.”
However, the key to successful GitOps for Jenkins — and any other technology — lies in a gradual approach toward implementation, while “always keeping an eye on DevOps engineers — not feeling like you are slowing them down by forcing them into a rigid code-control regimen,” Volk said.
“The beauty of a platform like Jenkins is that you can basically do anything very quickly, and the more you have used it in the past, the more use cases you will find in the future,” Volk said. “Harnessing all of this creativity by ‘catching’ the corresponding code inside of Git repositories can bring your automation game to the next level.”
This year’s free-to-attend DevOps World is one not to miss. Register today to watch more than 100 technical and business sessions, led by industry thought leaders, and take part in over 40 training and workshop opportunities and keynotes.
And tune in at 7:30 a.m. PST on Tuesday, Sept. 22, for The New Stack’s livestream coverage of the event after the day-one keynotes! TNS founder and Publisher Alex Williams will talk with Shawn Ahmed, senior vice president and general manager of the Software Delivery Automation Group at CloudBees, and distinguished engineer at Broadridge Daniel Ritchie. To watch, go to The New Stack’s Periscope channel.
The New Stack is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Docker.