
Painless Automated Patching for Windows and Linux

Jul 25th, 2019 10:01am by Tony Green
Feature image via Pixabay.

Patching has always been a major pain point for IT. Manually patching systems is labor-intensive and error-prone. Centralized information rarely exists, which makes coordination of downtime difficult. To add to the difficulty, patching processes among various operating systems differ wildly.

There are also many different interpretations of what patching means, but for the purposes of this article, the definition we will use is: “applying changes to computer software with the intention of resolving functional or security bugs, improving usability, reliability or performance.”

What we’ll discuss: a way to use ad-hoc task orchestration, such as Bolt, to patch systems at scale consistently and confidently. The more time you save by removing manual work, the more time you can focus on your next great project.

Addressing Vulnerabilities with Patching

Tony Green
Tony has been a UNIX systems administrator for over 25 years. In those days, servers weren't pets; they were more like children. His goal has always been to automate himself into obsolescence, and Puppet is the tool he's using to make that happen. Tony has worked in the finance, telecommunications and media industries, where he helped develop people, tools and services. He is now leading the DevOps practice for Katana 1, a Puppet partner in Sydney, Australia.

One of the biggest concerns addressed with patching is dealing with security vulnerabilities that can put your organization, network and users at risk. However, these vulnerabilities can be hard to manage and fix. Tracking down affected services requires manual effort, specialized tools, custom tool development or a combination of all three approaches.

Applying patches to any vulnerable services and servers can be a challenge as well, requiring a great deal of manual work. Centralized control of the patching process by the IT team is common. This can work well, but more challenges can arise. With busy and in-demand network services, IT needs to prioritize patches and find a good time for them to be applied.

Self-service options solve some of those issues, but open up others. Training, access control and enforcement of standards come to mind as possible difficulties that come with a self-service solution.

Even after patching is complete, IT needs to validate the success of all patching jobs and ensure that reporting systems are updated with the new state. They also must make sure that all stakeholders have access to the patch state of the servers with data that is both timely and accurate.

Until recently, tools didn’t exist that were able to meet all of the expectations and requirements of intelligent, useful automated patch management.

Building the Right Tool

Puppet has already provided much of the framework necessary to create a strong patch deployment and reporting tool. With Puppet, we can easily centralize data while keeping it accurate. It also provides a role-based access control (RBAC) system and the ability to trigger ad-hoc jobs on nodes. In addition, Puppet can be controlled via an API and a web console.

A module was needed in order to meet the following requirements:

  • Report the patch state on a server, via custom facts, back into PuppetDB
  • If possible, report on which updates are security-related
  • Assign servers to patch window groups to facilitate scheduling
  • Set blackout times for servers, preventing any patching activity
  • Trigger post-patching reboots when necessary
  • Execute patching jobs on pre-defined groups of servers, which must also:
    • Clean package caches
    • Restrict applied patches to those related to security
    • Supply override arguments for OS package commands
  • Trigger patching tasks from the command line, the console or through an API
  • Control who can execute a patch run
  • Store canonical patching state data on each individual node.

The final requirement was one of the most important. The source of truth for node patching state needed to be stored on each individual node so that all patching information could be rebuilt from the nodes if necessary.

These requirements are met by the Puppet os_patching module. This module is now fully functional on Linux (RedHat, Debian and SUSE). Windows support was added in release V0.11.0. Support for other operating systems is currently under development.

To enable the module, declare the os_patching class on each node. This sets up a scheduled task to refresh the patch information and provides access to the tasks needed to execute patching.
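For example, classification can be as simple as a wrapper profile that includes the class (profile::patching is a hypothetical name; only the include line is required):

```puppet
# Hypothetical wrapper profile. Including the os_patching class is
# enough to enable fact generation and expose the patching tasks.
class profile::patching {
  include os_patching
}
```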

Managing Server Facts

Custom facts provide the backbone of the os_patching module. Some of these facts are generated by scheduled jobs and others by the os_patching class itself. All of these values are cached in order to keep server load from impacting performance. The module refreshes all values once per hour, and this interval can be changed through configuration.

Several categories of facts are collected.

State facts store the current state of each node and are used to answer the following critical questions:

  • Are there patches to apply?
  • How many patches are ready to apply?
  • Are they security related?
  • Does the node need to be rebooted?
  • Do any applications or services need to be restarted?
  • Is patching currently possible?

Control facts configure the execution of patching processes and jobs. They answer the following questions:

  • Are there any blackout windows defined?
  • Is the node allocated to a patching window?
  • Is this node overriding the reboot parameter?

Together, state and control facts provide all of the information needed to audit the nodes and control patch automation.
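As an illustration, the os_patching fact on a node might look something like this. The values are made up, and apart from package_update_count, blocked and patch_window (which appear in the queries used later in this article), the key names are assumptions and may vary between module versions:

```json
{
  "package_update_count": 4,
  "security_package_update_count": 1,
  "blocked": false,
  "blackouts": {},
  "patch_window": "Week3",
  "reboots": {
    "reboot_required": false,
    "apps_needing_restart": {}
  }
}
```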

Individual nodes can be grouped by assigning them to a patch_window such as “Group 1” or “Week 4.” Blackout windows can be defined to freeze changes or to prevent individual nodes from being patched.

Using these facts, it is possible to write queries that select exactly the nodes to patch: for example, all nodes assigned to the patch window “Week3” that are not blocked and have patches ready to apply.
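Expressed as a PQL query (a sketch; the fact paths match those used by the patching task later in this article), that selection looks like:

```
nodes[certname] {
  facts.os_patching.patch_window = "Week3" and
  facts.os_patching.blocked = false and
  facts.os_patching.package_update_count > 0
}
```

A query like this can be supplied to a patching task via the --query option.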

Additional Configuration Settings

Other options provide further customization of patching tasks:

  • patch_window: used to tag groups of servers
  • blackout_windows: dates and times during which updates are blocked
  • security_only: restricts the patch run to security-related updates and their dependencies
  • reboot_override: overrides the task’s reboot flag
  • dpkg_options/yum_options: a string of additional options to pass to dpkg or yum.
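Assuming these options map to class parameters of the same names (a sketch, not verified against every module version; the blackout window structure shown here is illustrative), they could be set like this:

```puppet
class { 'os_patching':
  patch_window     => 'Week3',
  # Hypothetical change freeze: block all patching over end of financial year
  blackout_windows => {
    'EOFY change freeze' => {
      'start' => '2019-06-28T00:00:00+10:00',
      'end'   => '2019-07-01T06:00:00+10:00',
    },
  },
  reboot_override  => 'never',
}
```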

Managing and Controlling Rebooting

In almost all cases, patching requires restarting all services and applications touched by any patched libraries or files. This can be as simple as restarting a single application or as intensive as requiring a full reboot of a server. The os_patching module accepts configuration options to control this behavior.

The “reboot” parameter accepts the following values:

  • always: Always trigger a reboot, regardless of task status or success
  • never: Never trigger a reboot
  • patched: Trigger a reboot if any patches have been applied
  • smart: Use OS-specific tools to determine if a reboot is required after patching. This usually just triggers a reboot when the kernel or core libraries are updated

The “os_patching.reboot_override” fact can be used to customize behavior on a granular level. This allows patching jobs to only reboot subsets of servers.

This flowchart shows the decisions made by the os_patching module based on configuration and available facts.
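The decision logic can be sketched in a few lines of Python (a simplification for illustration only; the module implements this logic in its tasks, and the exact precedence rules may differ):

```python
def needs_reboot(reboot, patches_applied, os_requires_reboot,
                 reboot_override=None):
    """Decide whether a node should reboot after a patch run.

    reboot             -- the task's reboot parameter:
                          'always', 'never', 'patched' or 'smart'
    patches_applied    -- True if any patches were applied
    os_requires_reboot -- True if OS tooling reports a reboot is needed
    reboot_override    -- per-node os_patching.reboot_override fact;
                          when set, it takes precedence over the task flag
    """
    mode = reboot_override if reboot_override is not None else reboot
    if mode == "always":
        return True
    if mode == "never":
        return False
    if mode == "patched":
        return patches_applied
    if mode == "smart":
        return os_requires_reboot
    raise ValueError("unknown reboot mode: %s" % mode)
```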

Reporting and Output

All task output from the os_patching module is visible from the command line, through the console, or through the API.

The reporting fields consist of the following:

  • pinned_packages: any packages version locked or pinned at the OS layer
  • debug: full output from the patching command
  • start_time/end_time: timestamps describing when the task started and finished
  • reboot: the reboot parameter used
  • packages_updated: a list of affected packages
  • security: the security parameter used
  • job_id: On RedHat servers, the yum job ID
  • message: any additional status information provided by the task.
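Put together, the output of a patch run might look something like this (illustrative values only; field names follow the list above):

```json
{
  "pinned_packages": [],
  "security": false,
  "reboot": "smart",
  "start_time": "2019-07-25T10:01:22+10:00",
  "end_time": "2019-07-25T10:04:09+10:00",
  "packages_updated": ["openssl", "kernel"],
  "job_id": 42,
  "message": "Patching complete",
  "debug": "..."
}
```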

Getting Started

To get started with the os_patching module, follow these steps:

  • Add mod 'albatrossflavour-os_patching', '0.11.0' to the Puppetfile and deploy the control repository
  • Classify the nodes to be patched with the os_patching module, either by:
    1. Including os_patching within a patching profile, or
    2. Using the PE console
  • Run Puppet on these nodes. You should see the following changes:
    1. A fact-generation script will be installed into /usr/local/bin/ (on Windows: c:\programdata\os_patching\os_patching_fact_generation.ps1)
    2. Cron jobs will be set up to run the script every hour (offset using fqdn_rand) and at reboot
    3. The directory /var/cache/os_patching will be created
    4. The script will run and populate /var/cache/os_patching
    5. A new fact (os_patching) will be available
  • View the contents of the os_patching fact on the nodes you classified:
    1. facter -p os_patching
    2. puppet task run facter_task fact=os_patching --nodes
    3. Use the console to view the fact
  • Execute a patch run on these nodes:
    1. puppet task run os_patching::patch_server --query='nodes[certname] { facts.os_patching.package_update_count > 0 and facts.os_patching.blocked = false }'
    2. Run the task through the console

Automating Manual Tasks Allows for More Time to Focus

The accessibility of automation tools such as Puppet and Bolt also frees up time for those who would otherwise bear the burden of applying patches manually. This guide to patching systems at scale is just one of many ways for engineers and developers to stop doing soul-crushing manual work and instead innovate on automation and processes that give us valuable time back.

What do we do with this potential newfound time? Well, that’s up to us. Until we get to that point, let’s keep on automating and innovating together, one great module at a time.

Thank You to These Contributors

The os_patching module wouldn’t exist without the contributions of many individuals.

We want to give special thanks to Yasmin Rajabi and her team for the amazing work on Tasks and Puppet Bolt.
