Rezilion sponsored this post.
The “pets vs. cattle” metaphor in DevOps could be accused of having jumped the shark, but in the world of vulnerability management, every vulnerability is still a pet. While every company has its own metrics for MTTP (Mean Time To Patch), industry consensus is that it takes at least 38 days to patch a vulnerability, and possibly as long as 150. To get a rough sense of your own remediation workload, multiply the number of vulnerabilities found during your last scan by the average number of business days it takes your team to remediate a single one.
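As a quick sketch, that back-of-the-envelope estimate looks like this in Python; the scan count and per-vulnerability effort below are made-up numbers for illustration, so substitute your own:

```python
# Back-of-envelope remediation workload estimate.
# Both inputs are hypothetical; plug in your own scan results.
open_vulns = 120      # vulnerabilities found during the last scan
days_per_vuln = 2.5   # average business days to remediate one

backlog_days = open_vulns * days_per_vuln
print(f"Estimated remediation backlog: {backlog_days:.0f} business days")
```

Even with modest inputs, the total adds up quickly, which is why prioritization and validation matter.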
Inertia is the enemy of DevOps. The vulnerability dilemma creates problems on both the left and right sides of the CI/CD spectrum. On the right, time spent remediating vulnerabilities either forces services and applications to be decommissioned or creates windows of opportunity for attackers. On the left, vulnerabilities create risk debt and force developers to choose between features and security.
Vulnerability Anxiety Is Real
Each vulnerability that your scanner finds generates work. First, what is the risk associated with the vulnerability? We already know that CVSS scores don’t tell the full story, because attack chains often begin with lower-scoring vulnerabilities that are easier to exploit. Is the vulnerability in a mission-critical service? Is it in a VM, a container, or in code? We then have to figure out who to assign it to. That person has to determine whether a patch exists yet and, if so, what its performance impact will be. Will patching one thing break something else in production? And that’s just the beginning.
So, we live in a world where security and DevOps teams are inundated with more vulnerabilities than they have the time or resources to patch. Vulnerability prioritization solutions bring analytics and vulnerability intelligence to bear, reducing the resources required for vulnerability management. There are good prioritization tools out there that help identify which vulnerabilities are actively exploited in the wild and which ones have been patched successfully without performance impact. Some solutions even use predictive modeling to forecast which vulnerabilities are most likely to be weaponized and should be remediated first.
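To make the idea concrete, here is a minimal, hypothetical sketch of how a prioritization tool might blend those signals (CVSS base score, active exploitation, predicted weaponization) into a single ranking. The weights and CVE entries are invented for illustration; real products use far more sophisticated models:

```python
from dataclasses import dataclass

@dataclass
class Vuln:
    cve_id: str
    cvss: float                # CVSS base score, 0-10
    exploited_in_wild: bool    # from threat-intelligence feeds
    weaponization_prob: float  # 0-1, from a predictive model

def priority_score(v: Vuln) -> float:
    """Toy blend of signals; real tools weight these very differently."""
    score = v.cvss / 10.0
    if v.exploited_in_wild:
        score += 1.0  # active exploitation outweighs base severity
    score += v.weaponization_prob
    return score

# Hypothetical findings from a scan.
vulns = [
    Vuln("CVE-2021-0001", 9.8, False, 0.2),
    Vuln("CVE-2021-0002", 5.4, True, 0.9),
]

for v in sorted(vulns, key=priority_score, reverse=True):
    print(v.cve_id, round(priority_score(v), 2))
```

Note how the lower-CVSS vulnerability ranks first here because it is actively exploited and likely to be weaponized, which mirrors the point above about CVSS alone not telling the full story.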
Prioritization is very useful as a triage mechanism, but even the lowest-ranked vulnerability eventually has to be dealt with, right? For example, a vulnerability with a low CVSS score that isn’t actively being exploited in the wild still needs to be remediated at some point, because if there’s a CVE, someone’s going to exploit it. That vulnerability might be on the lowest rung of the triage ladder, but someone will eventually need to deal with it — it’s remediation debt. Or is it?
It Doesn’t Matter if a Vulnerability That’s Not in Runtime Is Exploited in the Wild
Think about it: If there’s an FPGA driver in your Kubernetes container and that driver has a vulnerability with a CVSS score of 9 that is actively being exploited in the wild, a vulnerability prioritization mechanism may triage it as high priority. But then you dig in and find that there are no FPGAs in your environment, so that driver will never be loaded into memory and does not represent a threat. Meanwhile, an NGINX vulnerability with a CVSS score of 2 that isn’t exploited in the wild but is loaded into memory represents a much bigger risk than the FPGA vulnerability.
Before we triage vulnerabilities, doesn’t it make sense to figure out if those vulnerabilities are actually relevant to our specific environment?
Rather than prioritize based on objective risk alone, filtering based on actual, contextual risk is a necessary first step in the prioritization workflow. Before prioritizing which vulnerabilities need to be mitigated, filter out all the vulnerabilities that can never be exploited in your environment, and then sort what remains in order of risk.
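The filter-then-sort workflow can be sketched in a few lines. The records below are hypothetical; the `loaded_in_runtime` flag stands in for whatever runtime-validation signal your tooling provides:

```python
# Hypothetical scan results: each vulnerability is tagged with whether
# its component is ever loaded into memory at runtime.
vulns = [
    {"cve": "CVE-2022-1111", "cvss": 9.0, "loaded_in_runtime": False},
    {"cve": "CVE-2022-2222", "cvss": 2.0, "loaded_in_runtime": True},
    {"cve": "CVE-2022-3333", "cvss": 7.5, "loaded_in_runtime": True},
]

# Step 1: filter out vulnerabilities whose code can never execute.
relevant = [v for v in vulns if v["loaded_in_runtime"]]

# Step 2: prioritize what's left by severity.
worklist = sorted(relevant, key=lambda v: v["cvss"], reverse=True)

for v in worklist:
    print(v["cve"], v["cvss"])
```

The CVSS-9 finding drops out entirely, and the team’s worklist contains only vulnerabilities that can actually execute, sorted by severity.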
One Question: Does This Vulnerability Exist in Runtime or Not?
Sometimes adding a cog can optimize the entire assembly line. By inserting a validation step into your vulnerability-handling workflow, you could cut down the amount of remediation work your team needs to do while concurrently reducing your attack surface — focusing your team’s efforts on vulnerabilities that represent actual, rather than perceived, risk.
How can we quantify this optimization? You may already be familiar with research we’ve conducted showing that 67% of the vulnerabilities with “high severity” scores in the top 20 containers in DockerHub are never loaded into memory. Among our customers, we’ve seen that number rise as high as 75%. Think about it: 75% of the vulnerabilities identified by your vulnerability scanner may be utterly benign and pose zero threat. If you were prioritizing vulnerability management based on attacks in the wild and CVSS scores alone, you would risk spending upward of 70% of your time and effort on vulnerabilities that pose no risk to your production environment.
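A quick illustration of that math, using a hypothetical scan of 1,000 findings and the 75% never-loaded figure cited above:

```python
findings = 1000           # vulnerabilities flagged by the scanner (hypothetical)
never_loaded_pct = 0.75   # share never loaded into memory (per the research above)

actionable = findings * (1 - never_loaded_pct)
print(f"Actionable findings: {actionable:.0f} of {findings}")
```

Under these assumptions, a validation step shrinks the worklist from 1,000 items to 250 before any prioritization happens at all.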
Wouldn’t it be great if, before you started assigning vulnerability prioritization and remediation work, you knew which vulnerabilities actually represented a threat to your apps and services? How would that affect your MTTP? And, equally important, how much faster would your DevOps teams be able to deploy if vulnerabilities didn’t constantly fail their builds? You’ve got tools that automate vulnerability scanning and prioritization, but if that automation isn’t saving you from “patch debt,” maybe it’s time to invest in vulnerability validation.
Feature image via Pixabay.