CVSS Struggles to Remain Viable in the Era of Cloud Native Computing
So where does the CVSS fit into the grand scheme of DevOps and cloud native workflows?
Or does it?
The Common Vulnerability Scoring System (CVSS) is an open industry standard for assessing the severity of newly found computer vulnerabilities. The system assigns a score to each vulnerability, which in turn allows companies and developers to prioritize responses and resources according to the threat it poses.
This system scores each vulnerability between 0 and 10 (0 being the lowest threat, 10 being the highest). There is even a handy Common Vulnerability Scoring System Version 3.1 Calculator one can use to derive a base score for a risk.
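To make the arithmetic behind those 0-10 numbers concrete, here is a minimal Python sketch of the CVSS v3.1 Base score formula, restricted to Scope: Unchanged vulnerabilities. The metric weights and rounding rule come from the v3.1 specification; the function names are mine, and the official calculator from FIRST remains the authoritative tool:

```python
import math

# Base metric weights from the CVSS v3.1 specification (Scope: Unchanged).
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                          # Attack Complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}               # Privileges Required
UI = {"N": 0.85, "R": 0.62}                          # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}               # C/I/A impact

def roundup(x):
    """Round up to one decimal place, as defined in the v3.1 spec."""
    i = round(x * 100000)
    return i / 100000.0 if i % 10000 == 0 else (math.floor(i / 10000) + 1) / 10.0

def base_score(av, ac, pr, ui, c, i, a):
    """Base score for a Scope: Unchanged vulnerability."""
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
print(base_score("N", "L", "N", "N", "H", "H", "H"))  # → 9.8
```

Run against the classic worst case (network-reachable, no privileges, no user interaction, high impact across the board), this reproduces the familiar 9.8 "Critical" score.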
But in a post-Spectre/Meltdown world, have we found ourselves in a position where we must choose performance over security? Consider Kubernetes and Docker. Both of these technologies are far from immune to security issues. On top of that, their very nature is so far removed from the standard route to deployment that it’s become a challenge to even address some of the security risks.
To dig into this issue, I interviewed an old friend, Vincent Danen, Red Hat’s senior director of product security. You might be surprised at what came of that discussion. The Forum of Incident Response and Security Teams (FIRST), the organization that manages the CVSS Special Interest Group, did not respond to a request to participate in this post.
The Biggest Problem with CVSS
I remember, a few years back, that my first exposure to the CVSS system centered around Android vulnerabilities. My initial reaction was, “What the…?” Sure, I understood scoring a risk from 0-10. But after looking into how vulnerabilities were scored, it all seemed so vague, so subjective. Then came CVSS 3.1 and a handy calculator, which does help remove much of that subjectivity, but (for whatever reason) I still couldn’t give the system my full confidence.
“Most non-security people fixate on the score alone without looking at the metrics that comprise the score,” Danen said. There’s a good reason for that: Those scores are what is typically presented to the public. When you read of a vulnerability that has a score of, say, nine out of 10, non-security people don’t need to know how that score was achieved. They know that it’s one away from the absolute worst possible score… which is not good.
Danen continued, “They don’t know what those metrics mean. Do they know the difference between Base, Temporal, or Environmental? Most developers use the base score alone, but there are more knobs to fiddle with.”
That’s key, because it also hits home the idea that those “knobs” give CVSS the level of subjectivity I mentioned. To that, Danen said, “The Base score doesn’t speak to a developer’s environment, operating system, or other technologies in play. The Environmental metrics are critical to use to adjust the score for their environment. This isn’t typically done.”
Environment. Funny thing, that. If you look at the CVSS 3.1 calculator, you see glaring holes (with regard to DevOps and cloud native). Consider the Attack Vector metric. In that section, you have the choice between:
- Network
- Adjacent
- Local
- Physical
There is no mention of container, namespace, automation or any other DevOps/cloud native technology. Given that, how can a CVSS score be accurate for this new breed of tech?
CVSS with DevOps and Cloud Native?
If the CVSS system is to meet the needs of DevOps and cloud native technology, things must change. According to Danen, “CVSS needs a better way to describe compilation and deployment in its metrics. There should be something like a ‘vendor metric’ for things like how it is compiled, whether there are hardening technologies used, or other mitigations.”
The idea of adding a metric for how a piece of the puzzle is compiled is important. But think about the complexity of that metric:
- docker or docker-compose?
- kubectl, minikube, k8s, Docker Swarm…
- custom or pre-constructed YAML?
You see where that’s going.
Ultimately, however, Danen says this is not just on the shoulders of CVSS. He states, “CVSS comes with a manual that describes its use. It should be required reading. In a DevOps process, base metrics alone should be considered invalid if used alone.” To clarify this, Danen continues, “Temporal and Environmental metrics are required to start talking about risk considerations, which is where CVSS seems to be typically used. Even then, it shouldn’t be used in risk conversations — it’s a guide or a way to determine a prioritized list to remediate when comparing vulnerabilities.”
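Danen’s “prioritized list” framing can be sketched in a few lines: the score orders the remediation backlog, it doesn’t render a risk verdict. The CVE IDs and scores below are illustrative only:

```python
# CVSS used as intended: a way to rank findings for remediation,
# not a risk assessment. These CVE IDs and scores are made up.
findings = [
    ("CVE-2021-0001", 5.3),
    ("CVE-2021-0002", 9.8),
    ("CVE-2021-0003", 7.5),
]

# Highest score first: fix this one before the others.
for cve, score in sorted(findings, key=lambda f: f[1], reverse=True):
    print(cve, score)
```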
Danen firmly believes that base score alone is insufficient, partially because CVSS is used in scanning products. The issue is that those products don’t include deployment models. That’s a key element, because (as Vincent says), “a vulnerability in a dev environment is different than a prod environment, and the Environmental metrics would highlight that.”
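That dev-versus-prod distinction is exactly what the Environmental Security Requirement metrics (CR, IR, AR) express. Here is a minimal sketch, using the requirement weights and the 0.915 impact cap from the v3.1 specification (the function name is mine):

```python
# Environmental Security Requirement weights from the CVSS v3.1 spec:
# each impact term is scaled by how much that property matters in YOUR
# deployment before the Modified Impact Sub-Score is computed.
REQ = {"H": 1.5, "M": 1.0, "L": 0.5, "X": 1.0}   # X = Not Defined
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}           # C/I/A impact weights

def modified_iss(c, i, a, cr="X", ir="X", ar="X"):
    """Modified Impact Sub-Score, capped at 0.915 per the spec."""
    return min(1 - (1 - REQ[cr] * CIA[c]) *
                   (1 - REQ[ir] * CIA[i]) *
                   (1 - REQ[ar] * CIA[a]), 0.915)

# The same C:H/I:H/A:H flaw, scored for two deployments:
dev  = modified_iss("H", "H", "H", cr="L", ir="L", ar="L")  # throwaway dev sandbox
prod = modified_iss("H", "H", "H", cr="H", ir="H", ar="H")  # customer-facing prod
print(round(dev, 3), prod)  # → 0.627 0.915
```

The identical vulnerability contributes a markedly smaller impact term in the low-requirement environment, which is precisely the adjustment Danen says rarely gets made.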
According to Danen, this is not being done, which is causing considerable confusion in the industry. He states, “As well, a reliance on scores by the National Vulnerability Database (NVD), which accounts for the worst-case scenario across all applicable configurations and platforms, makes it even worse.”
“One vendor may use hardened compiler flags that would render a vulnerability ineffective, meaning it might turn a remote code execution issue into a denial of service,” Danen said. “NVD, using one CVSS Base score, doesn’t account for this. Some people use NVD as a single source of truth and opt to use it rather than a vendor score which makes this problem even worse.”
Although work is constantly being done to refine CVSS, Danen claimed, “it continues to iterate to solve problems that were not anticipated in earlier versions.” However, he also insists that if CVSS were used for its intended purpose, it would be reliable. This, of course, goes back to the Environmental and Temporal metrics, which (Danen claims) are not being used as intended.
To this claim, Danen said, “It was designed to indicate the severity of a flaw relative to other flaws. Nowhere will you see it described, by FIRST who created it, as a means of assessing risk. So yes, reliable to describe the mechanics of a vulnerability, but wholly inadequate to describe the risk of the vulnerability to a particular organization or environment.”
Reliable Security Information
Where should we turn for reliable security information? The question is especially pressing with DevOps and cloud native technologies. When the CVSS system doesn’t function within that realm (or, at best, can only score the individual pieces of a system, not how they function together in deployment), the only place to turn is the vendor. According to Danen, “…no one knows the product you’re trying to assess better than the vendor. NVD certainly does not, nor can it.”
This is made more apparent when you consider that scores can differ between vendors because “software is built, configured, and deployed in different ways, particularly when it comes to open source software,” Danen said. “The PSIRTs (Product Security Incident Response Teams) for software vendors typically assign CVSS scores — where available, developers and businesses should use those Base metrics provided by the vendor and, in conjunction with their own assignment of at least the Environmental metrics, come up with an accurate score that suits their use of a particular piece of software.”
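In practice, that means taking the vendor’s published Base vector and layering your own Environmental metrics on top. The vector-string format is defined in the v3.1 specification; the merge below is a hypothetical sketch, not an official API:

```python
def parse_vector(vector):
    """Split a CVSS v3.1 vector string into a {metric: value} dict."""
    version, _, metrics = vector.partition("/")
    assert version == "CVSS:3.1", "only v3.1 vectors handled in this sketch"
    return dict(part.split(":") for part in metrics.split("/"))

# Base metrics as published by a vendor's PSIRT (example vector).
vendor_base = parse_vector("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H")

# Our own Environmental assessment: the service is only reachable inside
# the cluster (MAV:A) and availability barely matters here (AR:L).
our_environment = {"MAV": "A", "AR": "L"}

# The effective view of the vulnerability for OUR deployment.
effective = {**vendor_base, **our_environment}
print(effective["MAV"], effective["AR"])  # → A L
```

Feeding the merged metrics into a calculator then yields a score that reflects your deployment rather than the worst case.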
This becomes challenging with DevOps, because a typical pipeline could make use of numerous pieces of software from numerous vendors. Is it then on the developers to reach out to each vendor to obtain vulnerability scores for everything in the chain? That’s a time suck most developers can’t afford.
According to Danen, Red Hat has invested a significant amount of time and energy providing as much security metadata as possible to the public. “The primary view of this information is through our CVE database in our Customer Portal. Every CVE that impacts a supported Red Hat product is listed there.” Danen also adds that Red Hat’s Product Security Engineers “independently assess every vulnerability and assign a set of CVSS Base metrics from which a user can build upon to get a CVSS score relevant to their use of the product.”
Danen also warns that “CVSS isn’t written in stone — we often do a quick triage assessment when we first learn of a vulnerability and will refine the CVSS metrics, if needed, as we investigate the vulnerability.”
In the end, it’s all about communication between the vendor and the customer. For Red Hat, according to Danen, “…this openness and transparency allows our customers to build an understanding of an issue so they can make their own risk decisions based on the data. This is a fundamental part of our mission to our customers.”
The lesson? Until CVSS makes some considerable changes to include DevOps and cloud native technologies, it’s on developers and businesses to go the extra mile and assess the risks associated with the technologies used in their pipelines.
Hopefully, soon, CVSS will adapt to the new world order and include metrics that cover these rapid, automated workflows. Until then, do your research and stay up to date and in the know about every piece of software you use in your DevOps or cloud native pipelines.
Red Hat is a sponsor of The New Stack.
Feature image by Gordon Johnson from Pixabay.