Securing the Software Supply Chain with SLSA
We all know that the software supply chain is vulnerable. Attacks rose a staggering 650% in 2021 compared to the previous year, for a total of 12,000 malicious incidents, according to Sonatype’s 2021 State of the Software Supply Chain report.
And it’s likely to get worse: Gartner predicts that “by 2025, 45% of organizations will have experienced attacks on their software supply chains, a threefold increase from 2021.”
But addressing this growing challenge remains a significant unmet need in application security.
Against this backdrop, Google proposed Supply-chain Levels for Software Artifacts (SLSA, pronounced “salsa”) in June 2021. Inspired by the vendor’s internal “Binary Authorization for Borg” process, which has been mandatory for production workloads at Google for over a decade, SLSA is a framework for ensuring the integrity of software artifacts to prevent attacks.
Why Is the Software Supply Chain Under Threat?
There are three major reasons for the shift from attacking production systems to attacking the supply chain, according to Ronen Slavin, co-founder and chief technology officer at Cycode, a company that focuses on software supply chain security.
The first has to do with visibility. In many organizations, the security teams are not responsible for the tools that make up the CI/CD pipeline. “Modern development approaches have brought many tools, which are implemented and run by engineering without security’s involvement; this creates a visibility gap,” Slavin told The New Stack via email.
“Moreover, engineering teams have different priorities to security teams. Not surprisingly, engineering teams set up engineering productivity tools with an emphasis on developer agility and feature velocity, while security is often an afterthought.”
A second reason for the rise in supply chain attacks has to do with the attack surface, with the tools themselves becoming attack vectors. “These attacks are more damaging than traditional breaches,” Slavin told us.
He laid out four reasons why the new wave of attacks is so much more damaging:
- Trends like everything-as-code and GitOps make it much easier for attackers to move laterally or compromise the entire software development lifecycle (SDLC) once they gain a foothold.
- This type of attack usually bypasses standard security measures that most organizations rely on — like firewalling, web application firewalls (WAFs), air gapping, subnetting, VPNs, etc.
- Supply-chain attacks often spread downstream to breach customers.
- It is usually more difficult and time-consuming to recover code after a supply chain attack because the breach is often deeper, wider and more sensitive, so rebuilding requires a more thorough process.
“So, from an attacker’s perspective, the ROI on such attacks is much higher,” Slavin concluded.
A third issue is a lack of tooling; there are relatively few tools that can see across the different phases of the software development lifecycle. “The SDLC is siloed from the point of view of security, with little to no ability to connect the dots,” Slavin said.
How Does SLSA Work?
The SLSA framework operates on the principle that, as its official documentation states, “It can take years to achieve the ideal security state, and intermediate milestones are important.”
Given this, it defines a graded approach to adopting supply chain security for your builds. In summary:
- Level One: The build process must be fully scripted/automated and generate provenance — metadata about how an artifact was built, including the build process, top-level source, and dependencies. This level doesn’t prevent tampering, but it does offer a basic level of code source identification and can aid in vulnerability management.
- Level Two: Requires using version control and a hosted build service that generates authenticated provenance. At this level, the provenance prevents tampering to the extent that the build service is trusted.
- Level Three: The source and build platforms meet specific standards to guarantee the auditability of the source and the integrity of the provenance, respectively. In theory, this provides much stronger protections against tampering than previous levels by preventing specific classes of threats, such as cross-build contamination, but it does require some sort of as-yet-undefined accreditation process to work.
- Level Four: Requires a two-person review of all changes and a hermetic, reproducible build process. Hermetic builds guarantee that the provenance’s list of dependencies is complete.
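To make the idea of provenance more concrete, the Python sketch below builds a minimal, illustrative provenance record for an artifact. The field names here (`subject`, `builder`, `source`, `materials`) are simplified stand-ins loosely modeled on the ideas in the SLSA spec, not the real attestation schema, and the builder and source URIs are hypothetical:

```python
import hashlib
import json

def make_provenance(artifact_bytes, builder_id, source_uri, dependencies):
    """Build a minimal, illustrative provenance record for an artifact.

    The real SLSA provenance format is richer and attestation-based;
    this sketch only captures the core idea: metadata tying an artifact
    digest to how it was built, its top-level source and its dependencies.
    """
    digest = hashlib.sha256(artifact_bytes).hexdigest()
    return {
        "subject": {"sha256": digest},      # the artifact this describes
        "builder": {"id": builder_id},      # who/what performed the build
        "source": {"uri": source_uri},      # top-level source location
        "materials": dependencies,          # resolved build dependencies
    }

prov = make_provenance(
    b"example artifact contents",
    builder_id="https://ci.example.com/builder",     # hypothetical build service
    source_uri="git+https://example.com/repo@main",  # hypothetical repo
    dependencies=[{"uri": "pkg:pypi/requests@2.31.0"}],
)
print(json.dumps(prov, indent=2))
```

Even a record this simple supports the level-one use cases the spec mentions: identifying where code came from and which dependencies went into a build, which is useful for vulnerability management.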
This incremental approach is key, according to Joshua Lock, a staff open source software engineer at VMware who is working on SLSA.
“For open-source projects especially, we can all say having SLSA level four is the ideal, but we can also recognize that for the majority of open source projects, built and maintained by one person in their spare time, that’s just not attainable,” Lock said.
“So we can choose to say, ‘OK, I’m comfortable with level two or level three maybe, and I’ll just stop there, and maybe I’ll go further later if the project really takes off.’”
SLSA to Help Guide Code Policy
Likewise, by knowing the SLSA level, developers and organizations can reason about a software package’s or build platform’s security posture and make decisions about what to trust.
“The initial goal is to get to a state where a company can say, ‘Hey, I’m not going to deploy any code in my production system that doesn’t meet SLSA level three,’ for example, and have that as a strict policy. Or that their vendors need to meet higher SLSA levels before signing a contract,” Kim Lewandowski, who worked on and launched SLSA when she was at Google and has since founded Chainguard, told The New Stack.
By itself, this is significant, as Lock told us: “When you look at cloud native systems and the hundreds or thousands of dependencies that they are pulling in, being able to imagine a future where you can say, ‘OK, I don’t want anything to hit my cluster that’s not SLSA level three or above’ is really powerful.”
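The kind of policy Lewandowski and Lock describe can be sketched as a toy admission check: refuse to deploy anything whose attested SLSA level falls below a threshold. The attestation shape and `slsa_level` field here are simplified assumptions for illustration, not the real SLSA schema or any particular admission controller’s API:

```python
# Toy admission policy: only deploy artifacts attested at or above a
# minimum SLSA level. An artifact with no attestation is treated as level 0.
MIN_SLSA_LEVEL = 3

def admit(artifact_attestations):
    """Return the artifacts whose attested SLSA level meets policy."""
    return [a for a in artifact_attestations
            if a.get("slsa_level", 0) >= MIN_SLSA_LEVEL]

artifacts = [
    {"name": "api-server", "slsa_level": 3},
    {"name": "legacy-job", "slsa_level": 1},
    {"name": "unknown-sidecar"},  # no attestation at all
]

for a in admit(artifacts):
    print(a["name"])  # only api-server meets the policy
```

In practice such a gate would sit in a CI/CD pipeline or cluster admission controller and would verify signed provenance rather than trusting a self-reported field, but the decision logic is this simple at its core.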
To put this another way, “SLSA is really an articulation of a lot of existing best practices,” according to Lock.
“I draw the analogy that a lot of the SLSA requirements are things Linux distributions have been doing for a long time,” he said. “So a Linux distribution will take a copy of the source code they’re building; they won’t rely on the upstream location not changing or not being tampered with.
“They usually have quite a rigorous process for accepting new package maintainers — so there’s the ‘trusted contributor’ idea. You write a kind of recipe for packaging your software, and then you push that code somewhere, and the central distribution machinery builds the package, and it goes through various levels of testing and quality assurance before it’s introduced into the stream that the average distribution user can pull in. This maps to the SLSA build requirements; being defined as code, using isolated environments, and so on.”
Verifiable Metadata to Check Security Claims
A challenge, however, is how consumers can know that a claimed security level is trustworthy. SLSA aims to solve this conundrum through the automatic creation of verifiable metadata.
“That is one of the differences with SLSA, the way I see it — it’s not just a list of requirements and best practices,” Lewandowski said. “You actually have to have the data, the verifiable data that you’re producing, so that a consumer of a package can see the actual data that shows that it meets the different SLSA requirements.”
Slavin echoed this idea: “The standard focuses a lot on the build process and its security, which is a huge step forward. It introduces the notion of provenance, which is a way to attest to the build process and pushes the idea of reproducible builds, which is also a very important step towards a secure pipeline.”
A further problem SLSA addresses, Lewandowski said, is that “there’s weaknesses along that entire supply chain. And we’ve seen attacks at every single point over the last several years.”
In view of this, the SLSA framework provides a threat model for the software delivery supply chain, which covers both source and build integrity and shows how SLSA could help.
It should, however, be emphasized that SLSA is very much still a work in progress, and there is active discussion about what measures ought to be within scope. The threat model highlights a number of areas that currently aren’t covered by the spec, including risks such as code review bypasses or administrators changing security configurations to push malicious code.
In addition, as noted previously, tooling for supply chain security is still rather nascent. But initiatives such as sigstore, an open source project that aims to make cryptographic signing of artifacts easy and developer-friendly, show promise, particularly when combined with the SLSA spec.
For readers wanting to make a start, the SLSA project has published a “getting started” guide that can help you reach level one, and Chainguard has begun publishing blog posts that provide useful additional detail. Separately, the Cloud Native Computing Foundation maintains a living community document, the Software Supply Chain Security Paper, which aims to provide the community with “a series of recommended practices, tooling options, and design considerations that can reduce the likelihood and overall impact of a successful supply chain attack.”