Rezilion sponsored this post.
What if I told you to close your eyes and imagine you have a SolarWinds Sunburst-like back door in your infrastructure right now? Would you panic? Should you?
I’m sure a few readers here remember the Bit9 (now part of VMware’s Carbon Black) breach of 2013, where malicious actors compromised whitelisting software from Bit9 — enabling them to push out legitimately signed malware. Every few years, a critical piece of infrastructure is discovered to be pwned, and every security vendor pipes up to say, “Well, if they’d used our security widget, they wouldn’t have been hacked!” Needless to say, that’s not very helpful.
These breaches are reminders that nobody is immune to risk or being compromised. And that’s okay.
“Everything fails all the time.”
Werner Vogels, CTO, Amazon Web Services
One of the core tenets of DevOps is: Design for failure. How can we apply that same principle to security?
To answer this question, I think it’s important to revisit the notion of “Desired State” in the context of health checks. Health checks give DevOps engineers visibility into the health of a service. “Health” here means some measure of deviation from the “desired state” — whatever the developers define as healthy or normal. Depending on the type of health check and the service’s makeup, the result could be as simple as a binary “up” or “down,” or something more nuanced like “within a healthy range.” But what we’re really asking is this: is the service operating in its desired state?
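To make this concrete, here is a minimal sketch of a health check that compares observed metrics against a developer-defined desired state. The metric names and thresholds are illustrative assumptions, not any particular product’s API:

```python
# Desired state as defined by the developers (illustrative thresholds).
DESIRED_STATE = {
    "max_latency_ms": 250,   # healthy range: latency at or below this
    "max_error_rate": 0.01,  # healthy range: <= 1% of requests failing
}

def check_health(latency_ms: float, error_rate: float) -> str:
    """Return 'up' if the service is within its desired state, else 'degraded'."""
    if (latency_ms <= DESIRED_STATE["max_latency_ms"]
            and error_rate <= DESIRED_STATE["max_error_rate"]):
        return "up"
    return "degraded"

print(check_health(120.0, 0.002))  # within desired state -> up
print(check_health(900.0, 0.002))  # latency out of range -> degraded
```

The binary “up”/“down” case is just the simplest instance of this; a richer check would return how far each metric sits from its threshold.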
To address deviations from the desired state, we build automations based on the output of these health checks. This frees us from having to babysit every service in production. Those automations can range from sending a Slack message saying “Hey, something’s wrong,” to creating a JIRA ticket, to (if the service is detrimentally impacted) restoring it to its desired state. The more automatic this workflow is, the closer we get to the nirvana of DevOps: Immutability.
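That escalation ladder can be sketched as a small dispatcher. The helpers `notify_slack`, `open_ticket` and `restore_service` below are hypothetical stand-ins for whatever integrations your pipeline actually uses:

```python
def notify_slack(msg: str) -> None:
    print(f"[slack] {msg}")                      # stand-in for a chat webhook

def open_ticket(msg: str) -> str:
    print(f"[jira] opened ticket: {msg}")        # stand-in for an issue tracker
    return "OPS-1"

def restore_service(name: str) -> None:
    print(f"[deploy] rolling {name} back to its desired state")

def remediate(service: str, status: str) -> None:
    """Escalate automatically based on how far we are from desired state."""
    if status == "up":
        return                                   # nothing to do
    notify_slack(f"{service} is {status}")       # always tell a human
    if status == "degraded":
        open_ticket(f"{service} degraded")       # track it for follow-up
    elif status == "down":
        restore_service(service)                 # immutable fix: replace, don't repair

remediate("billing-api", "down")
```

The key design choice is the last branch: in an immutable environment, remediation means replacing the workload with a known-good copy rather than patching it in place.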
Applying DevOps Immutability Principles to Security with Desired State Enforcement
Let’s go back to our earlier thought exercise: a back door like Sunburst is in our infrastructure. The back door process or binary is not itself a threat to performance or security, because on its own it does no harm to our infrastructure. What will harm us if we don’t address the issue is the Teardrop dropper that Sunburst downloads, but we’ll get to that next. My point now is this:
If we can live with services that have memory leaks and bugs that are still operating within our desired state threshold, then we should also be able to live with services that have vulnerabilities but are not breached.
Like health checks, desired state enforcement solutions perform a workload composition analysis on each service by plugging into production workloads and the CI/CD pipeline.
You may already be familiar with software composition analysis tools. These products analyze applications, generally during the development process, to identify their components and any known vulnerabilities in them. Workload composition analysis tools apply a similar principle, but to the entire cloud workload. Workload composition analysis examines the provenance, relationships, dependencies, and privileges that exist among the various services and applications that make up the entirety of your cloud workload.
Workload composition analysis is what makes desired state enforcement possible: by solving the “Where did this come from?” problem, it becomes much easier to make trust-based decisions about what should and shouldn’t be running in production. This is a holistic alternative to manual policy creation, which requires security practitioners either to reverse engineer (or mind-read) developer intent, or to establish a heuristic baseline that goes stale the moment it is set.
Returning to the Sunburst-Teardrop scenario: the back door, Sunburst, came from an ostensibly trusted source (SolarWinds), but the dropper (Teardrop) did not. To conduct a successful targeted breach, attackers need to leverage the back door to run further malicious code and commands. The question then becomes: how can we ensure that code of untrusted provenance is not running in production?
Zero Trust is a security concept centered on the belief that organizations should not automatically trust anything inside or outside their perimeters, and instead must verify anything and everything trying to connect to their systems before granting access. Until the advent of desired state enforcement, however, Zero Trust has been an elusive goal because of the immense manual effort required to determine whether to trust a user, API or network seeking access to privileged resources.
Workload composition analysis harnesses the CI/CD pipeline and converts code and artifacts into policy, thus providing — with absolute certainty — the provenance of everything that should be running in production. So if what’s running in production doesn’t match the output of our workload composition analysis, then it’s not part of our immutable infrastructure and represents risk.
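One way to picture “converting artifacts into policy” is as an allow-list of artifact digests recorded at build time, checked against what’s actually executing. This is a simplified sketch, not any vendor’s implementation; in practice the pipeline would record the real hashes of signed build outputs, and the values below are illustrative:

```python
import hashlib

def digest(data: bytes) -> str:
    """SHA-256 fingerprint of a build artifact's bytes."""
    return hashlib.sha256(data).hexdigest()

# Built and signed by our CI/CD pipeline -> trusted provenance.
ci_artifacts = {digest(b"solarwinds-agent-v1"), digest(b"billing-api-v7")}

# Observed executing in production.
running = {
    "solarwinds-agent": digest(b"solarwinds-agent-v1"),
    "teardrop-dropper": digest(b"malicious payload"),  # never built by our pipeline
}

# Anything running whose digest isn't in the pipeline's output is drift.
drift = {name for name, h in running.items() if h not in ci_artifacts}
print(drift)  # -> {'teardrop-dropper'}
```

Notice that the check never needs to recognize Teardrop as malware; it only needs to know that the dropper was not produced by our pipeline, so it falls outside the desired state.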
Without human intervention, desired state enforcement would raise an event on the Teardrop dropper’s execution and mark our service as outside the threshold of its desired state.
Bad Guys Gonna Bad
Suffice it to say, if we have desired state enforcement, then back doors become another form of technical debt — let’s call it security debt. When a back door tries to download and execute droppers, the affected service alerts reliably, because we know that the new code our service wants to execute isn’t part of our desired state. That alert can be tethered to a health check automation that notifies the appropriate parties of what’s going on or, in an immutable environment, brings the service back to its desired state: “healthy.”