Zero trust. It certainly doesn’t sound in touch with our self-organizing, agile software development culture. Only organizations built on trust can be successful. After all, happiness at work is directly linked to productivity and profit. So, if we know all this, why is Zero Trust a trending topic?
Because we fail at security time and again. As our systems become increasingly distributed and complicated, so do the attacks against them.
In fact, a recent report by technology services provider Probrand found that 43% of UK businesses have suffered a data breach in the past year, with a staggering 72% of these infiltrations coming through unsecured wireless devices, including printers, scanners, phones and laptops connected to their Wi-Fi network. The world of bring-your-own-device only makes it more challenging.
What really matters is who and what can access what data.
Yet there is growing acceptance of Zero Trust as the enterprise alternative to perimeter firewalls. Role-based access control (RBAC), service meshes and more are working behind the scenes to stop data breaches and systems disruption in their tracks. And of course, automation and orchestration can speed up the DevOps pipeline.
And by implementing a codebase of Zero Trust, perhaps you can then start to nurture a culture of trust.
What Is Zero Trust?
“All data breaches are the exploitation of the old broken trust model. And almost all cybersecurity incidents exploit the trust model as well,” said John Kindervag, founder of the Zero Trust Model and field Chief Technology Officer at Palo Alto Networks.
The Trust Model is the perimeter-centric approach to security. Bordered by firewalls, you trust internal traffic by default. As a former penetration tester, Kindervag contends that, for most orgs, once you get inside the Layer 3 network — and he says there’s always a way — then you have access to almost everything because of “that broken trust model.”
The Zero Trust motto is “Never Trust. Always Verify.” Every request to access a network resource must be authenticated and authorized, at all seven layers of the security stack.
You start by identifying your sensitive data, and then map how those data flow. You build what Kindervag refers to as a Protect Surface, and then enclose this smallest subset of an attack surface with microperimeters. He says you move your controls as close as possible to the Protect Surface. And then very few people are given multifactor authentication access to cross in or out of the microperimeter.
All data and access must be approved by a policy. Zero Trust involves:
- Avoiding default configurations
- Continuous monitoring of users and logging activity
- Monitoring all network communications
- Two-factor authentication
- Security automation and orchestration
Kindervag says that it works the same way if you are legacy on-prem or using the public or private cloud.
Then as you revisit and tweak your Zero Trust policies, they should be aligned with overall business outcomes, to make sure Zero Trust speeds up, rather than slows down, your rate of secure release. This practice also keeps the focus on identifying and protecting business-critical elements. Basically, your site reliability engineer should always be at this table.
It’s important to remember, all of these policies and implementation must be defined by a cross-organizational working group. The folks in charge of servers, virtualization and compliance should be joining your SRE, your devs and your organizational leadership from the start.
The Zero Trust strategy all comes down to defining segmentation. Kindervag argues that a lot of orgs fail at network segmentation because they do it in an ad hoc way, which leaves the network less usable, less powerful and with increased latency. This is why he advocates for abstracting the infrastructure behind a segmentation gateway.
When segmentation is done right, you get better performance.
Kubernetes: Not-so-Secure by Default?
There’s no doubt that microservices and containers naturally lend themselves to segmentation. But, again, the more distributed a system is, and the more cooks in that kitchen, the more challenging it is to keep locked down.
Ben Hall, founder of the Katacoda learning platform for cloud native developers, shared his experience of proactively attacking and defending Kubernetes and Docker clusters at CloudNative London. He looked to offer some of the latest security approaches teams need to be aware of when running cloud native systems.
Hall said that misconfiguration is one of the leading causes of security problems, as happened at Tesla last year, when attackers compromised the company’s control plane because its Kubernetes Dashboard was not password-protected.
Hall said that with the wrong configuration, “Kubernetes can be the most secure platform in the world but if you leave your things open to the outside world people will find and exploit it.”
One example is that when you make a Kubernetes API request without a token, the request is treated as anonymous. If the cluster had been set up with a configuration that grants those anonymous requests access, then anyone could access the cluster, Hall warned.
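The specific configuration from Hall’s talk isn’t reproduced here, but a minimal sketch of this class of misconfiguration is a ClusterRoleBinding that hands the built-in cluster-admin role to unauthenticated callers (the binding name is illustrative):

```yaml
# DANGEROUS (illustrative only): anonymous API requests are assigned the
# group "system:unauthenticated". Binding that group to cluster-admin
# gives anyone who can reach the API server full control of the cluster.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: anonymous-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:unauthenticated
```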
The risks of misconfigurations are increased when users are still learning how the system operates. A lot of users ask for advice in Kubernetes forums. Someone may suggest an action to get you past an error. But then, while it works as a workaround, you’ve now put your systems at risk — like removing your RBAC policies to ease your immediate pain, which has the side effect of removing all security protections.
Hall says RBAC is definitely a stronger way to go, but warned that it’s still confusing within Kubernetes. By using RBAC within Kubernetes, you are changing it to “least privileged as required.”
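A minimal sketch of what least privilege looks like in Kubernetes RBAC (the namespace, role and service account names are hypothetical): a Role that grants only read access to Pods, bound to one service account.

```yaml
# Grant only read access to Pods in the "web" namespace, nothing else.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: web
  name: pod-reader
rules:
- apiGroups: [""]          # "" means the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
# Bind the Role to a single service account rather than a broad group.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: web
  name: read-pods
subjects:
- kind: ServiceAccount
  name: web-frontend
  namespace: web
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader
```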
Highlighting the risk of utilizing underlying container features without understanding the security impact, Hall said that “Privileged containers are bad — if I manage to exploit your application, I can then map the disk, and I can access all secrets and even write my own secrets.”
For example, if you need access to the Kubernetes API, you can create service accounts with the required roles and permissions. However, if you don’t need access, you should completely disable the token mounted by default. Just be careful to make sure the RBAC permissions aren’t providing too much information, like the following sample code that if applied to a service account, could potentially reveal more secrets than anticipated:
```yaml
- apiGroups: ["*"]
```
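The rest of the original sample wasn’t shown, so the surrounding fields below are assumptions, but a rule that wildcards its way across API groups, resources and verbs might look like this in full:

```yaml
# Illustrative over-broad Role: a service account bound to it can read
# every Secret in the namespace, among much else, because "*" matches
# all API groups, all resources and all verbs.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: too-permissive
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
```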
But Hall reminds you if you don’t need access to the API, you ought to disable it.
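Disabling the default token can be done directly in the pod spec; a minimal sketch (the pod name and image are illustrative):

```yaml
# Prevent Kubernetes from mounting the default service account token into
# the container, so a compromised app cannot talk to the API server.
apiVersion: v1
kind: Pod
metadata:
  name: no-api-access
spec:
  automountServiceAccountToken: false
  containers:
  - name: app
    image: nginx:1.25
```

The same `automountServiceAccountToken: false` field can also be set on the ServiceAccount itself, so every pod using that account opts out by default.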
Really, as a Zero Trust rule of thumb: if someone doesn’t need access to something, cut it off. Right away. And then automate that security policy, applying layers of protection with PodSecurityPolicy and Open Policy Agent to ensure misconfigurations aren’t applied accidentally.
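As one layer of that automation, a restrictive PodSecurityPolicy can refuse privileged containers cluster-wide; a minimal sketch (note that PodSecurityPolicy has since been deprecated in newer Kubernetes releases):

```yaml
# Reject privileged containers and privilege escalation, and require
# containers to run as a non-root user.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
spec:
  privileged: false
  allowPrivilegeEscalation: false
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:          # only allow a safe subset of volume types
  - configMap
  - secret
  - emptyDir
```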
Hall offers up Kube-Bench from Aqua Security as a useful tool for an automated security scan, to ensure your cluster has been configured according to best practices. Based on the CIS Kubernetes Benchmark, it converts the benchmark’s findings into an executable set of checks for your Kubernetes cluster.
The Cloud Native Computing Foundation recently conducted a security assessment of Kubernetes. The core vulnerabilities it identified included:
- Policies may not be applied, leading to a false sense of security.
- Insecure TLS is in use by default.
- Most components accept inbound HTTP.
- Most components do not enforce outbound HTTPS.
- Credentials are exposed in environment variables and command-line arguments.
- Names of secrets are leaked in logs.
- No certificate revocation.
- seccomp is not enabled by default.
Security Automation, Service Meshes and Zero-Trust Networking
In an interview on The New Stack Makers podcast, Reuven Harrison, Chief Technology Officer and co-founder of security firm Tufin Technologies, said that, while Zero Trust is a great idea, it can be nearly impossible to maintain manually.
A service mesh, an overlay network that sits between services, arose as a way to make Zero Trust practical. From the developer’s perspective, it’s transparent: the security logic lives in the mesh rather than in the application, so developers can stay focused on business logic. The service mesh usually also brings monitoring and visibility into the performance of service-to-service traffic. And, most importantly, its sidecar proxies act as a secure barrier, encrypting the traffic between services and offering visibility and monitoring, all at the micro-segmented level.
Istio is the most popular way of accomplishing this because it allows for control of:
- Traffic flow
- Service deployment on Kubernetes
Of course, Harrison reminds us that it’s essential that this infrastructure is driven by the security policy “that controls who can talk to who, what is allowed and not allowed.”
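In Istio, such a policy can be expressed declaratively; a minimal sketch, where the namespace, labels and service account names are hypothetical:

```yaml
# Require mutual TLS for all workloads in the "payments" namespace...
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: payments
spec:
  mtls:
    mode: STRICT
---
# ...and only allow the "checkout" service account to call workloads
# labeled app: billing; any other caller is denied by default.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: billing-allow-checkout
  namespace: payments
spec:
  selector:
    matchLabels:
      app: billing
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/payments/sa/checkout"]
```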
Then these automatically generated policies are integrated within the service mesh using security policy management tooling like Tufin.