
Laying the Groundwork for Kubernetes Security, Across Workloads, Pods and Users

At his KubeCon EU talk, Google Software Engineer Samuel Davidson laid the foundation for systematically reducing Kubernetes risk. The talk primarily offered container security fundamentals, but had at least one or two tricks for even the most advanced orchestrators.
Sep 16th, 2020 10:31am by

Kubernetes is notoriously not secure by default. Its open source flexibility is what attracts thousands of organizations to build on top of it, but that same flexibility is what makes it a challenge to lock down: there are constant updates to apply, holes to patch, and defaults to turn on and off.

For his talk last month at the virtual KubeCon + CloudNativeCon EU, Google Software Engineer Samuel Davidson laid out a foundation for systematically reducing Kubernetes risk. The talk primarily offered container security fundamentals, but had at least one or two tricks for even the most advanced orchestrators.

Kubernetes Workload Security for Pods

Speaking as a member of the Google Kubernetes Engine (GKE) security team, Davidson offered the first rule: Assume that you will be owned — that attackers will overtake your system at some point.

He said, “You’ve got to assume that there is a yet-to-be-discovered vulnerability in one of your dependencies or in one of your base images that will allow remote code execution, data exfiltration, [or] whatever your definition of owned might be.”

Everything that’s convenient for you also becomes convenient for hackers.

This means you want to keep your containers as simple inside as possible, so that if someone gets into one, they won’t find much and won’t be able to jump from that container to others.

Davidson’s talk echoes Sounil Yu’s talk on cloud native security, which argues that containers are safest when highly distributed, immutable and ephemeral.

Davidson dives into the practicality of that by recommending you use a distroless base image. You don’t usually want to have an overcomplicated build — you just need the basics that make your application run.

“For the most part, your workloads do not need that suite of amazing options that Debian has baked in. They just act as basically an attack surface that when someone owns [your] cluster they can use the shell to ping around, to curl all kinds of endpoints within your cluster and cause you a bunch of problems,” he said.

He says distroless images are effective because they have a very tiny attack surface and no package manager.

Focusing on the ephemerality of containers, Davidson continued that you need to make sure your containers are really easy to build and deploy. It’s not just because it’s easier that way: so many of the vulnerabilities are built into the dependencies. Building your containers more simply makes it easier to bump the dependencies, redeploy them, and then allow your continuous integration/continuous deployment (CI/CD) platform to do the rest.

Davidson signaled another benefit of a CI/CD platform: signatures, also known as binary authorization or signed containers. The platform acts as a sort of locked door to Wonderland, only allowing in the trusted CI/CD robots that hold the signing key.

You can also leverage a trusted signature pipeline across dependency validation, vulnerability scanning and integration tests. The final output app container then carries signatures that “sort of guarantee that we have run all of these tests, we have run all these dependency validations and no major issues arose. We can then take your Kubernetes cluster and configure it such that it will only admit containers that have these four [signatures].”
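As a rough illustration of this kind of signature enforcement, a GKE Binary Authorization policy can be configured to admit only images attested by your pipeline. The project and attestor names below are hypothetical placeholders:

```yaml
# Sketch of a GKE Binary Authorization policy. Only images attested
# by the CI/CD pipeline's attestor are admitted; everything else is
# blocked and logged. Project and attestor names are hypothetical.
globalPolicyEvaluationMode: ENABLE
defaultAdmissionRule:
  evaluationMode: REQUIRE_ATTESTATION
  enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
  requireAttestationsBy:
    - projects/my-project/attestors/ci-pipeline-attestor
```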

Next, Davidson talked through the basics of pod security. He started by recommending you don’t use hostPath. He admits it’s convenient because it gives your container a directory into the node’s filesystem, but he calls it a Trojan horse: you don’t know where that data will end up later or whether other pods can access it, and perhaps a year from now other devs won’t know they shouldn’t use the folder.
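To make the hostPath risk concrete, here is a hypothetical pod spec showing the pattern to avoid, with a pod-scoped emptyDir as the safer alternative (all names and the image are placeholders):

```yaml
# Hypothetical pod spec illustrating the hostPath warning.
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - name: app
      image: example.com/app:1.0   # placeholder image
      volumeMounts:
        - name: scratch
          mountPath: /var/scratch
  volumes:
    # Avoid this: it exposes a directory on the node's filesystem,
    # which may outlive the pod and be readable by other pods.
    # - name: scratch
    #   hostPath:
    #     path: /var/data
    - name: scratch
      emptyDir: {}   # pod-scoped scratch space, deleted with the pod
```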

He says don’t use hostNetwork either, which he labels “super risky.” Everything that’s convenient for you also becomes convenient for hackers. Davidson says that localhost is treated like a trusted domain, which means API requests that come through it are also treated as trusted. If you are using it, he recommends finding other networking alternatives and making sure not to set hostNetwork: true.
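For reference, this is the field in question, shown in a hypothetical pod spec so you can spot it in review:

```yaml
# Hypothetical pod spec showing the setting Davidson warns against.
apiVersion: v1
kind: Pod
metadata:
  name: risky-pod
spec:
  hostNetwork: true   # shares the node's network namespace -- avoid
  containers:
    - name: app
      image: example.com/app:1.0   # placeholder image
```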

Davidson says next you need to be aware that every pod is bound to a Service Account (SA), and you need to know which one. If unspecified, it will be the SA named “default.” Default becomes a kind of junk drawer that is really dangerous if an attacker breaks into it. He says you should consciously bind a different SA to your pod or, even better, put your pod in a different namespace, “because namespaces are a really great security isolation.” Or you can just turn off SA token mounting by setting automountServiceAccountToken to false.
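Both recommendations fit in a few lines of a pod spec. The namespace, service account and image names below are hypothetical:

```yaml
# Hypothetical pod spec: an explicit, least-privilege service account
# in a dedicated namespace, with token mounting disabled for a
# workload that never calls the Kubernetes API.
apiVersion: v1
kind: Pod
metadata:
  name: app
  namespace: payments                  # placeholder dedicated namespace
spec:
  serviceAccountName: payments-sa      # explicit SA, not "default"
  automountServiceAccountToken: false  # omit the token if it is unused
  containers:
    - name: app
      image: example.com/app:1.0       # placeholder image
```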

Kubernetes Cluster Security

For cluster security, Davidson says you need to keep your clusters up to date, making sure bugs and vulnerabilities are constantly being fixed. Of course this sounds simple, but some developers are reticent to, as he says, “rock the boat,” when things are working well, and updating things can be scary.

At the time of publishing, since 1.16.0 there have been 191 bugfix pull requests merged into the release branch; the latest patch version is 1.16.14. So updating may be a pain, but it’s essential, and Davidson says to at least update your cluster to the latest patch version. He promises it’s not that difficult.

Davidson says you must actually isolate your cluster from the internet and put it on a private network, reached over something like a VPN. There should be no public IPs for any cluster virtual machines. Your developers and bots can still log in through the network. As for your users, give them access through an external load balancer or reverse proxy that can forward traffic to nodes. And if your cluster needs internet access, say to download images, use egress-only access, which lets you do things like manage allow/deny lists.
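Inside the cluster, one way to sketch an egress allow-list is a NetworkPolicy; this hypothetical example blocks all outbound traffic from a namespace except DNS and an internal address range, and assumes a CNI plugin that enforces NetworkPolicy (such as Calico):

```yaml
# Hypothetical NetworkPolicy: default-deny egress for every pod in
# the namespace, allowing only DNS and a placeholder internal CIDR.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-egress
  namespace: prod          # placeholder namespace
spec:
  podSelector: {}          # applies to every pod in the namespace
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.0.0/16   # placeholder internal range
    - ports:
        - protocol: UDP
          port: 53              # allow DNS lookups
```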

Davidson continues by recommending Secrets. Secrets are valuable because they are held in memory and never written to disk on a node. Nodes cannot request Secrets unless pods that need them are scheduled there, and they are easy to lock down with role-based access control (RBAC). Secrets are small, capped at about a megabyte each, so they should only be used for things like access keys, passwords and tokens. He promises Secrets are super easy to use and much more secure.
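A minimal sketch of the pattern, with hypothetical names and an obviously fake value, looks like this:

```yaml
# Hypothetical Secret plus a pod that consumes it as an environment
# variable. Never commit real credential values to source control.
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials        # placeholder name
type: Opaque
stringData:
  password: s3cr3t            # placeholder value
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: example.com/app:1.0   # placeholder image
      env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: password
```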

Note that Secrets are not encrypted by default, but hosted Kubernetes offerings like GKE do encrypt Secrets at rest.

Davidson explained, “The real benefit of Secrets is how the Kubernetes infrastructure passes them around. You can set really strong authorization policy on all of them, and they are hard to get access to.”

He continued that the real benefit comes from writing a really good authorization policy against your Secrets.
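One hypothetical shape such a policy could take is an RBAC Role scoped to a single named Secret, so a compromised account cannot enumerate every Secret in the namespace (all names below are placeholders):

```yaml
# Hypothetical Role: grants only "get" on one named Secret, with no
# "list" or "watch", limiting what a stolen credential can reach.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-db-credentials
  namespace: prod                      # placeholder namespace
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["db-credentials"]  # placeholder Secret name
    verbs: ["get"]
```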

Kubernetes User Security

You’ve taken all these steps to secure your container and its cluster, but now it’s time to let others — specified others — gain access to it. Some ways to do this are simple, like using RBAC tied to the least-privileged roles needed to accomplish the work. Most organizations are great at RBAC, but when people leave the company or switch roles, you’re left with a tangled web of bindings. This can lead to authorization decay, where, years later, ex-employees still have unrevoked superuser access.

That’s why you should not just use RBAC but also organize those roles into groups. Groups are typically organized by subject or broader function within the organization, like site reliability engineering, engineering, CI/CD, and security groups. He says these take time to identify and must always support the principle of least privilege. Then you tie your roles to group memberships: the role-to-group bindings are permanent, while the memberships are time-bound and ephemeral, which makes them easy to move and remove.
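In RBAC terms, that means binding roles to Group subjects rather than to individual users, so access is revoked simply by removing someone from the group in your identity provider. The group and role names below are hypothetical:

```yaml
# Hypothetical RoleBinding: the role is bound to a group, not to
# individuals, so offboarding means removing a group membership
# rather than hunting down per-user bindings.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: sre-pod-readers
  namespace: prod                 # placeholder namespace
subjects:
  - kind: Group
    name: sre-team                # group from your identity provider
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader                # placeholder pre-existing Role
  apiGroup: rbac.authorization.k8s.io
```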

Finally, Davidson offered the part he is most excited about: policy agents. Typically a policy agent is a Kubernetes admission controller, which selectively admits or denies Kubernetes resource requests based on rules or policies. He compared it to going through an airport. First, someone checks that you have an ID and a ticket: valid credentials and permission to proceed, like RBAC. Then you and your stuff go through a scanner, which checks that you meet certain policies, like the policy agent.

He says this allows all the best practices mentioned above to be enforced at runtime, and lets you audit the existing resources within the cluster.
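As one illustration of what such a policy can look like, here is a hypothetical OPA Gatekeeper constraint; it assumes Gatekeeper and the privileged-container template from the gatekeeper-library are already installed in the cluster:

```yaml
# Hypothetical Gatekeeper constraint: rejects any pod that requests
# a privileged container. Assumes the K8sPSPPrivilegedContainer
# ConstraintTemplate from the gatekeeper-library is installed.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPPrivilegedContainer
metadata:
  name: deny-privileged-containers
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
```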

There’s no way to guarantee security — frankly, attackers move even faster than whole security teams. But if you follow the steps listed above, you’ve created a strong foundation for cross-organizational container security.

KubeCon + CloudNativeCon is a sponsor of The New Stack.

Feature image by Hans Rohmann from Pixabay.
