Container Security in Multitenant Environments

13 Jun 2018 12:09pm

In the race to make this weird, wild world of distributed, containerized applications compatible with the virtualized infrastructure upon which most enterprises depend, perhaps no project has made more progress than Kata Containers. The product of collaboration between the Hyper.sh project and Intel’s Clear Containers, Kata aims to pair individual containers with hypervisors, creating the direct link with the hardware that typifies first-generation virtualization, and isolating each container’s Linux kernel from the others and from the host.

Google’s recent gVisor project follows a similar path, implementing a minimal, user-space Linux kernel for containers that reduces the likelihood of exploit.

Some folks contend these architectures may render many of the more aggressive security systems being conceived for containerized environments unnecessary or redundant. But in a conversation for The New Stack Makers podcast, Aqua Security co-founder and chief technology officer Amir Jerbi told us he believes that even the mode of process isolation that gVisor and Kata introduce would, in practice, carry some security challenges of its own. Try orchestrating a microservices environment with isolated instances in a multitenant setting, he suggests, and see what happens.

“One of the challenges with containers is multitenancy,” said Jerbi. In a typical containerized environment, he said, you still have the theoretical possibility of a container exploiting the host Linux kernel, and thereby leveraging that exploit to impact any other containers sharing that host. Kata would eliminate that, he admitted, by trying “to add a layer that will deal with the shared kernel and multi-tenancy challenge. By creating a kernel-per-container, an exploit of the kernel will not impact other containers running on the same machine.”

But the larger issue of application security, he continued, presents challenges for anyone trying to minimize the application’s attack surface. “It doesn’t need to be a kernel exploit, right? It can be wrong application logic that would allow someone to get access to your container and to your data. If something like that happens, this is where Aqua will take control and mitigate that risk.”

There’s a benefit in getting the infrastructure security problem taken care of, Jerbi concedes. But in a mode similar to how SDN separated the control plane from the data plane, container isolation separates the security issue into the infrastructure plane and the application plane. This means the issues of application behavior can now be addressed separately, but it also means they may need to be addressed urgently, as the question of how such isolated multi-tenant services will behave in production remains largely unexplored, let alone unresolved.

In This Edition

5:15: How do you take the evolution of those profiles into account so you can understand when a function looks normal and when it looks abnormal?
8:48: In an ever-fluctuating microservices environment, can Aqua Security still apply an ‘attack surface’ mentality?
11:29: Security within the ephemeral nature of containers
13:33: Integrating containers with hypervisors and Kubernetes — Does it make sense?
19:03: Shifting the responsibility of security to the center of organizations, rather than to developers
24:18: Organizational pipelines and the hiring of information security professionals

Podcast Transcript

Scott Fulton: Hello, I'm Scott Fulton. This is The New Stack Makers, the people who make distributed application security work for you. This edition of Makers is book two in The New Stack's ongoing eBook series on the Kubernetes orchestrator. It's entitled Kubernetes Deployment & Security Patterns, and our sponsor for this edition is Aqua Security, maker of the Aqua Container Security platform for assessing and normalizing the behavior of distributed applications. Access Forrester Research's latest analyst report, Ten Basic Steps To Secure Software Containers, about the key behavioral differences between containers and virtual machines, by registering at Aqua Security's website now. It's AquaSec, A-Q-U-A-S-E-C.com.

Scott Fulton: You've heard me ask this question before, but now it's become even more critical. Do we know enough about the behavior of individual containers in a distributed environment such as Kubernetes to make any solid judgments about what is normal and what is not? If we did, we could train an analytics system to recognize when such an application was doing something out of the ordinary, so that we can decide when it's doing something to endanger security.

Scott Fulton: With a firewall, we can write rules that cut off access to a network resource until we've identified the source and the purpose of the request. In a distributed environment, where the hosted containers share the same Linux kernel already, we don't have that opportunity to cut off what's already been made readily available. We can't assume the same distrust-by-default state that we can with a web application.

Scott Fulton: Amir Jerbi is Aqua Security's co-founder and chief technology officer. He tells us that in practice, even in the most distributed of Kubernetes environments, these applications tend to fall into a fairly discrete pattern. It may not take AI and it certainly doesn't take telepathy for us to identify that pattern. But as containerized environments evolve toward multi-tenancy, and as data centers themselves take on new forms and architectures, can we trust those patterns to evolve predictably?

Scott Fulton: I spoke with Amir Jerbi just a few days ago. Amir Jerbi, the co-founder and CTO of Aqua Security. Amir, hello. How are you?

Amir Jerbi: Hi. Good morning. I'm good. I'm fine.

Scott Fulton: And it's late afternoon where you are, am I correct?

Scott Fulton: When The New Stack has a discussion that they want to conduct on the topic of container security, a lot of times they haul me out of mothballs to talk about this. So, a lot of our listeners have heard me discuss these things before, kind of the basic topics of what is container security, what is the evolution of container security, and, if we're not too cautious, I may talk this subject to death. But Aqua Security talks specifically about run time container security. They use that terminology, and I've seen it frequently from you and also on the website. And there are a number of components in the container space that are referred to as the "run time".

Exploring Run Time Security

Scott Fulton: So, when Aqua Security talks about run time security, what part of the platform are you referring to? Especially when it pertains to something like OpenShift.

Amir Jerbi: Yeah. So, when we look at the application, right? You have an application running on your OpenShift environment. And because of the fact that this is a thin application, right? It's a container, usually refactored into microservices, then you can actually identify exactly what this application is doing. You can profile the network activity. You can profile file access. You can profile processes running. And even the users that the application is impersonating. So, when we talk about run time protection, we actually talk about looking at all of those vectors of activity, creating a profile of normal activity, and then if we identify something abnormal, we can alert you. We can tell you, "Hey, your application is now accessing a network that it did not access before," or we can even prevent that from happening. So, we can block a process, block a file access, et cetera.
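The enforcement model Jerbi describes, a learned baseline plus an allow-or-alert decision per observed event, can be sketched roughly as follows. The event fields and profile shape here are illustrative assumptions, not Aqua's actual data model:

```python
# Hypothetical sketch of runtime profile enforcement. Field names and
# profile structure are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class RuntimeProfile:
    """Baseline of observed-normal activity for one container image."""
    allowed_processes: set = field(default_factory=set)
    allowed_network: set = field(default_factory=set)   # (host, port) pairs
    allowed_files: set = field(default_factory=set)

def check_event(profile: RuntimeProfile, event: dict) -> str:
    """Return 'allow' for in-profile activity, 'alert' for anything else."""
    kind = event["kind"]
    if kind == "exec" and event["process"] in profile.allowed_processes:
        return "allow"
    if kind == "connect" and (event["host"], event["port"]) in profile.allowed_network:
        return "allow"
    if kind == "open" and event["path"] in profile.allowed_files:
        return "allow"
    return "alert"

profile = RuntimeProfile(
    allowed_processes={"nginx"},
    allowed_network={("api.internal", 443)},
    allowed_files={"/etc/nginx/nginx.conf"},
)

print(check_event(profile, {"kind": "exec", "process": "nginx"}))                     # allow
print(check_event(profile, {"kind": "connect", "host": "evil.example", "port": 80}))  # alert
```

In a real enforcement mode, the "alert" branch would instead block the process, connection, or file access, as Jerbi notes.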

Scott Fulton: In terms of profiling the behavior of an application, especially with Kubernetes, I'm told that the profile of a microservices application is prone to change and evolve over time. How do you take the evolution of those profiles into account so that your system can understand when evolution looks normal and when it looks abnormal?

Amir Jerbi: So, when we look at containers, they have some very unique properties. They never change in run time, right? So, when we talk about containers, they are usually immutable, and if a change happens, it's based on a very well-defined process that starts by a developer that creates a new version of an image. So, let's say you have Version One deployed in your production environment. Now, you want to change something. The way to do that is to go to the developer, ask for a change. The developer will create new code, will package it, will ship it to a registry and then production, and then you will have Version Two of that application, right?

Amir Jerbi: Now, they can co-exist together on the same cluster. You can do rolling upgrades, Version One into Version Two, doesn't matter. But the fact is that a change is a well-defined process. Now, at Aqua, when we profile an application, we know exactly what's the version of the application that we've profiled, and we search for abnormal behavior against the profile of that specific version. When we see that there's a new version, like I said, a new change that came from a well-defined process, we know how to re-profile, or get the application into a stage where we learn again. It's actually a quick learning this time, because we're already at a good baseline of the previous version. We learn the changes of the new version, and then we lock down. We apply the profile on the new version.

Amir Jerbi: So, when you think about it, because everything is a well-defined process, the unexpected doesn't happen. And if it happens, if your application suddenly behaves differently, it means that, you know, something is broken. It means that, probably, someone was able to break into your app, and now is doing something that wasn't part of the original behavior of that application.
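The version-aware re-profiling Jerbi outlines can be sketched as a lookup keyed on the image version: a known version is enforced against its stored baseline, while an unseen version drops back into learning mode, seeded from the previous version's profile. This is a hypothetical sketch, not Aqua's implementation; the digest keys and behavior strings are invented:

```python
def get_profile(profiles, image_digest, previous_digest=None):
    """Return (profile, mode) for a container image.

    Known digests are enforced against their stored baseline; an unseen
    digest enters 'learning' mode, seeded from the previous version's
    baseline so re-learning starts from a good starting point.
    """
    if image_digest in profiles:
        return profiles[image_digest], "enforce"
    seed = set(profiles.get(previous_digest, set()))
    profiles[image_digest] = seed
    return seed, "learning"

# Version One already has a learned baseline...
profiles = {"sha256:v1": {"exec:nginx", "connect:api.internal:443"}}

# ...so a rollout of Version Two starts learning from that baseline.
profile, mode = get_profile(profiles, "sha256:v2", previous_digest="sha256:v1")
```

The seeding step is what makes the second learning pass "quick," in Jerbi's phrasing: only behaviors that changed between versions need to be observed fresh.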

Scott Fulton: Now, does the learning of the behavior, in this case, qualify as machine learning, as an artificial intelligence process? Or is there something that's a lot more explicitly logical, direct and automated about this?

Amir Jerbi: Part of creating a profile is to know when to stop recording, basically, because you can monitor the activity of the network, files, processes, and you can do that for ... How long? For days, for months. Right? So, when do you stop recording? When do you decide that, "Okay, I've learned enough, and now I have a good profile that covers all of the flows of my application?" That's the machine learning capability in our product. What we do, we cluster all of the different behaviors of the application until the point that we decide, "Okay, we've covered everything that's possible there. Everything that application is doing." This is the point that we decide, "Okay, let's stop recording and let's start enforcing the profile that we've recorded."
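One simple way to make the "when to stop recording" decision concrete is a quiet-window heuristic: stop once a long run of consecutive events adds nothing new to the profile. The threshold is arbitrary, and this is only an illustrative stand-in for whatever Aqua's actual learning logic does:

```python
def learn_profile(events, quiet_window=100):
    """Accumulate distinct behaviors from an event stream; stop once
    quiet_window consecutive events add nothing new to the profile."""
    seen = set()
    quiet = 0
    for ev in events:
        if ev in seen:
            quiet += 1
            if quiet >= quiet_window:
                break  # coverage looks complete; switch to enforcement
        else:
            seen.add(ev)
            quiet = 0
    return seen

# A repetitive workload converges quickly to a small profile.
profile = learn_profile(["exec:nginx", "open:/etc/nginx.conf"] * 200,
                        quiet_window=50)
```

A production system would weigh more signals than repetition alone (time of day, request mix, deployment events), but the shape of the decision is the same: declare the baseline complete, then enforce it.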

Scott Fulton: A lot of folks in the security space tend to use phrases that make them sound like the subjects of Robert Ludlum novels. One of them is Attack Surface. You could probably write a novel and publish it by that name, Attack Surface, and it might automatically land on the New York Times bestseller list. But I'm wondering, in a fluctuating microservices environment, I would think that it'd be difficult to isolate just what a surface would be.

Microservices and Security

Scott Fulton: Surfaces, to me, imply a type of solidity. Microservices, as it's been explained to me and demonstrated to me before, is a very fluid concept. Not just rapid scalability up and down, but rapid distribution and evolution of workflows. So, with respect to trying to secure a microservices application, can Aqua Security apply an attack surface mentality to that?

Amir Jerbi: Yes, it can. So, microservices ... Applications running inside of the container, or microservices, they're not different than regular applications with regard to their attack surface, right? So, you still have your vulnerabilities, right? You have a lot of open source code in your application, and some of this open source code is outdated, with well-known vulnerabilities. You still have your network security issues, right? Where your application is exposed, maybe, to the internet, maybe even configured without using any passwords, so anonymous authentication is available, and you still have your bad application logic where your application is not checking for input validation, right? The application is not checking for SSL certificates.

Amir Jerbi: So, all of these also exist in containers, and in microservices. It's no different than regular applications. The only difference is that you have less code. This is a microservice. It's a small piece of your entire application, which means that there are fewer holes in it. Still, there are holes, but there are probably fewer of them. And at Aqua, when we look at that, when we look at attack surface, we actually try to solve the same problems that the regular application has. All of those issues that I've mentioned.

Amir Jerbi: The difference is, because there are fewer holes, the network span is smaller. The application footprint is smaller. There are fewer vulnerabilities. There is no full-blown operating system there. Users are not logging in to the microservices, so the attack surface, by definition, is smaller. So, what we see, we see actually an opportunity, because once you are able to remediate those concerns, block the attack surface for this small microservice, and you do it consistently on all of the microservices, then you have actually blocked the attack surface for your entire application. Right?

Amir Jerbi: So, we see it as a huge opportunity to do security on smaller pieces, but the outcome of your entire application, the impact, is quite big.

Scott Fulton: This fascinates me, the idea that, essentially, even though you're talking about a component whose lifespan in a run time environment is ephemeral, it comes in and blinks out like a good idea in a presidential campaign, you have a situation where you can apply, essentially, the same logic as what you're saying. That you can look for behaviors that you know and even when something has a limited lifespan, you know what its behavior should be.

Amir Jerbi: Exactly. And the fact that containers are ephemeral ... So, containers have a lot of properties that allow you to have a much better security environment than you had before, you know, using the hosts. The fact that containers are ephemeral, what it means, it means that even if there are attacks, and even if someone is able to get inside of your application, persistence will be an issue, right? Because the container might not be there in a few hours or maybe a few days. So, even if you got an attack, in a few days someone will refresh the container and the hacker will vanish. The code that caused the breach, or, the outcome of the breach, will no longer exist.

Amir Jerbi: So, there are a lot of properties that make containers more secure, that will allow you to have much more security than you had before, and that's something that security companies like Aqua actually leverage, right? When we see that the container was not refreshed for a long time, we can notify and tell you, "Hey, your container has been running for too long, and if it was breached, then maybe by refreshing it, you can remove the attack." So not only that when we look at the attack surface and when we look at the run time, not only that we can prevent, you know, those attacks from happening, but also some of the properties of the container itself make an attack much more complex to sustain.
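The "running too long" check Jerbi mentions reduces to comparing container start times against a rotation threshold. The field names and the three-day cutoff here are assumptions for illustration, not Aqua's actual policy:

```python
import time

MAX_AGE_SECONDS = 3 * 24 * 3600  # arbitrary three-day rotation threshold

def stale_containers(containers, now=None):
    """Return names of containers that have run past the threshold,
    candidates for a refresh that would also evict any resident attacker."""
    now = time.time() if now is None else now
    return [c["name"] for c in containers
            if now - c["started_at"] > MAX_AGE_SECONDS]

# Example: one fresh container, one that has run for four days.
checkpoint = time.time()
report = stale_containers([
    {"name": "web-1", "started_at": checkpoint - 60},
    {"name": "web-2", "started_at": checkpoint - 4 * 24 * 3600},
], now=checkpoint)
```

The point is the operational pattern, not the code: regularly rotating ephemeral containers turns the infrastructure's churn into a security property.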

Scott Fulton: You've probably followed some of the developments in container architecture, especially some fairly recent moves by design groups (the Kata Containers community, for instance) who are working on a way to create container security by wrapping each container in a fairly small VM, enabling that VM to be tied back to the hypervisor the way first-generation virtual machines were tied to hypervisors, and making a very minimal change to the Kubernetes run time, so that it knows how to effectively orchestrate these little bitty VMs. And when folks have talked to me about this architecture, they have said, by reintegrating the container with the hypervisor, they are closing what they term an inherent vulnerability, or, an inherent hole in the attack surface, for Kubernetes. I would think if there was such an inherent vulnerability, you at Aqua Security would have already seen it. So, are they talking sense?

Amir Jerbi: There are two issues. So, one of the challenges with containers is multi-tenancy. Right? If you want to run a boatload of different applications with different security levels, you have a challenge, because those applications, those containers, will run on a shared kernel. And if you have a problem, you have a kernel exploit, then basically what it means, it means that one container will be able to leverage this exploit and impact other tenants running on the same machine.

Amir Jerbi: If you compare that against VMs, you don't have this issue, because each VM has its own kernel, and an exploit in one kernel doesn't impact other VMs running on the same hypervisor. So, when we look at solutions like Kata Containers, and others, what they try to do, they try to add a layer that will deal with the shared kernel and multi-tenancy challenge by creating a kernel per container. Basically an exploit of the kernel will not impact other containers running on the same machine.

Amir Jerbi: So, when you think about it, it's part of the overall container isolation. It's something that we at Aqua, we look at it as something that is part of the infrastructure, right? The infrastructure should provide you with a very well-isolated environment. You know? There shouldn't be any leakage between containers. And that's something that we work on closely with the community now, with solutions like Kata Containers, to ensure that this is indeed the case.

Amir Jerbi: On the other hand, you know, what it allows you to do, it allows you, now, to focus more on your application security. Now that you have a hypervisor, or, with Kata Containers, now that you have an isolated environment and then a dedicated kernel you can focus on the application security, making sure that the application you are running inside of the container will have a minimal attack surface, and if something breaks, it doesn't need to be a kernel exploit, right? It can be a wrong application logic that will allow someone to get access to your container and to your data. So, if something like that happens, this is where Aqua will take control and mitigate that risk.

Amir Jerbi: We actually see a good combination of very good isolation between containers, that you can achieve using gVisor or Kata Containers or seccomp profiles, there are many methods to achieve that level of isolation, and once you achieve that, you can focus on your application security.

Scott Fulton: So, you perceive a solution like Kata Containers as addressing an infrastructure issue, and that can only help things, but it doesn't really change the picture for what Aqua Security provides, except that perhaps it might enable you to focus a little more clearly on application security.

Amir Jerbi: Yes, and like I said, there are many solutions. Windows, for example, has Hyper-V containers, which are pretty much the same solution. It's a kernel per container. So, when the environment, when the infrastructure has more isolation between containers, it will actually allow Aqua better control over your application. Because then you achieve both isolation from your neighbors but also better security for the application running inside of your container. So, we actually look at it as a huge opportunity to do security better.

Scott Fulton: Let's change the focus a little bit, because we've been talking about machines, technology and very, very impersonal things. I like people. In organizations that are trying to adapt themselves for a new era of containerization, the rules are changing, and we are seeing much more of a DevOps centered organization.

Centering DevOps in Organizations

Scott Fulton: That said, there are still discussions in these organizations with regard to who ends up being ultimately responsible for security. Is it still the operator, for whom the job of monitoring and maintaining the run time is prevalent, or do we, to borrow a term you've used several times before, shift left a little bit, left being the left side of the DevOps argument, and endow the developer with more responsibility for security than he's had before?

Scott Fulton: In the organizations that I've followed, I've always thought the developer should have had more responsibility from the beginning, that there really should not necessarily be a shift left. But if we achieve that, are we effectively shifting responsibility to the center of organizations, where it belongs, in this instance making a security center to which both developers and operators contribute equally?

Amir Jerbi: Yeah. You know, when you look at organizations, right, you look at the number of developers and the number of information security professionals in each organization, you see that there is a problem of balance, right? There are several times more developers than there are information security people. On the other hand, with DevOps and Agile, software delivery is becoming faster than it used to be. We see organizations that are deploying into production multiple times a day. So, if you multiply that by the number of developers, you get a huge bottleneck on the security teams, right?

Amir Jerbi: So, traditionally, security teams were the final gate before production, but now, if a small team has to check and approve the artifacts of a very large team, then of course, you will have delays, and you want to be able to deploy multiple times a day. So, with that said, with that in mind, there is a need to find new ways of doing security. And you mentioned the shift left. So, with the shift left approach, what organizations are doing, they are actually looking at the pipeline. The continuous integration, continuous delivery pipeline, where you have, at the left side of it, developers building code; in the middle of it, the shipment of the code; and on the right side, production running the code.

Amir Jerbi: So, when you look at this pipeline, you want to spread security across the pipeline, adding gates as your software delivery advances, and at each gate you run some of the security checks. So that only if the artifact has a sufficient security level will it pass the gate into the next stage. Right?

Amir Jerbi: And in this way, what you can do, you can leverage your developers. You can spread the security so it won't be just at the last mile; it will be divided across the entire process. We're not there yet, right? But it's in motion. We see companies moving to this approach. We see companies adopting the pipeline approach and adding those security gates, in addition to the regular gates of, you know, quality and performance inside these processes.

Scott Fulton: Do those gates need to be added at a different level of the observation of the program? In other words, a different automation level? Or, can those gates be integrated into the CI/CD pipeline frameworks that they already have?

Amir Jerbi: Yes. So, definitely people would like to add gates to existing pipelines, existing tools. They don't want to add or create some friction with new gates or new definitions. So, usually what we see is that security is integrated into the three steps that already exist today in the organization. So, in the development step, you have developers taking the code, building software. At this stage, usually, what's being done is things like static code analysis, running some unit tests. So, there are already some tests done at the development gate. And you can add security tests to these gates. So, I mentioned static code analysis but you can also add vulnerability scanning, making sure that the developers are not taking open source packages with vulnerabilities.

Amir Jerbi: The next gate is the shipment. So, your developer just shipped the software after it passed the initial gate, and after the shipment, the code goes into some sanity tests. Performance tests. Regression tests. Nightly tests. Recurring test cycles. What you can add now, you can add security tests that will actually profile the application, that will actually identify what the application is doing and try to map all of the activity of the network, files, processes, et cetera, and verify that there is no new activity. Right? So, you can add a gate there.

Amir Jerbi: And, as you are running into production, usually you have your last mile checks, you have the test that the software is approved by all parties, you have a test that checks that the software doesn't have any known issues, any high-severity bugs. So, at that stage you can also check that the software's integrity wasn't compromised, and that the exact software that was signed by the developer is what you are getting into your production environment.

Amir Jerbi: So, by adding additional security checks into your already existing gates, you can actually automate security across all of the existing processes. So, that's why, as I see it, instead of trying to add more friction, more steps, if it integrates naturally into what developers are already doing today, then you can actually leverage security in a natural way, and you won't get any friction.
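A security gate added to an existing CI stage can be as simple as a severity-threshold check over scanner findings, in the spirit of the vulnerability-scanning gate Jerbi describes. The finding shape and severity names here are hypothetical, not tied to any particular scanner:

```python
# Hypothetical CI security gate: fail the stage if any scanned package
# carries a vulnerability at or above the configured severity.

SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def security_gate(findings, fail_at="high"):
    """Pass or fail a pipeline stage based on vulnerability findings.

    findings: list of {'package': str, 'severity': str} dicts.
    Returns (passed, blocking), where blocking lists the findings at or
    above the fail_at severity.
    """
    threshold = SEVERITY_RANK[fail_at]
    blocking = [f for f in findings
                if SEVERITY_RANK[f["severity"]] >= threshold]
    return (not blocking, blocking)

# A critical finding blocks the gate; a low-severity one does not.
passed, blocking = security_gate([
    {"package": "openssl", "severity": "critical"},
    {"package": "left-pad", "severity": "low"},
])
```

Because the check is just another pass/fail step, it slots into an existing pipeline the same way unit tests or lint checks do, which is exactly the low-friction integration being advocated here.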

Scott Fulton: In the few minutes that we have left, it sounds to me like the pipeline you've just described involves everyone in an organization, but you've also said that organizations today tend to have many more developers on staff than they have information security professionals. If that's the case, if you're building an integrated pipeline, putting security into the main delivery pipeline rather than crafting it as an afterthought, then do security professionals even need to be hired by an organization? Conceivably, can excess info sec professionals be taken out of an organization?

Amir Jerbi: No, no, no, no. Not yet. No. So, of course, you need security professionals. You need people with specific security expertise to define what are the tests that are needed, to define those gates, to define what's acceptable and what's not acceptable. So, the way I see it, if you think about how the supply chain, the CI/CD, is built, the role of the security professional is to integrate the appropriate security automation across the pipeline, right? To make sure that there is vulnerability scanning, to make sure that there is profiling, to make sure that there is an integrity check, right?

Amir Jerbi: So, all of this modeling and design is the job of the security professionals. And then there is also the feedback aspect. So, there are cases where, you know, there will be an attack. Something will happen in your production environment. So, you do want your security team, your incident response team, to do the first analysis of this attack, or to do the first analysis of the risk that was identified in your production environment. And then to do the delegation, right? To leverage developers where needed.

Amir Jerbi: So, if an attack is based on a vulnerability, so your risk is based on a vulnerability that was discovered in production, then your security team can do the assessment and can decide, "Okay, we need to fix that immediately," and delegate the work back to the developers to create a new version, to make sure that the version doesn't have vulnerabilities, and to push it to production.

Amir Jerbi: If the attack came from the network, then your security professionals can further tune the policies to ensure that this does not happen in the future. So, of course, we will still see security professionals as a key part of the overall security of the organization, but the role will change from the role of people that are actually doing the work to people that can delegate and can design the security procedures in the organization.

Scott Fulton: Where can listeners go to find out more about Aqua Security and perhaps see you and some of your colleagues on stage? Perhaps at a conference coming up?

Scott Fulton: And look for the man in the cool hat. Alex Williams will probably track you down and want to put a microphone in your face and ask you a lot more, especially about that demonstration of container vulnerability. Amir Jerbi with Aqua Security, thank you for joining us today.

Amir Jerbi: Thank you.

Scott Fulton: Much appreciated.

Amir Jerbi: It was a pleasure.

Scott Fulton: Amir Jerbi is co-founder and CTO of Aqua Security. He spoke with us from his company's headquarters in Tel Aviv. Our thanks to Amir and his staff for their support of our New Stack eBook, Kubernetes Deployment & Security Patterns. Aqua Security may be found online at AquaSec, A-Q-U-A-S-E-C.com. You can find more informative and inquisitive audio from The New Stack at TheNewStack.io/podcasts. You can rate and review us on iTunes, like us on YouTube and follow us on SoundCloud, but you'd tickle us to death, as my Aunt Sally would say, if you'd just come home to us every once in a while, at TheNewStack.io. Our podcast producer is Kiran Oliver, our managing editor is Joab Jackson and our editorial director is Libby Clark. The lady who keeps this machine running is Judy Williams, and the man who makes it all stack up is Alex Williams. For The New Stack, I'm Scott Fulton.

Aqua Security sponsored this podcast.

Feature image: A self-contained field gun of the British Royal Horse Artillery, circa 1918 — in the public domain.

This post is part of a larger story we're telling about the Kubernetes ecosystem.

Get the Full Story in the Ebook
