Containers Break the Shared Responsibility Model Between Cloud Providers and Ops 

Jul 3rd, 2018 3:00am by Gadi Naor, CTO and Co-Founder of Alcide
Gadi Naor brings 15 years of experience in leading the development of cybersecurity products to his role as CTO and co-founder of Alcide. He has blended his management and technological background in various positions, including at Check Point, where he served as a business development manager and senior developer, leading the development of Check Point's firewall core security engine and VPN software. He then served as a senior software engineer at Altor Networks, a pioneer in virtualized data center security that was later acquired by Juniper Networks, where he continued in that role. Prior to co-founding Alcide, Gadi was the co-founder and CTO of Fitfully, a microservice-based system.

Last month, a critical vulnerability (CVE-2018-1111) in basic Linux network infrastructure was discovered by Felix Wilhelm of Google's security team and disclosed by Red Hat Product Security.

The attack exploits a problem in the processing of Dynamic Host Configuration Protocol (DHCP) messages, the mechanism machines use to automatically set up their network connectivity. A command injection vulnerability was found in a script included in the DHCP client (dhclient) packages, allowing an attacker posing as a legitimate DHCP server to send specially crafted responses containing malicious commands that a DHCP client may unsuspectingly execute. This vulnerability affects Red Hat Enterprise Linux 6 and 7 as well as related distributions such as Fedora and CentOS.
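
To make the injection class concrete, the following is a minimal, hypothetical Python sketch of the vulnerable pattern rather than the actual NetworkManager dispatcher script: an option value controlled by a rogue DHCP server is interpolated into a shell command and therefore executed, while passing the same value as plain data is not exploitable.

import subprocess

# Hypothetical value a rogue DHCP server could return for an option such as
# WPAD; the quote and ampersand smuggle an extra command into the shell line.
dhcp_option_value = "http://proxy.example/wpad.dat' & touch /tmp/injected '"

# Vulnerable pattern: interpolating untrusted input into a shell command.
subprocess.run(f"echo 'dhcp wpad={dhcp_option_value}'", shell=True)

# Safe pattern: pass the value as an argument list, never through a shell.
subprocess.run(["echo", f"dhcp wpad={dhcp_option_value}"])

The real flaw followed the same shape inside a dhclient hook script that runs with root privileges, which is why a malicious DHCP response could execute arbitrary commands on the client.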

Taking this vulnerability as an example, we can see how the traditional "shared responsibility" model of security between cloud providers and their customers becomes less effective for containerized workloads. Network plugins have become the standard way of providing networking between containers, yet cloud providers have not stepped up their own responsibility for securing them, leaving security and operations teams struggling when patching alone is not enough to secure their containerized applications.

In this new paradigm, security and ops teams must adopt new tools and tactics to ensure complete visibility into containerized environments.

The Shared Responsibility Model Explained

Cloud providers offer a "shared responsibility" model to their customers. In essence: the cloud provider is responsible for the infrastructure and for managing the security of the cloud, while you, the customer, are responsible for securing everything that runs in the cloud. You have full ownership and control of your data, applications and operating systems, but that also means you are responsible for securing them.

Services like AWS Fargate and Azure Container Instances (ACI) extend this model into the container world. With containers as a service, the container hosts (in Kubernetes, the worker nodes) act as your "container hypervisor," and the Container Network Interface plugin is your network forwarding provider.

This brings us to a point where responsibility boundaries become blurry, as the delineation between your environment and the provider's begins to fall apart. Under a traditional shared responsibility model, cloud providers would be responsible for securing the networking between containers and other environments. However, as the example above shows, this is no longer always the case, leaving security and ops teams no choice but to take on a larger share of the responsibility.

Container Network Interface and CVE-2018-1111

The Container Network Interface (CNI), a Cloud Native Computing Foundation project, consists of a specification and libraries for writing plugins to configure network interfaces in Linux containers. The framework concerns itself only with network connectivity of containers and is heavily used in containerized deployments. There are multiple CNI plugins supported by Kubernetes, Docker, and anything that uses Docker networking to orchestrate container networking.
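
As a rough illustration of how the plugin model works (the plugin name, paths, container ID and subnet below are placeholders, not a recommended configuration), a container runtime hands a network configuration to a plugin binary and lets it wire up the interface:

import json, os, subprocess

# Illustrative network configuration; real deployments typically keep this
# under /etc/cni/net.d/ and every value here is a placeholder.
net_conf = {
    "cniVersion": "0.3.1",
    "name": "example-net",
    "type": "bridge",
    "bridge": "cni0",
    "ipam": {"type": "host-local", "subnet": "10.22.0.0/16"},
}

# Per the CNI specification, the runtime execs the plugin with CNI_*
# environment variables and the configuration on stdin; the plugin replies
# with a JSON result describing interfaces, IP assignments and routes.
env = dict(os.environ,
           CNI_COMMAND="ADD",
           CNI_CONTAINERID="example-container",
           CNI_NETNS="/var/run/netns/example",
           CNI_IFNAME="eth0",
           CNI_PATH="/opt/cni/bin")

result = subprocess.run(["/opt/cni/bin/bridge"],
                        input=json.dumps(net_conf).encode(),
                        env=env, capture_output=True)
print(result.stdout.decode())

Whichever plugin sits behind that call decides how traffic is forwarded between containers and the host, which is exactly where the responsibility question arises.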

In this rich ecosystem of network plugins controlling connectivity between workloads and the host network, operations and security teams may struggle to ensure that running workloads cannot exploit the network to move laterally (east-west), from containers to the host itself or even to neighboring container hosts.

CVE-2018-1111, with its DHCP client vulnerability, is yet another example of a potential container escape: it may let malicious code in a container execute commands directly on the container host. Protection against such attacks depends on the out-of-the-box segmentation and isolation capabilities of the underlying container network interface plugin. Local network services that may run on container hosts, such as DHCP, DNS and NTP, are the usual suspects for network-based attacks that can facilitate east-west container escape and privilege escalation.
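
Where the cluster's CNI plugin enforces Kubernetes NetworkPolicy (Calico and Cilium are common examples), one baseline is to deny all traffic by default and open only the paths a workload actually needs. Below is a minimal sketch using the official Kubernetes Python client; the "payments" namespace is a placeholder for this illustration.

from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() inside a pod

# Default-deny policy: the empty pod selector matches every pod in the
# namespace, and listing both policy types with no rules allows nothing,
# so any ingress or egress must be opened by an explicit, narrower policy.
policy = client.V1NetworkPolicy(
    api_version="networking.k8s.io/v1",
    kind="NetworkPolicy",
    metadata=client.V1ObjectMeta(name="default-deny-all"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(),
        policy_types=["Ingress", "Egress"],
    ),
)

client.NetworkingV1Api().create_namespaced_network_policy("payments", policy)

Keep in mind that NetworkPolicy only takes effect if the installed CNI plugin implements it; with a plugin that does not, the object is accepted by the API server but silently unenforced.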

Obviously, ops and DevOps teams must apply the available patches to all vulnerable hosts in their data centers and cloud environments to prevent attackers from taking over the compute infrastructure.
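
On RPM-based hosts, one quick and admittedly rough way to check whether a specific fix has landed is to look for the CVE identifier in the package changelog; the snippet below sketches that idea and is not a substitute for a proper vulnerability scanner.

import subprocess

CVE = "CVE-2018-1111"

# Red Hat and Fedora packages generally record fixed CVEs in the RPM
# changelog, so its presence is a rough signal that the patched dhclient
# package is installed on this host.
changelog = subprocess.run(
    ["rpm", "-q", "--changelog", "dhclient"],
    capture_output=True, text=True,
).stdout

if CVE in changelog:
    print(f"dhclient changelog mentions {CVE}; patch appears to be installed")
else:
    print(f"{CVE} not found in dhclient changelog; verify and patch this host")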

However, teams responsible for securing modern data centers should keep in mind that the practice of regularly applying security patches to the infrastructure is quickly losing effectiveness. It should be complemented with runtime protection of virtual workloads such as VMs, containers and even serverless functions attached to the network. This workload-level runtime protection must enforce policies to minimize unintended connectivity paths across the networking stack and isolate compromised virtual workloads.
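
Runtime protection products build on far richer context (process, container and service identity), but as a minimal sketch of what visibility into unintended connectivity means at the workload level, one can enumerate a host's established TCP connections and flag anything outside an expected set; the allowlist below is purely illustrative.

import socket, struct

# Example allowlist of remote ports a workload is expected to talk to.
ALLOWED_REMOTE_PORTS = {53, 80, 123, 443}

def established_connections(path="/proc/net/tcp"):
    """Yield (remote_ip, remote_port) for established IPv4 TCP connections."""
    with open(path) as f:
        next(f)  # skip the header line
        for line in f:
            fields = line.split()
            remote, state = fields[2], fields[3]
            if state != "01":  # 01 = ESTABLISHED
                continue
            ip_hex, port_hex = remote.split(":")
            # /proc/net/tcp prints IPv4 addresses as little-endian hex.
            ip = socket.inet_ntoa(struct.pack("<I", int(ip_hex, 16)))
            yield ip, int(port_hex, 16)

for ip, port in established_connections():
    if port not in ALLOWED_REMOTE_PORTS:
        print(f"unexpected egress connection to {ip}:{port}")

Flagging is only half the job; the enforcement step, isolating the compromised workload, is what workload-level runtime protection adds on top.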

In a perfect world, cloud providers themselves would be the ones to provide the majority of these capabilities. However, Ops and security teams can no longer be sure exactly where the shared responsibility begins and ends. The best way to achieve greater peace of mind is to invest in tools and procedures that ensure greater visibility into containerized environments and offer workload-level protection as an additional layer on top of cloud providers’ offerings.

Alcide sponsored this post.

Feature image via Pixabay.
