
The Road to Kata Containers 2.0

Over the past two years, the Kata Containers community has improved isolation in the container world, making virtualization more lightweight and container-friendly, albeit at the cost of some overhead. The vision for the future of the Kata Containers project is to continue refining sandboxing technologies in order to further isolate cloud native applications transparently and at minimal cost.
Jul 16th, 2020 10:58am by Horace Li
Feature image by Sylvia Zhou via Unsplash.

The open source Kata Containers project, launched in late 2017, aims to unite the security advantages of virtual machines (VMs) with the speed and manageability of containers. What has the project achieved in the last two years, and what features comprise the roadmap for the next release? Let’s catch up with Kata Containers, beginning with a quick look back at how Kata Containers came to be…

How Kata Containers Came to Be

Horace Li
Horace Li is China Community Manager at the OpenStack Foundation, where he supports the growth of China’s OpenStack ecosystem and accelerates participation in Open Infrastructure projects, including Kata Containers. Before joining the OpenStack Foundation, Horace worked at the Intel Open Source Technology Center for 13 years as technical account manager, supporting engagement in open source community projects in China.

When Docker hit the scene and containers became the hot new thing (circa 2013), developers throughout the world were enamored with the benefits containers could offer. It’s no wonder. Containers — a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another — are critical for developers who want to build, test and deploy software faster. Containers are lightweight, low overhead, can be scheduled and launched almost instantly, run anywhere, facilitate microservices, and offer scaled consumption of resources, to name just a few popular advantages.

Despite their many technological advancements, containers do have a disadvantage — a security weakness that arises from containers sharing access to the host kernel. Theoretically, if you have multiple containers on a single host and one of those containers is exploited by malicious code, all other containers on that host are vulnerable as well, due to the shared kernel. In this scenario, containers can pose a serious security threat to an organization’s entire cloud infrastructure. If you are a cloud provider, the threat extends to the data and business of your cloud customers — a totally unacceptable prospect.

Fig. 1: Traditional containers, with isolation by namespaces and cgroups on a shared kernel.

For this reason, many operators running containers at scale “nest” those containers inside VMs, isolating them logically from other processes running on the same host. But running containers in VMs robs users of many of the speed and agility benefits of containers. Recognizing this problem, developers at two companies — Intel and (the now-defunct Chinese startup) Hyper.sh — began working on a solution separately and simultaneously. Both companies set out to find a way to secure containers without forcing them to carry all the baggage that comes along with traditional VMs. Or to put it another way, they set out to “retool” virtualization to fit container-native applications.

Engineers from the Intel Open Source Technology Center used Intel Virtualization Technology to enhance performance and security isolation in its Intel Clear Containers project.

At the same time, engineers at Hyper.sh launched the open source project runV using a similar strategy of placing containers in a secure “sandbox.” Hyper.sh emphasized a technology-agnostic approach by supporting many different CPU architectures and hypervisors.

In 2017, the two companies merged their complementary efforts to create the open source project, Kata Containers. By joining forces, Intel and Hyper.sh aimed to deliver a superior end-user experience in both performance and compatibility, unify the developer communities, and accelerate feature development to tackle future use cases. Kata Containers became the first project outside OpenStack to be supported by the OpenStack Foundation (OSF). The project made its public debut at KubeCon North America in December 2017, with the community touting “the speed and agility of containers with the security of VMs.”

Here’s the substance behind the catchphrase. With Kata Containers, each container or container pod is launched into a lightweight VM with its own unique kernel instance. Since each container/pod is now running in its own VM, malicious code can no longer exploit the shared kernel to access neighboring containers. Kata Containers also makes it possible for container-as-a-service (CaaS) providers to more securely offer containers running on bare metal since each container/pod is isolated by a lightweight VM. Kata Containers allows mutually untrusting tenants — or even production and pre-production (unproven) apps — to safely run in the same cluster, thanks to this hardware isolation.

Fig. 2: Kata Containers, where each container or pod is more isolated in its own lightweight VM.

As a result, Kata Containers are as light and fast as containers and seamlessly integrate with the container ecosystem — including popular orchestration tools such as Docker and Kubernetes — while also delivering the security advantages of VMs.
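
To make that workflow concrete, here is a minimal sketch of launching a workload under the Kata runtime from Docker. It assumes the Docker daemon has already been configured with a runtime entry named "kata-runtime"; the runtime name and the image used are illustrative, not a definitive setup.

```python
# Minimal sketch: run a container under the Kata runtime instead of runc.
# Assumes Docker's daemon.json already registers a runtime named "kata-runtime";
# the runtime name and the image used below are illustrative.
import subprocess


def run_in_kata(image, command):
    """Launch a container inside a lightweight Kata VM and return its stdout."""
    result = subprocess.run(
        ["docker", "run", "--rm", "--runtime", "kata-runtime", image] + command,
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout


if __name__ == "__main__":
    # A Kata container reports the guest VM's kernel version, not the host's,
    # which is a quick way to confirm the workload is running inside a sandbox.
    print(run_in_kata("alpine", ["uname", "-r"]))
```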

Community Advancements

During Kata Containers’ first year, the community’s efforts were devoted primarily to merging the codebases from Intel and Hyper.sh and to spreading the word at events throughout the world about the project’s unique approach to hardware-level isolation, a feature that other container runtimes lack. The fledgling community also invited a broader community of developers to take part in advancing the project.

Since its launch, the Kata Containers community has grown to include contributors and supporters from many influential companies, including 99Cloud, Alibaba, AMD, AWS, Baidu, Canonical, China Mobile, City Network, DellEMC, EasyStack, FiberHome, Google, Huawei, IBM, Microsoft, Mirantis, NetApp, Nvidia, PackageCloud, Packet, Red Hat, SUSE, Tencent, UnitedStack, Vexxhost and ZTE. With the support of this expanded community, the project has made steady progress.

Community achievements include:

  • Joining the Open Container Initiative (OCI). The Kata Containers community continues to work closely with the OCI and Kubernetes communities to ensure compatibility and regularly tests Kata Containers across AWS, Azure, GCP and OpenStack public cloud environments, as well as across all major Linux distributions.
  • Adding support for major architectures including AMD64, ARM, IBM p-series and IBM z-series in addition to x86_64.
  • Providing seamless integration with the upstream Kubernetes ecosystem. Kata Containers can now connect to almost all Kubernetes networks out of the box. (A minimal sketch of scheduling a pod onto Kata through a Kubernetes RuntimeClass follows this list.)
  • Removing unnecessary indirections in the stack. The community has eliminated the kata-proxy and, with help from Kubernetes SIG-Node and the containerd community, has introduced shim-v2, reducing the number of auxiliary Kata Containers processes.
  • Reducing consumption to improve speed. The community is working to accelerate booting, reduce memory consumption, and push toward its goal of creating an (almost) “zero overhead” sandboxing technology. To that end, it has added support for multiple hypervisors including QEMU, QEMU-lite, NEMU and AWS Firecracker; integrated with the containerd project; and contributed to the rust-vmm project. In 2019, the community introduced a new in-sandbox agent written in Rust, which significantly reduces the agent’s anonymous-page memory usage. With these advancements, the community has made good headway in minimizing overhead; for example, the introduction of the Firecracker VMM reduced memory overhead to the 10MB level, and merging the Rust agent reduced the agent’s overhead from the 10MB level to the 1MB level.
  • Accommodating virtualization for cloud native workloads. Virtualization technologies for cloud-native workloads are very different from those for virtual machines. To address this difference, the community has employed virtio-vsock and virtio-fs and soon will add the memory-scaling technology virtio-mem.
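
As an illustration of the Kubernetes integration mentioned above, here is a minimal sketch that registers Kata Containers as a Kubernetes RuntimeClass and schedules a pod onto it. It assumes the cluster nodes already have Kata Containers installed and that containerd exposes a shim-v2 handler named "kata" (for example, a runtime entry with runtime_type "io.containerd.kata.v2" in the containerd configuration); all names here are illustrative.

```python
# Minimal sketch: expose Kata Containers to Kubernetes through a RuntimeClass
# and run a pod on it. Assumes the nodes already have Kata installed and a
# containerd shim-v2 handler named "kata"; all names are illustrative.
import subprocess

MANIFESTS = """
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: kata
handler: kata             # matches the containerd runtime handler name
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx-in-kata
spec:
  runtimeClassName: kata  # this pod's containers boot inside a lightweight Kata VM
  containers:
  - name: nginx
    image: nginx
"""


def apply(manifests):
    """Pipe the manifests to `kubectl apply` against the current cluster."""
    subprocess.run(["kubectl", "apply", "-f", "-"],
                   input=manifests, text=True, check=True)


if __name__ == "__main__":
    apply(MANIFESTS)
```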

To read more about these achievements, see Xu Wang’s post on Medium, “The Two Years of Kata Containers.”

Proof in Production: Baidu AI Cloud Runs on Kata Containers

Baidu, a dominant Chinese search engine operator, host of the largest Chinese-language website in the world, and a leading global AI company, is running Kata Containers in production at massive scale (more than 43,000 CPU cores) in its Baidu AI Cloud, supporting its Cloud Function Computing, Baidu Container Instance and Baidu Edge Computing services.

Baidu AI Cloud is Baidu’s intelligent cloud-computing platform for enterprises and developers, dedicated to providing all-in-one artificial intelligence, big data and cloud computing services for enterprises across all industries. According to Synergy Research Group, Baidu AI Cloud ranks among the top four public clouds in the Asia Pacific region.

Baidu AI Cloud is a complex network with huge amounts of traffic and complicated deployment scenarios: peak traffic of more than 1 billion page views (PVs) per day on a single cluster and more than 50,000 containers for a single tenant. Baidu chose Kata Containers after doing extensive research on secure container technologies and determining that Kata Containers is a highly secure and practical container technology.

Baidu recounts the reasons for their choice of Kata Containers in the white paper “The Application of Kata Containers in Baidu AI Cloud.” In this candid case study, Baidu has documented and shared its use cases for Kata Containers, the technical challenges encountered in applying the technology, and the innovative ways Baidu engineers addressed these issues.

Zhang Yu, Baidu Cloud senior architect and white paper author, writes:

[I]t was imperative for Baidu to figure out how to improve container isolation to protect customer workloads and data while leveraging the lightweight nature and agility of containers. … The isolation mode among virtual machines adopted by Kata Containers not only ensures a safe isolation of the container in a multitenant environment, but also helps to make the isolation of virtual machines invisible to applications and users. … As a secure container solution, Kata Containers play a vital role in the container services provided by Baidu by meeting diverse customer use cases through the support of multiple KVM based VMMs.

In its successful application for the Superuser Award, Baidu described the way Kata Containers has transformed its business:

In 2019, our Kata Containers based products are enjoying market success in areas of FaaS (Function as a Service), CaaS (Container as a Service) and edge computing. Baidu’s cloud function computing service (CFC) based on Kata Containers provided computing power for nearly 20,000 skills of over 3,000 developers to run cloud function computing for Baidu DuerOS (a conversational AI operating system with a “100 million-scale” installation base). Baidu Container Instance service (BCI) has built a multitenant-oriented serverless data processing platform for the internal big data business of Baidu’s big data division. The Baidu Edge Computing (BEC) node is open to all clients while keeping them separated from each other for security and ensuring high performance.

Presenting at the Open Infrastructure Summit Shanghai in November 2019, Yu reported that 17 important Baidu online businesses had already been migrated to the Kata Containers platform. Yu explained that Kata Containers provides a virtual machine-like security mechanism at the container level, which gives customers greater confidence and less concern when moving their business to a container environment.

Roadmap for Kata Containers 2.0

Over the past two years, the Kata Containers community has improved isolation in the container world, making virtualization more lightweight and container-friendly, albeit at the cost of some overhead. The vision for the future of the Kata Containers project is to continue refining sandboxing technologies in order to further isolate cloud native applications transparently and at minimal cost.

More specifically, the goals for the 2.0 release of Kata Containers, projected for later this year, could be summarized as follows:

  • Introduce security improvements, such as architectural adjustments that better isolate the host from workloads and in-VM image handling that lets Kata Containers fit into hard multitenant environments. (Contributors from IBM have already started this effort.) Moving image handling into the sandbox will prevent the host from accessing any container application data.
  • Introduce optimizations that reduce the footprint of running a Kata Container by rewriting key components in Rust and adopting various other architectural improvements.
  • After adding support for the Cloud Hypervisor VMM at the end of 2019, continue adding features such as device passthrough and CPU/memory hotplug.
  • Add support for a new memory-scaling technology called virtio-mem, which could be a major improvement over classic memory balloon drivers. This will also provide better support for memory resource constraints placed on a container.
  • Continue driving the project to be the industry-standard container runtime for sandboxing workloads.

For a deeper dive into the challenges that the community hopes to address in the 2.0 release, please see Xu Wang’s articles on Medium: “Kata Containers: Virtualization for Cloud Native” and “The Blueprint of Kata 2.0.”

Learn More / Get Involved

Baidu is an excellent example of a company that has invested in open source projects and communities with great success. The Kata Containers community welcomes other individuals and organizations to do the same: play a proactive role in the project by contributing code, documentation and use cases.

To learn more and get involved, visit the Kata Containers community page.

TNS owner Insight Partners is an investor in: Mirantis, Docker.