
Open Source Is the Gravitational Center of Container Innovation

Sponsored post: Container-based apps are as secure and scalable as the runtimes and networks they use. As cloud native apps grow, runtimes and networks need to mature too.
Aug 14th, 2020 4:00am by Craig Peters

KubeCon + CloudNativeCon sponsored this post, in anticipation of the virtual KubeCon + CloudNativeCon EU, Aug. 17-20.

Craig Peters
Craig is a Principal Program Manager on the Container Compute team at Azure, focused on container infrastructure projects. Craig is active in many Kubernetes Special Interest Groups and is contributing to Windows nodes in Kubernetes.

Day after day, our efforts can fade into the background in our fast-paced world. This opportunity to celebrate what the open source communities we participate in have accomplished in just the past few months has been a delight. Open source helps us solve the most crucial cloud application challenges in the most effective way, with container abstractions enabling measurable improvements in partnership with the entire community.

Advantages abound for engineering (quality practice, career growth, retention, scale), for the business (good will, competing on the right basis, recruiting and retention), and for users (transparency, quality, innovation, new standards, freedom from lock-in). Let’s look at a few of these projects and consider how users, developers, and operators can benefit from the focus on user needs for security and scale.

Laying a Solid Foundation for Containers and Networks

Container-based apps are as secure and scalable as the runtimes and networks they use. As cloud native applications grow, the runtimes and networks need to mature too.

Containerd (a graduated project from the Cloud Native Computing Foundation) and moby enable users to securely run containers on every platform, from public clouds to the tiniest Internet of Things (IoT) devices in the field. Containerd 1.4 will release shortly, with significant security advances benefiting these platforms. In containerd, cgroups v2 and namespace support create a runtime environment in which processes no longer know that they are running in a container, reducing the likelihood of escalation vulnerabilities. If your application uses Kubernetes, the Kubernetes CRI integration in containerd will enable use of cgroups v2 in Kubernetes. We have also been working with containerd and moby to ensure that they run on multiple CPU architectures, and on both Linux and Windows. With this flexibility, it becomes critical to be able to direct moby, which we're currently integrating with containerd, to target a specified platform. As you can see, a major focus has been on hardening the infrastructure for highly secure, multi-architecture applications.

The limited IPv4 address space poses another challenge that constrains cloud native application growth. With the community's innovations in dual-stack Kubernetes networking, applications can now scale beyond the limits of the IPv4 address space. We are working closely with the rest of the open source community to enable pods and services to have IPv6 addresses. Dual-stack support is on track to move to beta this year in Kubernetes 1.20.
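To make dual stack concrete, here is a sketch of what it looks like from an application's point of view: a Service can request both address families through the `ipFamilyPolicy` and `ipFamilies` fields introduced by the dual-stack work. The names below are placeholders, and the manifest only functions on a cluster with dual-stack networking enabled:

```yaml
# Sketch: a Service requesting both IPv4 and IPv6 addresses.
# Requires a dual-stack-enabled cluster; names are examples.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  ipFamilyPolicy: PreferDualStack   # fall back to single stack if unsupported
  ipFamilies:
    - IPv4
    - IPv6
  selector:
    app: my-app
  ports:
    - port: 80
      protocol: TCP
```

With `PreferDualStack`, the cluster assigns one cluster IP per available family, so the same Service can be reached by IPv4-only and IPv6-only clients.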

Securing Data and Cluster Operations

Infrastructure has evolved to enable new models for security and operations. The open source projects need to keep up.

Applications can leverage the latest in hardware security by running in isolated enclaves when needed, with cloud providers supplying the underlying infrastructure. For example, Azure Confidential Computing has rolled out specialized hardware and APIs that allow applications to be protected inside the processor, building on the existing at-rest and in-transit protection of data. Kubernetes users on Azure can use an extensible, transparent mechanism for proving out the technology through a reference implementation with aks-engine and associated tests. For more on confidential computing for Kubernetes, check out “Bringing confidential computing to Kubernetes” by Lachlan Evenson.

Cluster API (a CNCF Special Interest Group [SIG] Cluster Lifecycle subproject) addresses operator challenges that arise when running more than just a few clusters. Operators need to ensure that clusters are patched and upgraded appropriately, that Role-Based Access Control (RBAC) is configured in the standard way, and that infrastructure capabilities like dedicated hosts, key management, and identity are all used correctly. Cluster API now allows operators to enable MachinePools, so that all providers can expose the power of their infrastructure's VM groups. To get started with Cluster API, check out the Cluster API quick start tutorial.
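As a sketch of what a MachinePool looks like, the manifest below declares a pool of three workers. It assumes the experimental API group used while MachinePools were incubating and an Azure-style infrastructure provider; the cluster, pool, and resource names are all placeholders, and exact API versions vary by Cluster API release and provider:

```yaml
# Sketch: a MachinePool of three worker nodes (names and versions are examples).
apiVersion: exp.cluster.x-k8s.io/v1alpha3   # experimental group at the time
kind: MachinePool
metadata:
  name: worker-pool-0
spec:
  clusterName: my-cluster
  replicas: 3
  template:
    spec:
      clusterName: my-cluster
      version: v1.18.6                      # Kubernetes version for the pool
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
          kind: KubeadmConfig
          name: worker-pool-0
      infrastructureRef:
        apiVersion: exp.infrastructure.cluster.x-k8s.io/v1alpha3
        kind: AzureMachinePool              # provider-specific resource
        name: worker-pool-0
```

The `infrastructureRef` is what lets each provider map the pool onto its native VM-group primitive (scale sets, autoscaling groups, and so on) while the operator works with one declarative API.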

Managing Compliance and Secrets

Developers should not have to manually figure out whether their applications comply with ever-evolving organizational policies. Nor should they need to figure out how to manage secrets on their own.

With Open Policy Agent (OPA) Gatekeeper, developers benefit from auditing and enforcement of policies for applications in a cloud native way. OPA (a CNCF incubating project) allows operators to define standard policies. Gatekeeper implements those policies in Kubernetes clusters, so that developers can run their apps without fear of unknowingly violating them. This unlocks velocity for new feature development by driving down the time spent on security audits before apps move to production, making developers, operators, and the business happier.
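To illustrate, here is a sketch of a Gatekeeper constraint requiring an `owner` label on every Namespace. It assumes the `K8sRequiredLabels` ConstraintTemplate from the Gatekeeper documentation's canonical example has already been installed in the cluster; the constraint and label names are placeholders:

```yaml
# Sketch: require an "owner" label on Namespaces.
# Assumes the K8sRequiredLabels ConstraintTemplate is installed.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-owner
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels: ["owner"]   # every Namespace must carry an owner label
```

Once applied, Gatekeeper's admission webhook rejects non-compliant Namespaces at creation time, and its audit loop reports existing violations, so policy checks happen continuously rather than in a pre-production review.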

The Secrets Store CSI Driver project (sponsored by the CNCF SIG Authorization group) lets secrets be managed where they belong, in secret stores, while still being exposed in Kubernetes to the relevant applications in the ways they need. Initially we worked with HashiCorp to include support for HashiCorp Vault alongside the Azure Key Vault provider. As operators take advantage of this new capability, additional providers, such as one for Amazon Web Services, are under development.
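A sketch of how this looks with the Azure Key Vault provider: a `SecretProviderClass` names the vault and the objects to surface, and pods then mount it as a CSI volume. The API version reflects the early `v1alpha1` driver, and the vault, tenant, and secret names below are placeholders:

```yaml
# Sketch: expose a Key Vault secret to pods via the Secrets Store CSI Driver.
# All identifiers are placeholders.
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: azure-kv-secrets
spec:
  provider: azure
  parameters:
    keyvaultName: "my-vault"
    tenantId: "00000000-0000-0000-0000-000000000000"
    objects: |
      array:
        - |
          objectName: db-password   # secret to pull from the vault
          objectType: secret
```

A pod references this class through a volume with the `secrets-store.csi.k8s.io` driver, so the secret appears as a file in the container without ever being copied into a plain Kubernetes Secret.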

Building and Managing Distributed Applications

Cloud native applications require new ways of looking at how we build and manage distributed applications. Open Service Mesh (proposed for donation to the CNCF), Helm (a CNCF graduated project), and CNAB (a Joint Development Foundation project) are key projects in which we are investing in this area.

With many options in the service mesh realm, platform operators can choose the mesh that meets their needs. Open Service Mesh is our recently announced open source offering, layering a simple open source service mesh over the industry-standard Envoy data plane. We implement the Service Mesh Interface specification and welcome collaboration on this new project.

Packaging, deploying, and sharing applications with Helm allows for scalable operations on Kubernetes. We continue to invest in the Helm project, adding features and emphasizing security in the completely reimagined version 3. To learn more about Helm 3 and how to migrate from Helm 2, check out the Helm migration guide.

When packaging moves past singular applications to include infrastructure, Cloud Native Application Bundles (CNAB) let end users define and provision the exact stacks their applications need. Using the popular Visual Studio Code development environment with the Kubernetes extension, infrastructure can be developed and tested before a container moves past the developer's environment. To find out how you can take advantage of CNAB, check out the Visual Studio Code tools for Kubernetes tutorial.

Windows Moving Towards Parity with Linux in Kubernetes

Many enterprises run applications on Windows and are looking to standardize on Kubernetes to modernize those applications. Over the past couple of years, we have reached across operating systems to bring enterprise Kubernetes to the Windows community.

First, we have expanded the scope of Kubernetes dramatically to bring modern container orchestration to Windows. Recent work has brought containerd to Windows, opening up improved resource management via the Windows Host Compute Service (HCS) v2. Second, storage-heavy applications can now benefit from the newly robust support for the Container Storage Interface (CSI) in Windows through the CSI Proxy. Finally, developers can now use EndpointSlices to track network endpoints within a Kubernetes cluster. Learn more about running Windows containers on Kubernetes on the Kubernetes blog and on this Azure podcast.
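In a mixed-OS cluster, workloads are steered to the right nodes with the standard `kubernetes.io/os` node label. A minimal sketch, using an example IIS image:

```yaml
# Sketch: pin a Windows workload to Windows nodes in a mixed-OS cluster.
apiVersion: v1
kind: Pod
metadata:
  name: iis-example            # example Windows workload
spec:
  nodeSelector:
    kubernetes.io/os: windows  # schedule only on Windows nodes
  containers:
    - name: iis
      image: mcr.microsoft.com/windows/servercore/iis   # example image
```

Without the selector (or an equivalent taint/toleration scheme), the scheduler may place the pod on a Linux node, where the Windows image cannot run.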

Join Us

Much of this technology lives in the CNCF and other foundations under the Linux Foundation, and we at Microsoft are delighted to be members of this vibrant community. Our team's focus spans operators, developers, admins, automated tooling, business owners, and everyone else building a better experience.

Join us in celebrating and innovating in the open source community!

To learn more about Kubernetes and other cloud native technologies, consider coming to KubeCon + CloudNativeCon EU, Aug. 17-20, virtually.

Amazon Web Services and the Cloud Native Computing Foundation are sponsors of The New Stack.

Feature image via Pixabay.

