
Kubernetes and CNI: What’s Next — Making It Easier to Write Networking Plugins

May 8th, 2018 10:32am by Casey Callendrello
Feature image via Pixabay.

Casey Callendrello, Senior Software Engineer, Red Hat
Casey Callendrello is a developer for Red Hat working on the Container Network Interface (CNI), the widely adopted container networking interface for Kubernetes. He previously worked at Weebly, where he built scalable infrastructure, and at Akamai, where he helped the company grow to hundreds of thousands of servers. In his spare time, he studies German and builds embedded devices to automate his everyday life. He thinks the world would be a better place if we all switched to IPv6. He lives in Berlin, Germany. He was grudgingly elected president of the Noisebridge hackerspace.

Linux containers have changed the way we think about application architecture and the speed at which we can deliver on business requirements. They provide consistency and portability across environments and allow developers to focus on application innovation rather than underlying execution details. One container by itself, however, is just a useful way to package an application; many containers, working together and running at scale, can transform an enterprise. That’s where Kubernetes comes in, providing the capabilities to deploy and orchestrate Linux containers at the volume needed to drive real business results and power innovation.

While containers provide the application packaging and Kubernetes delivers the ability to weave large, complex applications from simpler containerized components, these two technologies by themselves lack a common way to communicate outside of their specific stack. But there is an answer to this challenge: the Container Network Interface (CNI), which provides a standard way for networking vendors and projects to integrate with Kubernetes.

First proposed by CoreOS (now part of Red Hat) to define a common interface between network plug-ins and container execution, CNI focuses on the network connectivity of containers and on removing allocated resources when a container is deleted. CNI was released in 2016, and the Cloud Native Computing Foundation (CNCF) Technical Oversight Committee voted last May to accept CNI as a hosted project.

One of the most important things that CNI provides is choice: CNI has a thriving community of third-party networking solutions that plug into the Kubernetes container infrastructure, including Project Calico, a Layer 3 virtual network; Contiv Networking, which provides policy networking for various use cases; and many others. It’s important for administrators to know that they have these choices so they can pick the right plug-in for their workloads.

Indeed, the world of networking is extremely diverse and just as complex. The value of Kubernetes is that it abstracts this complexity away from the average developer and instead presents a clean plug-in interface. CNI, in turn, defines how plug-ins add container network interfaces to networks and remove them.

The Here and Now

CNI, which is defined by a JSON schema, originally came out of the rkt container runtime engine, which was designed to have very specific and accessible points of integration. CNI was not originally developed for Kubernetes; in fact, it is a multivendor system used by other container runtimes as well, but the Kubernetes community adopted it because it provides a simple but powerful form of network standardization.

Case in point: as the project’s documentation on GitHub puts it, each CNI plugin is a simple executable that is invoked by the container management system. The plugin is responsible for inserting a network interface into the container network namespace and making any necessary changes on the host. As part of this, plugins are expected to assign their own IP addresses or delegate assignment to a separate IPAM plugin.
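
To make that concrete, below is a minimal network configuration of the kind a runtime delivers to a plugin on stdin. It names the reference bridge plugin and delegates address assignment to the separate host-local IPAM plugin; the network name, bridge name and subnet are illustrative values, not defaults:

```json
{
  "cniVersion": "0.3.1",
  "name": "examplenet",
  "type": "bridge",
  "bridge": "cni0",
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16"
  }
}
```

The top-level "type" field is the name of the plugin executable the runtime will invoke; the nested "ipam" block is handed off untouched to the IPAM plugin named there.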

What CNI plugins are asked to do is really quite simple on its face (a minimal skeleton in Go follows the list):

  • Add container to network
  • Delete container from network
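
As a rough sketch of that two-command contract, here is the shape of a do-nothing plugin written against the CNI project’s skel helper package. This assumes the two-handler API from around the libcni v0.6 era; the exact PluginMain signature has changed between releases, so treat it as illustrative rather than definitive:

```go
package main

import (
	"encoding/json"
	"fmt"

	"github.com/containernetworking/cni/pkg/skel"
	"github.com/containernetworking/cni/pkg/version"
)

// netConf mirrors the JSON network configuration delivered on stdin.
type netConf struct {
	CNIVersion string `json:"cniVersion"`
	Name       string `json:"name"`
	Type       string `json:"type"`
}

// cmdAdd handles "add container to network".
func cmdAdd(args *skel.CmdArgs) error {
	conf := netConf{}
	if err := json.Unmarshal(args.StdinData, &conf); err != nil {
		return fmt.Errorf("failed to parse network config: %v", err)
	}
	// A real plugin would create and configure an interface inside
	// args.Netns here, then print a full result (IPs, routes, DNS)
	// as JSON on stdout. This stub prints an empty result.
	fmt.Printf("{\"cniVersion\": %q}\n", conf.CNIVersion)
	return nil
}

// cmdDel handles "delete container from network": tear down whatever
// cmdAdd created for args.ContainerID.
func cmdDel(args *skel.CmdArgs) error {
	return nil
}

func main() {
	// skel reads CNI_COMMAND, CNI_CONTAINERID, CNI_NETNS, CNI_IFNAME
	// and friends from the environment and dispatches to our handlers.
	skel.PluginMain(cmdAdd, cmdDel, version.All)
}
```

The runtime sets the CNI_* environment variables and passes the network configuration on stdin; the skel package decodes both and calls the matching handler.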

This simplicity — along with being a vendor-neutral specification — has led to the wide adoption of CNI, even by container orchestrators outside of the Kubernetes ecosystem like Mesos and Cloud Foundry. Plugins that seek deeper integration with Kubernetes, however, such as those implementing NetworkPolicy, will need to do so outside the scope of CNI.

But there are some gotchas. For example, some CNI plugins send packets directly to the network, skipping the host’s routing tables and firewall rules on the way out. This improves performance but comes at a cost: key Kubernetes features such as NetworkPolicy and Service IPs are usually implemented with the expectation that all container traffic passes through the host. Users deciding which CNI plugin to use need to be aware of these tradeoffs.

Administrators often use Kubernetes features like DaemonSets to install and manage their plugins. This is particularly useful when the plugin includes a component that needs authenticated access and RBAC permissions to the Kubernetes API. When doing so, however, both plugin authors and administrators must consider disaster recovery. Before taking this approach, administrators need to ask themselves: if the control plane is lost, will the installation have circular dependencies that prevent bringing it back up?

The Future Path

When it comes to talking to the Kubernetes API, writing a CNI plugin is about to become a lot easier. CNI spec v0.4.0 will include more dynamic information from the runtime, including bandwidth restrictions and IP ranges. This means that plugin authors should be able to bring up a container without depending on API accessibility. A best practice, as always, is to cache information locally where possible, reducing load on the API server.
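
To sketch what that runtime-supplied data can look like: under the capability convention already used by the reference plugins, the stored configuration declares capabilities (such as bandwidth or IP ranges), and the runtime injects concrete values under "runtimeConfig" before invoking the plugin. The plugin name and values below are hypothetical, the keys follow the existing bandwidth and ipRanges conventions, and the final v0.4.0 format may differ:

```json
{
  "cniVersion": "0.4.0",
  "name": "examplenet",
  "type": "myplugin",
  "runtimeConfig": {
    "bandwidth": {
      "ingressRate": 1000000,
      "ingressBurst": 1000000
    },
    "ipRanges": [
      [{ "subnet": "10.1.2.0/24" }]
    ]
  }
}
```

Because the runtime hands this to the plugin directly on stdin, the plugin never has to query the Kubernetes API to learn these values.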

But what else? In CNI v0.4.0, we’re planning to add the GET command, which has taken a lot of work to get right. The challenge is to define it in a way that doesn’t make any assumptions about how plugins are implemented. The spec shouldn’t hamper existing users or be overly complicated to implement. It will be worth it, though; this will allow for seamless restarts of the container runtime, making administrators’ jobs much easier. Having a large and vibrant ecosystem is a good thing, but it can make it challenging to gauge whether a change in one place will negatively impact someone (or something) in another place.

And therein lies the rub as we move forward. How do we continue to improve on and expand CNI without compromising its simplicity? That’s one of the things I discussed in my recent session at KubeCon — watch to learn more:
