How to Navigate Multiple Networks for Kubernetes Workloads

Relying on a single network to manage your Kubernetes pods is usually no big deal. For webscale applications, the process usually involves sending all of the traffic over that one pod network and that is the end of the story. But for network-intensive workloads, you might need more than one road to get to where you're going.
Oftentimes, high-performance networking applications require separating the control plane from the data plane. This type of architecture is often used to address security considerations (keeping data from being exposed on another network) and to account for the different data speeds each plane needs. This is one of the reasons why we need multiple networks for Kubernetes workloads: our workloads are networking applications themselves and need a way to implement this kind of architecture.
Call Multiple CNI Plugins in Kubernetes

In Kubernetes, we assume one network interface per pod. That interface is created by a CNI (Container Network Interface) plugin, the plugin you use for your pod-to-pod connectivity (such as Flannel, Calico or Weave). CNI is a CNCF standard that Kubernetes leverages for a number of reasons, including its simplicity and the way it makes networking setups extensible. It frees Kubernetes from having to be smart about every possible networking scenario, and it makes it easy for you (or your vendors, or your favorite open source communities) to create CNI plugins. CNI opens that door and simplifies how you attach pods to your network, whether in Kubernetes or in other container orchestration systems, since CNI is platform-agnostic. As a developer, it's easy to pick up the CNI spec.
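To give a sense of how approachable the spec is, here is a minimal sketch of a CNI configuration for the reference bridge plugin (the network name, bridge name and subnet are illustrative):

{
  "cniVersion": "0.3.1",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16"
  }
}

A plugin receives a configuration like this (plus a few environment variables describing the container), plumbs an interface into the container, and reports the result back to the runtime.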
But CNI itself isn't limited to a single network interface per pod; the standard has long supported multiple interfaces. The community has been hard at work exposing this functionality in Kubernetes. One approach you can use to enable multiple networks in Kubernetes today is a "CNI meta-plugin," which is a CNI plugin that can also call other CNI plugins. I can just imagine Xzibit saying: "I hear you like CNI plugins, so I put your CNI plugins in a CNI plugin."
By using a CNI meta-plugin, you can call multiple CNI plugins, and each of those CNI plugins creates an additional network interface in your pod. Imagine that you executed the ip command to view the interfaces in a pod, such as:
kubectl exec -it mypod -- ip a
Typically, you’d just see two interfaces, a loopback and “eth0” (which would be attached to your default cluster-wide pod-to-pod network via CNI).
When you issue the same command with a meta-plugin provisioned in your Kubernetes cluster, you may see a number of additional interfaces listed. Usually you'd still see "eth0," which is your default cluster-wide pod-to-pod connectivity, but in addition you may see a "net0" and a "net1" plumbed by other CNI plugins, such as the reference macvlan or ipvlan plugins (among others!). One popular usage is to pair the SR-IOV device plugin with the SR-IOV CNI plugin in order to make use of the capabilities of SR-IOV to accelerate data plane traffic.
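As an illustrative sketch of what that might look like (the interface names, indices and addresses here are made up, and the output is trimmed):

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 state UNKNOWN
    inet 127.0.0.1/8 scope host lo
3: eth0@if24: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 state UP
    inet 10.244.1.12/24 scope global eth0
4: net0@if25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP
    inet 192.168.1.200/24 scope global net0

In this sketch, "eth0" is still the default cluster-wide network, while "net0" was added by a second plugin (say, macvlan) called through the meta-plugin.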
Custom Resource Definitions for Multiple Networks
The Kubernetes Network Plumbing Working Group has been working to formalize a standardized CRD (custom resource definition) that expresses your intent to attach multiple networks to your pods in Kubernetes. CRDs are a method of extending the Kubernetes API, and they provide a lingua franca that Kubernetes applications can use to query the API for this data. The group was formed at KubeCon + CloudNativeCon North America 2017 and it has released a specification for that CRD. Having a specification helps to normalize the user experience for Kubernetes users, who may wish to change the technology under the hood but still have the same knobs exposed.
It’s worth noting that the standard as written by the working group includes CNI meta-plugins, but isn’t strictly limited to meta-plugins — so other approaches can also implement the standard.
The Network Plumbing Working Group is very much a community effort and welcomes everyone who wishes to participate. The group is carefully growing the specification to account for more situations, so that all of the functionality Kubernetes provides becomes useful on your additionally attached networks as well.
Members of the Network Plumbing Working Group have also been working on Multus CNI as a reference implementation of the standardized CRD. The standardized CRD is called a "NetworkAttachmentDefinition," and Multus allows users to specify additional networks using NetworkAttachmentDefinitions. Multus takes action (via CNI) whenever a pod is created or deleted: it queries the Kubernetes API for the NetworkAttachmentDefinitions to figure out which plugins should be called in addition to the cluster-wide default network, and it creates the additional network interfaces in the pod for your workloads to leverage.
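As a rough sketch of what that looks like in practice (the name "macvlan-conf" and the macvlan settings are illustrative, along the lines of the Multus quick-start examples), you define the additional network as a NetworkAttachmentDefinition:

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-conf
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth0",
      "mode": "bridge",
      "ipam": {
        "type": "host-local",
        "subnet": "192.168.1.0/24"
      }
    }'

Then you reference it from a pod with an annotation, which is what tells Multus to plumb the extra interface alongside the default network:

apiVersion: v1
kind: Pod
metadata:
  name: mypod
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-conf
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "infinity"]

Because the NetworkAttachmentDefinition is just another Kubernetes resource, you can list what's available with kubectl get network-attachment-definitions.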
Naturally, as a hardcore networking person you might ask the question, "What about networking that doesn't use traditional kernel-based interfaces like the ones you see when you issue ip a, such as userspace networking?" The answer is that the Network Plumbing Working Group's specification takes those into account as well. And there's even a Userspace CNI plugin available to help you leverage userspace networking technologies.
Other CNI meta-plugins exist, some of which take slightly different approaches and sit at different points on the spectrum of what strictly counts as a CNI meta-plugin, but all of which are worth your time to evaluate. These include DANM, Knitter and CNI Genie; and if you're a virtual machine user working with a Kubernetes platform, make sure to take a look at kuryr-kubernetes.
Want to learn more? Join me at my talk at KubeCon in San Diego and embark on a tour of attaching multiple network interfaces to pods.