Top Considerations When Selecting CNI Plugins for Kubernetes

Diamanti sponsored this post.

The era of cloud native applications has ushered in new ways of thinking about networking architecture. Kubernetes networking was designed as a clean, backward-compatible model that eliminates the need to map container ports to host ports. To support this, Kubernetes introduced a number of basic networking constructs, such as pod networks, services, cluster IPs, container ports, host ports, and node ports, which abstract users from the underlying infrastructure.
Even though these constructs provide the basics of networking, Kubernetes intentionally leaves gaps in order to remain infrastructure-agnostic. Most of these gaps are filled by networking plugins, which interact with Kubernetes via the Container Network Interface (CNI).
Common Limitations of CNI Plugins
Kubernetes uses a plugin model for networking, relying on the CNI to manage network resources in a cluster. Most common CNI plugins use overlay networking, which creates a private layer 3 (L3) network internal to the cluster on top of the existing layer 2 (L2) network. With these plugins, the private network is only accessible to pods within the cluster. Moving packets between nodes, or outside the cluster, relies heavily on iptables rules and Network Address Translation (NAT) between private and public IP addresses. Examples of these CNI plugins include Open vSwitch (OVS), Calico, Flannel, Canal, and Weave.
Every CNI plugin available for Kubernetes has its own pros and cons. Let’s explore some of the common limitations of CNI plugins:
Reliance on Software Defined Networking
SDN networking functions are delivered as software appliances, which add layers of complexity, including additional iptables rules and NAT. SDN software consumes 15% to 20% of available host resources (CPU and memory), reducing efficiency and leaving fewer resources for the applications themselves.
Exposing the Application Outside the Cluster
Because most networking solutions use L3 overlay networking, pod IPs are only routable within the cluster itself. Exposing these pods to the outside world remains a challenge. Kubernetes uses ServiceType “NodePort” and “LoadBalancer” to expose applications.
ServiceType “NodePort” exposes an application on every node’s host network interface, on a unique port allocated from a fixed range (30000-32767 by default).
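As a minimal sketch, a NodePort Service might look like the following (the names, labels, and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport
spec:
  type: NodePort
  selector:
    app: web            # pods carrying this label back the Service
  ports:
    - port: 80          # cluster-internal Service port
      targetPort: 8080  # container port on the pod
      nodePort: 30080   # optional; if omitted, Kubernetes picks one from the NodePort range
```

Traffic arriving on port 30080 of any node is then forwarded to one of the matching pods, wherever it happens to run.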
In a public cloud, the availability of cloud load balancers makes life a little easier as it automatically assigns a public IP to Kubernetes ServiceType “LoadBalancer”. However, this functionality is not readily available for on-premises clouds. Solutions like MetalLB can be used to solve this issue, but they come with their own limitations and challenges.
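The manifest itself is almost identical to the NodePort example; what differs is who assigns the external address. The sketch below assumes a cloud controller or an add-on such as MetalLB is present; otherwise the external IP simply stays pending:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-lb
spec:
  type: LoadBalancer    # a cloud provider or MetalLB must allocate the external IP
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```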
Routing All Traffic Through the Host Network
When using ServiceType “NodePort” or “LoadBalancer”, Kubernetes routes all of the traffic through the host network interface. This is not ideal in an enterprise environment, due to both security and performance concerns.
Traffic Isolation
Most Kubernetes networking solutions use the same physical (host) network interface for all kinds of traffic, meaning control, pod, and storage traffic share the same network plane. This is a security risk, and it can also impact the performance of the Kubernetes control plane, as pod and storage traffic can easily consume the available bandwidth and starve control traffic (or vice versa).
Load Imbalance
Most networking solutions rely on external load balancers. That works well when pods are distributed across multiple nodes in a cluster, but multiple pods of the same backend service can also land on the same node. This causes load imbalance, because external load balancers can only balance between nodes, not between pods.
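One common, if partial, mitigation is to keep replicas on separate nodes so that node-level balancing approximates pod-level balancing. Here is a hedged sketch using pod anti-affinity (the deployment name, labels, and image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: backend
              topologyKey: kubernetes.io/hostname   # at most one replica per node
      containers:
        - name: backend
          image: nginx:1.25   # placeholder image
```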
Extra Hops
In L3 networking, external access always happens by exposing an interface or port on a node. In that scenario, external-to-pod communication can incur extra hops: if a request lands on a node that is not running the target pod, it is forwarded on to another node, adding latency and hurting performance.
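Kubernetes offers a partial workaround through the Service’s external traffic policy: setting it to “Local” routes external traffic only to pods on the node that received it, avoiding the second hop (at the cost of dropping traffic on nodes with no local pod). A minimal sketch:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-local
spec:
  type: LoadBalancer              # also applies to NodePort Services
  externalTrafficPolicy: Local    # no forwarding to pods on other nodes; source IP is preserved
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```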
Multihomed Networks
In many cases, an application might need multiple network interfaces per pod so that it can connect to different isolated networks or subnets. Most CNI plugins currently lack support for multiple interfaces.
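Meta-plugins such as Multus can add secondary interfaces alongside the default CNI. As a hedged sketch (the network name, parent interface, and subnet are assumptions), a NetworkAttachmentDefinition describes the extra network and a pod requests it via an annotation:

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: secondary-net
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth1",
      "ipam": { "type": "host-local", "subnet": "192.168.50.0/24" }
    }
---
apiVersion: v1
kind: Pod
metadata:
  name: multihomed-pod
  annotations:
    k8s.cni.cncf.io/networks: secondary-net   # adds a second interface (net1) next to the default one
spec:
  containers:
    - name: app
      image: nginx:1.25   # placeholder image
```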
Static Endpoint Provisioning
The pod IP in Kubernetes is dynamic and changes whenever a pod is recreated. Most CNI plugins do not support assigning a static endpoint or IP to a pod. This means pods can only be exposed via services, which may not be ideal for certain types of deployments.
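A few plugins do offer this through annotations. For example, with Calico IPAM a pod can request specific addresses, as in this hedged sketch (the address is illustrative and must fall inside a configured IP pool):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: static-ip-pod
  annotations:
    cni.projectcalico.org/ipAddrs: '["10.244.7.20"]'   # requires Calico IPAM
spec:
  containers:
    - name: app
      image: nginx:1.25   # placeholder image
```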
Noisy Neighbors and Performance SLAs
With virtualized environments running multiple applications on the same node, the traffic from every application flows through the same network pipe. If one application misbehaves, it can degrade the performance of the others. Most CNI plugins offer no way to provide networking performance guarantees at the application level.
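The closest built-in answer is coarse, per-pod traffic shaping. If the cluster’s CNI chains the “bandwidth” meta-plugin, it honours rate-limit annotations like those below, a blunt instrument compared with real per-application SLAs (the limits shown are arbitrary):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: rate-limited-pod
  annotations:
    kubernetes.io/ingress-bandwidth: 10M   # cap inbound traffic at roughly 10 megabits per second
    kubernetes.io/egress-bandwidth: 10M    # cap outbound traffic at roughly 10 megabits per second
spec:
  containers:
    - name: app
      image: nginx:1.25   # placeholder image
```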
Multizone Support
High availability is critical to any organization and is becoming a requirement in production Kubernetes deployments. It’s important to have networking support for multizone clusters, which distributes an environment across different fault domains.
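Zone awareness is partly a scheduling concern. Assuming nodes carry the standard topology.kubernetes.io/zone label, replicas can be spread across zones with topology spread constraints, as in this illustrative sketch:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-multizone
spec:
  replicas: 6
  selector:
    matchLabels:
      app: web-multizone
  template:
    metadata:
      labels:
        app: web-multizone
    spec:
      topologySpreadConstraints:
        - maxSkew: 1                                 # zones may differ by at most one replica
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: web-multizone
      containers:
        - name: web
          image: nginx:1.25   # placeholder image
```

The network still has to follow: pod IPs and service endpoints must remain reachable across zones for this spread to deliver real fault tolerance.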
No Separation for Storage Traffic
Most CNI plugins cannot differentiate between storage traffic and regular traffic. They use the same shared plane and interface even for storage data movement, which forces networking and storage traffic, and in some cases even control traffic, to compete with each other. This hurts both performance and security.
A Different Approach to Networking
Diamanti addresses most of the shortcomings found in common CNI plugins with its unique network architecture. Diamanti’s data plane solution for Kubernetes comes with built-in support for L2 networking, offloaded to smart NICs in hardware. This allows real L2 MAC addresses to be assigned to each pod on externally routable networks, making networking much easier. It also supports L3 overlay networking using OVS, traffic isolation, VLAN/VXLAN segmentation, multihomed networking, static endpoint provisioning, network-aware scheduling, guaranteed SLAs, and many more unique features. You can find more details about the Diamanti networking architecture on the Diamanti website.
The networking stack is one of the most important architecture decisions in an enterprise production Kubernetes deployment. When selecting a CNI plugin for your infrastructure, be aware of its limitations and decide what works best for you.