Similar to container-native storage, the container-native network abstracts the physical network infrastructure to expose a flat network to containers. It is tightly integrated with Kubernetes to tackle the challenges involved in pod-to-pod, node-to-node, pod-to-service and external communication.
Kubernetes supports a host of plugins based on the Container Network Interface (CNI) specification, which defines how a container is connected to the network and how allocated network resources are released when the container is deleted. The CNI project is one of the incubating projects of the Cloud Native Computing Foundation.
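A CNI plugin is selected and configured through a JSON network configuration file that the container runtime hands to the plugin. Below is a minimal sketch using the reference `bridge` and `host-local` plugins from the CNI plugins project; the network name and subnet are illustrative:

```json
{
  "cniVersion": "0.4.0",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16"
  }
}
```

The `type` field names the plugin binary the runtime invokes with ADD and DEL commands, which is how the specification ties connectivity setup to cleanup on container deletion.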
Container-native networks go beyond basic connectivity. They provide dynamic enforcement of network security rules. Through a predefined policy, it is possible to configure fine-grained control over communications between containers, pods and nodes.
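In Kubernetes, such a predefined policy is typically expressed as a NetworkPolicy object, which the CNI plugin enforces. A minimal sketch, in which the labels (`app: frontend`, `app: backend`) and port are illustrative, allowing only frontend pods to reach backend pods on TCP 8080:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
spec:
  # Pods this policy applies to
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    # Only traffic from pods labeled app=frontend is allowed
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080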
Choosing the right networking stack is critical to maintaining and securing the CaaS platform. Customers can select the stack from open source projects including Cilium, Contiv, Flannel, Project Calico, Tungsten Fabric and Weave Net. On the commercial side, Tigera offers Calico Enterprise, and an enterprise subscription to Weave Net can be purchased by contacting Weaveworks.
Managed CaaS offerings from public cloud vendors come with tight integration into the vendor's existing virtual networking stack. For example, Amazon Web Services has a CNI plugin for Amazon Elastic Kubernetes Service (EKS) based on Amazon Virtual Private Cloud (VPC), while Microsoft has built the Azure Virtual Network CNI plugin for Azure Kubernetes Service (AKS).
Table: Container-Native Networks

| Open Source Project | CNCF Status |
|---|---|
| Project Calico | Not Submitted |
| Tungsten Fabric | Not Submitted |
| Weave Net | Not Submitted |

| Commercial Offering | Vendor |
|---|---|
| Weave Net (contact Weaveworks for Enterprise subscription) | Weaveworks |
Data from the 2019 CNCF survey provides further insight into cloud native networking.
Networking is a challenge that has declined over the years, although Kubernetes users continued to assess ingress providers. In fact, while the average Kubernetes user had 1.5 ingress providers, the 28% of respondents who cited networking as a challenge had 3 ingress providers on average. NGINX was in 66% of Kubernetes stacks — but by itself, it wasn't able to address the needs of all users. Adoption of HAProxy and Envoy was lower, but rose among those with networking challenges, which suggests that newer market entrants were adopted to address these problems. Looking forward, expect offerings to differentiate themselves based on the protocols they support and whether or not they include an API gateway.
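The way a cluster selects among multiple ingress providers is the Ingress resource's class field. A minimal sketch using the `networking.k8s.io/v1` API, in which the hostname, service name and class name are illustrative, routing a host to a backend service through an NGINX-class controller:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  # Selects which installed ingress controller handles this resource
  ingressClassName: nginx
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
```

Because each Ingress names its class, several providers can coexist in one cluster, which is consistent with survey respondents reporting more than one ingress provider.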
Amazon Web Services and the Cloud Native Computing Foundation are sponsors of The New Stack.