Make the Most of Kubernetes’ New Network Policy API

While the American election season has directed much of the world’s attention to anything but a substantial discussion of policy, the Kubernetes networking community had other ideas. Back in July, Kubernetes 1.3 came out with support for network policies, making it the first of the container orchestrators to have such a capability built-in.
Currently at beta level, the Kubernetes network policy API gives developers the ability to define rules that determine which pods (the Kubernetes term for a workload made up of one or more containers) can connect to which other pods. Think of it as a dynamic firewall around each microservice (but we’re not going to make Mexico pay for it).
You might think that, with several competing vendors involved in the definition of the API, the work would have been bogged down by politics. Unlike in Washington, DC, however, the networking special interest group was a model of cooperation, focused on technical merits and clear end-user use cases. In fact, some of the vendors who worked on the specification are coming together for a panel at the upcoming CloudNativeCon — so think of this as a sneak peek into what might be discussed there.
Why Policy?
The first use case that many people come across is segmentation of the network into “tiers” (usually three) — for example, being able to specify that a back-end database tier can only be accessed from application tier pods, not directly from the front-end tier.
This, however, only scratches the surface of the network policy API. Its full power comes into play as developers embrace cloud-native application architectures, where dozens or hundreds of microservices communicate with one another, in a way that doesn’t neatly map to the traditional “three-tier” model.
In this case, the connectivity matrix is much more complex, and also dynamically changing as pods are created and destroyed. Network policy provides a way for developers to describe these more intricate relationships, in a language that is natural to them — but maps to powerful infrastructure-layer enforcement.
Network Policy Deconstructed
The Network Policy API is deceptively simple because it leverages existing core Kubernetes concepts such as labels and selectors. The result, however, is a powerful, declarative API: the developer declares her intent, and it is the underlying system’s job to work out how to translate that into networking primitives that achieve the desired result.
Labels in Kubernetes are arbitrary key/value pairs assigned to pods. For example, a developer might assign “role: db” to indicate the pod’s functionality, or “location: us-west” to indicate it is in a particular geographic location. In most cases (as, for example, with pods identified by a Replication Controller) such labels will already exist and so can be easily used by network policy.
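For illustration, a minimal pod manifest carrying both of those labels might look like the sketch below (the pod name and container image are hypothetical; the labels are the only part network policy cares about):

apiVersion: v1
kind: Pod
metadata:
  name: db-0
  labels:
    role: db            # the pod's functional role
    location: us-west   # the pod's geographic location
spec:
  containers:
    - name: redis
      image: redis:3.2
      ports:
        - containerPort: 6379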
Selectors are expressions that combine labels to define a subset of all the pods in a cluster. They are the fundamental mechanism for describing groups of pods in Kubernetes. For example, matching both “role: db” and “location: us-west” identifies all the database pods in the us-west location.
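As a sketch, such a selector, written as it would appear under a network policy’s podSelector field, matches only pods that carry both labels (matchLabels entries are ANDed together):

podSelector:
  matchLabels:
    role: db             # must be a database pod...
    location: us-west    # ...and be located in us-west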
The network policy definition includes a pod selector and the rules that apply to all the pods that meet the selector criteria. These rules (currently limited to ingress, i.e., which sources are allowed to establish inbound connections to the selected pods) can refer to labels or specific IP address ranges, and can also restrict communication to specific ports.
For example, the following policy says that all database pods can accept inbound TCP connections on port 6379 from any pod with the role “frontend”:
apiVersion: extensions/v1beta1
kind: NetworkPolicy
metadata:
  name: test-network-policy
spec:
  podSelector:
    matchLabels:
      role: db
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend
      ports:
        - protocol: TCP
          port: 6379
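Assuming the manifest above is saved as test-network-policy.yaml (a hypothetical file name), it can be submitted to the API server like any other Kubernetes object. Note that with the current beta API, a namespace also has to opt in to ingress isolation via an annotation before policies are enforced; a minimal sketch:

kubectl create -f test-network-policy.yaml

# Beta-era prerequisite: switch on default-deny ingress isolation for the namespace
kubectl annotate namespace default \
  "net.beta.kubernetes.io/network-policy={\"ingress\": {\"isolation\": \"DefaultDeny\"}}"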
To appreciate the power of this API, consider a large cluster with many database pods and many front-end pods. If this policy has been applied, then whenever a developer creates a new pod with the label “role: frontend,” all the database pods immediately allow access from it. The developer deploying the front-end pod didn’t have to think about firewall rules — the policy automatically applied them. And, of course, this also applies to pods that are automatically created as a result of autoscaling.
Further, if the “role: frontend” label is removed from a pod, its access to the database pods is “automagically” revoked, immediately. And if the policy is updated, the firewall rules applied to all the database pods in the cluster are correspondingly updated.
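A minimal sketch of that lifecycle with kubectl, using a hypothetical pod named web-1:

# Grant access: the pod now matches the "role: frontend" selector in the policy
kubectl label pod web-1 role=frontend

# Revoke access: the trailing dash removes the label, and the firewall rules follow suit
kubectl label pod web-1 role-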
Network Policy vs. CNI Plugin: What’s the Difference?
If you’re following along with the Kubernetes documentation, you’ll see this slightly worrying disclaimer:
“POSTing this to the API server will have no effect unless your chosen networking solution supports network policy.”
What’s that all about? Well, as an open system, Kubernetes embraces pluggability for many of its key components. In particular, there are several different options for networking, thanks to the Container Network Interface (CNI).
Somewhat confusingly, network policy is not part of the CNI standard. Therefore, you need to make sure that whatever implementation you are using for network policy will also work with your chosen networking solution.
In some cases, the same controller must provide both networking and network policy. In others, components can be mixed and matched — for example, you can deploy just the network policy component of Project Calico alongside a number of different networking plugins, including flannel. There is even a project, Canal, dedicated to making that particular combination easier to deploy.
Network Policy: Part of the Cloud Native Security Picture
There has been a lot of talk about the security implications of the evolution to cloud-native application architectures. Kubernetes’ Network Policy API, and its implementations such as Project Calico, will be a critical element in addressing these concerns and enabling enterprises to meet strict compliance requirements.
Tigera is a platinum sponsor at KubeCon (Nov 8-9, 2016). We look forward to seeing you there to chat more about your networking experiences and concerns.
Feature image via Pixabay.