Virtual networking software Project Calico brings network policies to Kubernetes, the open source container orchestration software. While Kubernetes has extensive support for Role-Based Access Control (RBAC), the default networking stack available in the upstream Kubernetes distribution doesn’t support fine-grained network policies. Project Calico fills that gap by providing fine-grained control over which traffic is allowed to and from Kubernetes workloads.
With Calico configured on Kubernetes, we can define network policies that allow or restrict traffic to Pods. Much like firewall rules, these policies can cover both ingress and egress traffic for a Pod.
In this tutorial, we will explore the basics of Project Calico by deploying an application on Google Kubernetes Engine (GKE). Unlike other managed Kubernetes services, GKE comes with an integrated Calico stack that can be enabled during cluster creation. It is also possible to configure Calico on an existing, running GKE cluster.
Start by launching a standard GKE cluster with network policies enabled. In the console, this is done by selecting the Enable network policy checkbox under the Availability, networking, security, and additional features section.
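The same cluster can also be created from the command line with the gcloud CLI; the `--enable-network-policy` flag turns on Calico-based policy enforcement. The cluster name and zone below are placeholders:

```shell
# Create a GKE cluster with Calico network policy enforcement enabled.
# Cluster name and zone are placeholders -- substitute your own.
gcloud container clusters create calico-demo \
  --zone us-central1-a \
  --enable-network-policy

# Fetch credentials so kubectl talks to the new cluster.
gcloud container clusters get-credentials calico-demo --zone us-central1-a
```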
After the cluster is up and running, we can check for the Calico Pods deployed as part of a DaemonSet in the kube-system namespace. Let’s download calicoctl, Calico’s CLI, to explore the environment further. We need to point calicoctl at the etcd endpoints of the GKE cluster, which can be done with the settings below:
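As a sketch, the checks might look like the following. The DaemonSet name and label follow Calico's defaults, and the etcd endpoint is a placeholder; calicoctl can alternatively be pointed at the Kubernetes API datastore if your cluster's etcd is not directly reachable:

```shell
# Confirm the Calico DaemonSet and its Pods are running in kube-system.
kubectl get daemonset calico-node -n kube-system
kubectl get pods -n kube-system -l k8s-app=calico-node

# Point calicoctl at the cluster's datastore. The etcd endpoint below is a
# placeholder -- substitute the endpoint exposed by your cluster.
export DATASTORE_TYPE=etcdv3
export ETCD_ENDPOINTS=https://<your-etcd-endpoint>:2379
calicoctl get nodes
```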
Now, let’s go ahead and deploy one of the sample applications provided by Project Calico. Run the commands below to deploy it; the YAML files can be downloaded from Project Calico’s documentation site. These YAML files configure and deploy multiple Kubernetes resources. Let’s explore each of them.
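The deployment steps can be sketched as follows. The file names mirror the manifests in Calico's stars policy demo; adjust the paths to wherever you downloaded them:

```shell
# Create the Namespaces and workloads for the stars demo.
# File names follow Calico's stars policy demo manifests.
kubectl create -f 00-namespace.yaml     # stars, client, management-ui Namespaces
kubectl create -f 01-management-ui.yaml # management UI Pod and Service
kubectl create -f 02-backend.yaml       # backend Pod and Service
kubectl create -f 03-frontend.yaml      # frontend Pod and Service
kubectl create -f 04-client.yaml        # client Pod

# Watch until all Pods report Running.
kubectl get pods --all-namespaces --watch
```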
First, there are three Namespaces: stars, client, and management-ui. The stars Namespace runs two Pods and two Services associated with the frontend and backend of the application. The client Namespace has the client Pod, which talks to the frontend and backend Pods running in the stars Namespace. The management-ui Namespace has one Pod and a Service that run the user interface, giving a visual representation of the deployment.
We can access the UI through the NodePort or through the Kubernetes API server proxy.
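For example, with kubectl the UI can be reached either via the NodePort exposed by the management-ui Service or by port-forwarding to it. The Service name and port here follow the stars demo conventions; adjust them if yours differ:

```shell
# Find the NodePort assigned to the management UI Service.
kubectl get service -n management-ui

# Alternatively, forward a local port to the Service and browse
# to http://localhost:9001.
kubectl port-forward -n management-ui service/management-ui 9001:9001
```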
Each dot shown in the UI represents a microservice associated with frontend (F), backend (B), and client (C). We can see that the traffic is flowing across all the microservices.
We will now apply a deny-all policy that blocks access to all the Pods. Running the commands below enforces the policy in both the stars and client Namespaces. When you refresh the management UI, the browser window is blank. This is because we haven’t explicitly allowed the UI to access the Pods running in the stars and client Namespaces.
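A deny-all policy is simply an empty podSelector with no ingress rules. A minimal sketch, assuming the Namespace names from the demo:

```yaml
# default-deny.yaml: selects every Pod in the Namespace and,
# by defining no ingress rules, blocks all incoming traffic.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-deny
  namespace: stars
spec:
  podSelector:
    matchLabels: {}
```

Apply it with `kubectl create -f default-deny.yaml`, then repeat with `namespace: client` to cover the client Pod as well.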
Next, we will apply a policy that allows the UI to access the Pods. Refresh the browser to access the UI.
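The allow-ui policy admits ingress traffic from the management-ui Namespace to every Pod. A sketch, assuming the `role: management-ui` label that the demo attaches to that Namespace:

```yaml
# allow-ui.yaml: permit ingress from the management-ui Namespace to all
# Pods in the stars Namespace (repeat for the client Namespace).
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  namespace: stars
  name: allow-ui
spec:
  podSelector:
    matchLabels: {}
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              role: management-ui
```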
Though the UI can now access the Pods, they are still not communicating with each other. Let’s change this by allowing traffic to flow among the frontend, backend, and client. Refresh the browser to check that the traffic is flowing.
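The two policies sketch out as follows; the labels and ports follow the stars demo conventions (the frontend listens on port 80, the backend on 6379):

```yaml
# backend-policy.yaml: the backend accepts traffic only from frontend Pods.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  namespace: stars
  name: backend-policy
spec:
  podSelector:
    matchLabels:
      role: backend
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend
      ports:
        - protocol: TCP
          port: 6379
---
# frontend-policy.yaml: the frontend accepts traffic only from the
# client Namespace.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  namespace: stars
  name: frontend-policy
spec:
  podSelector:
    matchLabels:
      role: frontend
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              role: client
      ports:
        - protocol: TCP
          port: 80
```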
Notice that the most recent policies allow traffic to flow from the client to the backend only via the frontend; the client cannot talk to the backend directly.
Janakiram MSV’s Webinar series, “Machine Intelligence and Modern Infrastructure (MI2)” offers informative and insightful sessions covering cutting-edge technologies. Sign up for the upcoming MI2 webinar at http://mi2.live.