
Install Calico to Enhance Kubernetes’ Built-in Networking Capability

How to install the Calico networking overlay onto a Kubernetes cluster.
Jul 5th, 2021 9:00am

Calico, from network software provider Tigera, is a third-party plugin for Kubernetes geared to make full network connectivity more flexible and easier. Out of the box, Kubernetes provides the NetworkPolicy API for managing network policies within the cluster. The problem many Kubernetes admins find (especially those new to the technology) is that networking can quickly become a rather complicated mess of YAML configurations: you must configure ingress and egress traffic correctly, or communication between Kubernetes objects (such as pods and containers) can break down.
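
To see why those YAML files add up quickly, here is a minimal sketch of a stock Kubernetes NetworkPolicy (the names, labels, and port are hypothetical) that only allows pods labeled role: frontend to reach pods labeled app: api on TCP port 8080:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: default
spec:
  # The policy applies only to pods in this namespace that carry this label
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Ingress
  ingress:
    # Only pods labeled role: frontend may connect, and only on TCP 8080
    - from:
        - podSelector:
            matchLabels:
              role: frontend
      ports:
        - protocol: TCP
          port: 8080

Every allowed path needs a rule along these lines, and the rules only take effect if the cluster's network plugin actually enforces the NetworkPolicy API.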

That’s where the likes of Calico come into play. Because not every Kubernetes network plugin supports the NetworkPolicy API, it’s important to select a plugin that covers your needs. For example, the most popular Kubernetes network plugin, Flannel, cannot enforce network policies. With Calico, you can significantly enhance the Kubernetes networking configuration.

Take, for instance, the feature limitations found in the default NetworkPolicy, which are:

  • Policies are limited to a single namespace and are applied only to pods selected by labels.
  • You can only apply rules to pods, namespaces, or IP blocks (subnets).
  • Rules can only contain protocols, numerical ports, or named ports.

When you add the Calico plugin, the features are extended as such (an example policy follows this list):

  • Policies can be applied to pods, containers, virtual machines, or interfaces.
  • Rules can contain a specific action (such as allow, deny, or log).
  • Rules can contain ports, port ranges, protocols, HTTP/ICMP attributes, IPs, subnets, or selectors (for pods, namespaces, or service accounts).
  • Traffic flow can be controlled via DNAT settings and policies.
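
For instance, here is a minimal sketch of a Calico NetworkPolicy (the policy name, labels, and ports are hypothetical) that allows a port range from selected pods and logs, then denies, traffic to a metrics port:

apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: allow-web-guard-metrics
  namespace: default
spec:
  # The policy applies to every pod whose app label is web
  selector: app == 'web'
  types:
    - Ingress
  ingress:
    # Allow TCP traffic on ports 80 through 443 from pods labeled role == 'frontend'
    - action: Allow
      protocol: TCP
      source:
        selector: role == 'frontend'
      destination:
        ports:
          - '80:443'
    # Log, then deny, anything aimed at the metrics port
    - action: Log
      protocol: TCP
      destination:
        ports:
          - 9090
    - action: Deny
      protocol: TCP
      destination:
        ports:
          - 9090

Note the action field and the port range, neither of which the default rules described above offer.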

Calico also supports multiple data planes, which allows you to choose the technology that best suits the needs of your project (including the pure Linux eBPF data plane). Other features include:

  • Highly performant.
  • Massive scalability.
  • Interoperability with current non-K8s workloads.
  • Full Kubernetes network policy support.
  • Very active development community.

To make Calico even more appealing, it’s available for use on all popular cloud platforms, such as Amazon Web Services, Microsoft Azure, Google Cloud Platform, IBM, Red Hat OpenShift and SUSE’s Rancher.

Or, if you prefer, you can deploy Calico on bare metal in your data center.

Let’s walk through the process of installing Calico on a small Kubernetes cluster. We’ll demonstrate on a cluster running on instances of Ubuntu Server 20.04, but the process should be the same, regardless of your platform. So long as you have access to kubectl, you should be good to go.

The first method will assume you already have your Kubernetes cluster up and running, while the second method starts on a bare-metal Ubuntu Server instance.

Installing Calico

First, I’ll show you how to install Calico on a cluster with a small number of nodes (50 or fewer). Download the manifest with the command:

curl https://docs.projectcalico.org/manifests/calico-typha.yaml -o calico.yaml

Once the file download completes, apply it with the command:

kubectl apply -f calico.yaml

When the command completes, you should see that a number of new objects/services have been created, such as:

clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created

clusterrole.rbac.authorization.k8s.io/calico-node created

clusterrolebinding.rbac.authorization.k8s.io/calico-node created

daemonset.apps/calico-node created

serviceaccount/calico-node created

deployment.apps/calico-kube-controllers created

serviceaccount/calico-kube-controllers created

poddisruptionbudget.policy/calico-kube-controllers created
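
If you want to watch the rollout, the manifest (assuming its default labels and the kube-system namespace) lets you list the Calico node pods with:

kubectl get pods -n kube-system -l k8s-app=calico-node

Each node in the cluster should eventually show one calico-node pod in the Running state.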

If you need to deploy Calico to a cluster with more than 50 nodes, you need to first edit the YAML file. Open the file with the command:

nano calico.yaml

In that file, look for the replicas setting in the calico-typha Deployment, which defaults to:

replicas: 1
To configure that line, set one replica for every 200 nodes. So if you have 600 nodes, you’d set it to:

replicas: 3
One thing to keep in mind is that you should set no more than 20 replicas and (in production) you should use a minimum of three replicas.
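
After editing, re-apply the manifest with kubectl apply -f calico.yaml; assuming the manifest’s default labels, you can then confirm the expected number of Typha replicas is up with:

kubectl get pods -n kube-system -l k8s-app=calico-typha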

The next step is to install the calicoctl command. Download the executable with the command:

curl -L -o calicoctl "https://github.com/projectcalico/calicoctl/releases/download/v3.19.1/calicoctl"

After the executable downloads to your system, move it into a directory in your path, such as /usr/local/bin/, with the command:

sudo mv calicoctl /usr/local/bin/

Next, give the file executable permissions with the command:

sudo chmod +x /usr/local/bin/calicoctl

Verify the installation by running the command:

calicoctl -h

You should see a listing of how the command is used.
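
With calicoctl installed, a quick sanity check is to point it at the cluster’s Kubernetes datastore (the environment variables below are one way to do that; adjust the kubeconfig path if yours differs) and list the Calico nodes and IP pools:

export DATASTORE_TYPE=kubernetes

export KUBECONFIG=~/.kube/config

calicoctl get nodes

calicoctl get ippools -o wide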

Installing on bare metal

Let’s take a look at how to install Calico on a bare metal instance of Ubuntu Server 20.04. Here are the steps:

Step 1: Update apt and install the necessary dependencies with the commands:

sudo apt-get update

sudo apt-get install -y apt-transport-https ca-certificates curl

Step 2: Download the GPG key for the Kubernetes package repository (hosted by Google Cloud) with the command:

sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg

Step 3: Add the Kubernetes repository with the command:

echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

Step 4: Update apt and install kubeadm/kubectl with the commands:

sudo apt-get update

sudo apt-get install -y kubelet kubeadm kubectl

sudo apt-mark hold kubelet kubeadm kubectl

Step 5: Install Docker with the command:

sudo apt-get install docker.io -y

Step 6: Add your user to the docker group with the command:

sudo usermod -aG docker $USER

Log out and log back in.

Step 7: Disable swap by opening the fstab file with the command:

sudo nano /etc/fstab

Look for the swap line (on a default Ubuntu Server 20.04 install it starts with /swap.img):

/swap.img
Comment that line out so it starts with:

#/swap.img
Save and close the file.

Then, to turn swap off immediately (the fstab change keeps it off across reboots), issue the command:

sudo swapoff -a
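
To double-check, free -h should now report 0B of swap and swapon --show should print nothing:

free -h

swapon --show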

Step 8: Initialize Kubernetes with the command:

sudo kubeadm init --pod-network-cidr=192.168.0.0/16

Where 192.168.0.0/16 is the address range the pods will use (the default pod CIDR expected by Calico’s custom resources). If that range overlaps with your existing network, choose a different private range and note it for later.

Step 9: Create the necessary directory and copy the configuration files with the commands:

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config
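
At this point kubectl can talk to the cluster, although the node will report a NotReady status until a pod network (Calico, in this case) is installed. You can check with:

kubectl get nodes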

Step 10: Install the Tigera Calico operator with the command:

kubectl create -f https://docs.projectcalico.org/manifests/tigera-operator.yaml

Download the necessary custom Calico resources YAML with the command:

wget https://docs.projectcalico.org/manifests/custom-resources.yaml

Open that file and make any customizations you need with the command:

nano custom-resources.yaml
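
The setting most deployments touch is the pod address pool in the Installation resource. A trimmed sketch of that section is below; the values reflect the file’s defaults at the time of writing, so treat them as an assumption and make sure the cidr matches whatever you passed to kubeadm init:

apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
      # This CIDR must match the --pod-network-cidr used with kubeadm init
      - cidr: 192.168.0.0/16
        encapsulation: VXLANCrossSubnet
        natOutgoing: Enabled
        nodeSelector: all()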

After you’ve customized the YAML, save and close the file and apply it with the command:

kubectl create -f custom-resources.yaml

Wait a few minutes and confirm that all of the Calico pods are running with the command:

watch kubectl get pods -n calico-system

When all of the pods have a status of Running, you’ll need to remove the taint on the control plane (master) node, so that workloads can be scheduled on it in a single-node cluster, with the command:

kubectl taint nodes --all node-role.kubernetes.io/master-

If you issue the command:

kubectl get nodes -o wide

You should see that your Kubernetes cluster is now up and running with Calico. At this point, you should check out the official Calico Networking documentation to learn how to make the most out of your new deployment.

TNS owner Insight Partners is an investor in: Docker, Tigera.