Tutorial: Deploy the Nvidia GPU Operator on Kubernetes Based on containerd Runtime

This tutorial will explore the steps to install Nvidia GPU Operator on a Kubernetes cluster with GPU hosts based on the containerd runtime instead of Docker Engine.
In a typical GPU-based Kubernetes installation, each node needs to be configured with the correct version of Nvidia graphics driver, CUDA runtime, and cuDNN libraries followed by a container runtime such as Docker Engine, containerd, podman, or CRI-O. Then, the Nvidia Container Toolkit is deployed to provide GPU access to the containerized applications. Finally, Kubernetes is installed, which will interact with the chosen container runtime to manage the lifecycle of workloads.
The Nvidia GPU Operator dramatically simplifies this process: there is no need to manually install the drivers, CUDA runtime, cuDNN libraries, or the Container Toolkit, because the operator deploys and manages them as containerized components. It can be installed on any Kubernetes cluster that meets specific hardware and software requirements.
Below are the steps to install containerd, Kubernetes, and the Nvidia GPU Operator. Towards the end of the installation, we will test GPU access by running the popular nvidia-smi command within a pod.
Environment
Operating system: Ubuntu 18.04 LTS Server
GPU: Nvidia GeForce RTX 3090
CPU: AMD Ryzen ThreadRipper 3990X
RAM: 128GB
HDD: 4TB NVMe SSD
Step 1: Install Containerd Runtime
Load the required modules and ensure they are persisted during reboots.
```shell
sudo modprobe overlay
sudo modprobe br_netfilter

cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
```
Next, configure the sysctl parameters required for Kubernetes networking.
```shell
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
```
Load the sysctl parameters without rebooting the system.
```shell
sudo sysctl --system
```
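To confirm the parameters took effect, you can read them back directly from /proc. This is a quick sanity check, not part of the official procedure; note that the two net/bridge entries only exist once the br_netfilter module is loaded.

```shell
# Read back the three sysctl parameters directly from /proc.
# The net/bridge entries appear only after the br_netfilter module is loaded.
for p in net/ipv4/ip_forward net/bridge/bridge-nf-call-iptables net/bridge/bridge-nf-call-ip6tables; do
  echo "$p = $(cat /proc/sys/$p 2>/dev/null || echo 'unavailable (module not loaded?)')"
done
```

Each value should read 1; if a bridge entry is unavailable, re-run sudo modprobe br_netfilter.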
Finally, install the containerd runtime.
```shell
sudo apt-get update
sudo apt-get install -y containerd
```
Let’s create the default containerd configuration file.
```shell
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
```
Set the cgroup driver for runc to systemd, which is required for the kubelet.
Within the [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
section, add the following lines:
```toml
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
```
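The edit can also be scripted with sed. Below is a self-contained sketch that demonstrates the substitution on a sample snippet; on a real node you would point sed at /etc/containerd/config.toml (with sudo). Note that some containerd versions omit the SystemdCgroup line from the default config entirely, in which case you must add the options section by hand as shown above.

```shell
# Demonstrate flipping SystemdCgroup on a sample config snippet.
# On a real node, run the sed command against /etc/containerd/config.toml with sudo.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = false
EOF
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' "$tmp"
grep SystemdCgroup "$tmp"   # prints the line with SystemdCgroup = true
rm -f "$tmp"
```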
Your config.toml should now contain the runc options section with SystemdCgroup set to true.
Restart containerd with the new configuration.
```shell
sudo systemctl restart containerd
```
Check the status of containerd runtime.
```shell
systemctl status containerd
```
Step 2: Install Kubernetes 1.21
Start by disabling swap memory.
```shell
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
```
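To verify that the sed edit commented out every swap entry, a quick grep helps. The sketch below demonstrates the check on a sample fstab; on your node, run the same grep against /etc/fstab.

```shell
# Demonstrate the swap-entry check on a sample fstab.
# On a real node, run the grep against /etc/fstab instead of "$tmp".
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
UUID=abcd-1234 / ext4 defaults 0 1
#UUID=ef56-7890 none swap sw 0 0
EOF
if grep -E '^[^#].*[[:space:]]swap[[:space:]]' "$tmp"; then
  echo "swap entries still active"
else
  echo "no active swap entries"
fi
rm -f "$tmp"
```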
Install the required tools.
```shell
sudo apt-get update
sudo apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt install -y kubeadm kubelet kubernetes-cni
```
Let’s initialize the control plane.
```shell
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.0.0.54
```
Make sure you replace the IP address 10.0.0.54 with the appropriate address of your host.
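If you are scripting the setup, the advertise address can be derived instead of hardcoded. This sketch assumes the first address reported by hostname -I is the one you want to advertise; on a multi-homed host, pick the interface explicitly and verify the value before running init.

```shell
# Derive the primary host IP (first address reported by hostname -I)
# for use as the kubeadm advertise address. Verify it before running init.
ADVERTISE_IP=$(hostname -I 2>/dev/null | awk '{print $1}')
echo "Advertise address: $ADVERTISE_IP"
# sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=$ADVERTISE_IP
```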
It’s time to configure the kubectl CLI.
```shell
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/
sudo chown $(id -u):$(id -g) $HOME/.kube/admin.conf
export KUBECONFIG=$HOME/.kube/admin.conf
echo "export KUBECONFIG=$HOME/.kube/admin.conf" | tee -a ~/.bashrc
```
Before we can access the cluster, we need to install the CNI addon. For this tutorial, we are using Weave Net from Weave Works.
```shell
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
```
Since we only have one node, let’s remove the taint to enable scheduling.
```shell
kubectl taint nodes --all node-role.kubernetes.io/master-
```
Finally, check the status of the cluster.
```shell
kubectl get nodes
```
Step 3: Install Nvidia GPU Operator
Start by installing the Helm 3 binary.
```shell
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
```
Add the Nvidia Helm Repository.
```shell
helm repo add nvidia https://nvidia.github.io/gpu-operator
helm repo update
```
Since we are using the containerd runtime, let’s set that as the default.
```shell
helm install --wait --generate-name \
  nvidia/gpu-operator \
  --set operator.defaultRuntime=containerd
```
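Equivalently, the runtime override can be kept in a values file and passed to Helm with -f, which is easier to version-control than --set flags. This is a sketch; the filename is arbitrary, and the block below only generates the file.

```shell
# Keep the chart override in a values file instead of a --set flag.
# The filename gpu-operator-values.yaml is an arbitrary choice.
cat > gpu-operator-values.yaml <<'EOF'
operator:
  defaultRuntime: containerd
EOF
# helm install --wait --generate-name nvidia/gpu-operator -f gpu-operator-values.yaml
```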
Within a few minutes, you should see the pods in the gpu-operator-resources namespace running.
```shell
kubectl get pods -n gpu-operator-resources
```
It’s time to test the GPU access from a pod. Run the below command to launch a test pod.
```shell
kubectl run gpu-test \
  --rm -t -i \
  --restart=Never \
  --image=nvcr.io/nvidia/cuda:10.1-base-ubuntu18.04 nvidia-smi
```
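For real workloads, you would typically request GPUs explicitly through the nvidia.com/gpu extended resource advertised by the operator's device plugin, rather than relying on defaults. Below is a sketch of such a manifest (the pod name is illustrative, and the image tag matches the test above); the block only generates the file.

```shell
# Generate a pod manifest that requests one GPU via the nvidia.com/gpu
# extended resource exposed by the GPU Operator's device plugin.
cat > gpu-test-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test
spec:
  restartPolicy: Never
  containers:
  - name: cuda
    image: nvcr.io/nvidia/cuda:10.1-base-ubuntu18.04
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1
EOF
```

Apply it with kubectl apply -f gpu-test-pod.yaml and inspect the result with kubectl logs gpu-test.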
Congratulations! In less than 10 minutes, we configured a GPU-powered Kubernetes cluster based on the containerd runtime.