
Tutorial: Deploy the Nvidia GPU Operator on Kubernetes Based on containerd Runtime


This tutorial explores the steps to install the Nvidia GPU Operator on a Kubernetes cluster with GPU hosts running the containerd runtime instead of Docker Engine.

In a typical GPU-based Kubernetes installation, each node needs to be configured with the correct version of the Nvidia graphics driver, the CUDA runtime, and the cuDNN libraries, followed by a container runtime such as Docker Engine, containerd, Podman, or CRI-O. Then the Nvidia Container Toolkit is deployed to provide GPU access to containerized applications. Finally, Kubernetes is installed and interacts with the chosen container runtime to manage the lifecycle of workloads.

The Nvidia GPU Operator dramatically simplifies this process: instead of installing the drivers, CUDA runtime, cuDNN libraries, and Container Toolkit yourself, you let the operator deploy and manage them. It can be installed on any Kubernetes cluster that meets specific hardware and software requirements.

Below are the steps to install containerd, Kubernetes, and the Nvidia GPU Operator. Toward the end of the installation, we will test GPU access by running the popular nvidia-smi command within a pod.

Environment

Operating system: Ubuntu 18.04 LTS Server
GPU: Nvidia GeForce RTX 3090
CPU: AMD Ryzen ThreadRipper 3990X
RAM: 128GB
Storage: 4TB NVMe SSD

Step 1: Install Containerd Runtime

Load the required modules and ensure they are persisted during reboots.
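
A minimal sketch of this step, following the standard containerd prerequisites from the Kubernetes documentation (the file name under /etc/modules-load.d is a convention, not a requirement):

# Persist the required kernel modules across reboots
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

# Load them immediately
sudo modprobe overlay
sudo modprobe br_netfilter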



Load the sysctl parameters without rebooting the system.
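
For example, persist the usual bridge and IP-forwarding settings under /etc/sysctl.d and apply them in place:

cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

# Apply the settings without a reboot
sudo sysctl --system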


Finally, install the containerd runtime.
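
On Ubuntu 18.04, the distribution package is the simplest route (pinning a specific version may be preferable in production):

sudo apt-get update
sudo apt-get install -y containerd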


Let’s create the default containerd configuration file.
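
containerd can emit its full default configuration, which we write to the standard location:

sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml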


Set the cgroup driver for runc to systemd, which is required for the kubelet.

Within the [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc] section, add the following lines:
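
The addition is the runc.options subsection with SystemdCgroup enabled:

  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true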


After this change, the runc section of your config.toml should include the SystemdCgroup = true option.

Restart containerd with the new configuration.
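
A single systemctl call picks up the edited config.toml:

sudo systemctl restart containerd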


Check the status of the containerd runtime.
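
It should report active (running):

sudo systemctl status containerd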


Step 2: Install Kubernetes 1.21

Start by disabling swap memory.
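
One common pattern is to turn swap off immediately and comment out its /etc/fstab entry so the change survives reboots (the sed expression below is one way to do that):

sudo swapoff -a
# Comment out any swap entries so swap stays off after a reboot
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab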


Install the required tools.
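
A sketch based on the upstream apt repository that was current for Kubernetes 1.21 (the packages.cloud.google.com repository has since been deprecated in favor of pkgs.k8s.io):

sudo apt-get update && sudo apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
# Prevent unintended upgrades of the cluster components
sudo apt-mark hold kubelet kubeadm kubectl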


Let’s initialize the control plane.
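
A minimal invocation; the advertise address below matches the host used in this tutorial, and you may add flags such as --pod-network-cidr if your CNI requires one:

sudo kubeadm init --apiserver-advertise-address=10.0.0.54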


Make sure you replace the IP address 10.0.0.54 with the appropriate address of your host.

It’s time to configure kubectl CLI.
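
These are the post-init steps that kubeadm itself prints when the control plane comes up:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config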


Before we can access the cluster, we need to install the CNI addon. For this tutorial, we are using Weave Net from Weave Works.
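
Weave Net's documented one-line installer at the time selected a manifest matching the running Kubernetes version (the cloud.weave.works endpoint has since been retired):

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"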


Since we only have one node, let’s remove the taint to enable scheduling.
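
In Kubernetes 1.21, the control plane node carries the node-role.kubernetes.io/master taint, which the trailing hyphen removes:

kubectl taint nodes --all node-role.kubernetes.io/master-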


Finally, check the status of the cluster.
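
The node should report Ready, and every system pod should be Running:

kubectl get nodes
kubectl get pods --all-namespaces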


Step 3: Install Nvidia GPU Operator

Start by installing the Helm 3 binary.
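
One option is the official installer script from the Helm project:

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh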


Add the Nvidia Helm Repository.
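
This uses the repository URL Nvidia documented at the time (the chart has since moved to helm.ngc.nvidia.com):

helm repo add nvidia https://nvidia.github.io/gpu-operator
helm repo update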


Since we are using the containerd runtime, let’s set that as the default.
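
The chart exposes the default runtime as a Helm value; a representative install command for that release series of the operator:

helm install --wait --generate-name nvidia/gpu-operator --set operator.defaultRuntime=containerd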


Within a few minutes, you should see the pods in the gpu-operator-resources namespace running.
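
For example:

kubectl get pods -n gpu-operator-resources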


It’s time to test GPU access from a pod. Run the command below to launch a test pod.
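
A sketch of such a test pod; the pod name gpu-test and the CUDA base image tag are illustrative choices, not requirements:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test
spec:
  restartPolicy: OnFailure
  containers:
  - name: cuda
    image: nvidia/cuda:11.0-base
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1
EOF

# Once the pod completes, inspect the nvidia-smi output
kubectl logs gpu-test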


Congratulations! In less than 10 minutes, we configured a GPU-powered Kubernetes cluster based on the containerd runtime.
