
Install the Nvidia GPU Operator on an RKE2 Kubernetes Cluster

In this tutorial, we will walk you through the steps of installing the NVIDIA GPU Operator on Rancher’s RKE2 Kubernetes distribution.
Nov 9th, 2021 5:00am
Feature image: The Nvidia Container Toolkit.

In a typical GPU-based Kubernetes installation, such as for machine learning, each node needs to be configured with the correct versions of the Nvidia graphics driver, CUDA runtime, and cuDNN libraries, followed by a container runtime such as Docker Engine, containerd, Podman, or CRI-O.

Then, the Nvidia Container Toolkit is deployed to provide GPU access to containerized applications. The Nvidia device plugin for Kubernetes bridges the gap between the GPU and the container orchestrator. Finally, Kubernetes is installed, and it interacts with the chosen container runtime to manage the lifecycle of workloads.

The Nvidia GPU Operator dramatically simplifies this process by removing the need to manually install the drivers, CUDA runtime, cuDNN libraries, and the Nvidia Container Toolkit. It can be installed on any Kubernetes cluster that meets specific hardware and software requirements.

Compared to installation on an upstream Kubernetes distribution, installation on RKE2 is slightly different. The key difference is that RKE2 ships with an embedded containerd that needs to be tweaked a bit to support the Nvidia Container Toolkit.

Once RKE2 is configured with the GPU Operator, you can run workloads such as Kubeflow and Triton Inference Server that can exploit the GPU for AI acceleration.

In this tutorial, I will walk you through all the steps of installing the Nvidia GPU Operator on Rancher’s RKE2 Kubernetes distribution.

For this setup, I am using an Ubuntu 20.04 server running on Google Compute Engine. The VM is of type a2-highgpu-1g, powered by an Nvidia Tesla A100 GPU. The setup has been tested with RKE2 version v1.21.5+rke2r2, but you can follow this guide in any bare metal or IaaS environment with access to an Nvidia GPU.

Step 1: Install RKE2

SSH into the instance and create the file /etc/rancher/rke2/config.yaml with the below contents:
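The exact values depend on your environment; a minimal sketch, with placeholder hostname and IP addresses, looks like this:

write-kubeconfig-mode: "0644"
tls-san:
  - rke2-master       # hostname (placeholder)
  - 10.128.0.2        # internal IP (placeholder)
  - 34.132.10.20      # external IP (placeholder)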

This file contains the configuration required by the RKE2 server. Don’t forget to replace the tls-san section with the hostname, internal IP, and external IP address of your GCE instance.

Download and run the install script for RKE2. Once it’s done, start the service and enable it to launch at boot time.
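Assuming a fresh Ubuntu host with outbound internet access, the standard install looks like this:

curl -sfL https://get.rke2.io | sudo sh -
sudo systemctl enable --now rke2-server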

Add the directory containing the Kubernetes binaries to the PATH, and run the kubectl command to check the status of the server.
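On a default RKE2 install, the binaries and kubeconfig land in the paths shown below:

export PATH=$PATH:/var/lib/rancher/rke2/bin
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
kubectl get nodes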

Checking the status of the Kubernetes server.

Step 2: Install Helm and Patch Containerd Configuration

Since we will deploy the GPU operator through a Helm chart, let’s first install Helm 3.
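The official get-helm-3 script is the quickest route:

curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash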

The next step is the most crucial one for deploying the Nvidia GPU Operator. We will patch the embedded containerd’s configuration to enable the v2 config schema, without which the Nvidia Container Toolkit will not run.
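One way to do this, assuming the generated config is still on the v1 schema on your RKE2 release, is to copy it to the config.toml.tmpl template that RKE2 renders at startup and declare version 2 at the top. This is a sketch, so verify the resulting file against your RKE2 release:

# Copy the generated containerd config to the template RKE2 honors
cp /var/lib/rancher/rke2/agent/etc/containerd/config.toml \
   /var/lib/rancher/rke2/agent/etc/containerd/config.toml.tmpl

# Declare the v2 config schema at the top of the template
# (assumes the stock file does not already carry a version header)
sed -i '1i version = 2' /var/lib/rancher/rke2/agent/etc/containerd/config.toml.tmpl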

Restart the RKE2 server for the changes to take effect.

systemctl restart rke2-server

Step 3: Deploy Nvidia GPU Operator on RKE2

We have everything in place to deploy the GPU operator.

Let’s add the Nvidia Helm chart repo, refresh the repo index, and install the GPU operator.
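The following sketch uses the Container Toolkit’s CONTAINERD_* environment variables to point it at RKE2’s embedded containerd; the release name, namespace, and paths assume a default RKE2 install, so adjust them for your environment:

helm repo add nvidia https://nvidia.github.io/gpu-operator
helm repo update
helm install gpu-operator nvidia/gpu-operator \
  -n gpu-operator-resources --create-namespace \
  --set 'toolkit.env[0].name=CONTAINERD_CONFIG' \
  --set 'toolkit.env[0].value=/var/lib/rancher/rke2/agent/etc/containerd/config.toml.tmpl' \
  --set 'toolkit.env[1].name=CONTAINERD_SOCKET' \
  --set 'toolkit.env[1].value=/run/k3s/containerd/containerd.sock' \
  --set 'toolkit.env[2].name=CONTAINERD_RUNTIME_CLASS' \
  --set 'toolkit.env[2].value=nvidia' \
  --set 'toolkit.env[3].name=CONTAINERD_SET_AS_DEFAULT' \
  --set-string 'toolkit.env[3].value=true'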

Refer to the Nvidia GPU Operator documentation for details on customizing the Helm chart values. In this case, we are essentially pointing the GPU operator to the custom container runtime class, configuration, and endpoint.

Modifying a Helm chart to accommodate GPUs.

The complete installation of the GPU operator will take a few minutes. Be patient!

Step 4: Verifying and Testing the Installation of Nvidia GPU Operator

The Helm chart has created a new namespace called gpu-operator-resources.

Wait for the pods in the gpu-operator-resources namespace to become ready.
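You can watch the rollout with kubectl:

kubectl get pods -n gpu-operator-resources --watch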

Waiting for the pods in the gpu-operator-resources namespace to become ready.

Finally, let’s run the famous nvidia-smi command to check if a Kubernetes pod can access the GPU.
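One way to run it, assuming a throwaway pod named nvidia-smi and a CUDA base image (pick an nvidia/cuda tag that is available to you):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nvidia-smi
spec:
  restartPolicy: Never
  containers:
  - name: nvidia-smi
    image: nvidia/cuda:11.4.2-base-ubuntu20.04   # example tag (assumption)
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1    # request one GPU from the device plugin
EOF

kubectl logs -f nvidia-smi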

Checking if the Kubernetes pod can access the GPU.

As we can see from the output, the GPU operator has successfully installed and configured the Nvidia driver, CUDA runtime, and the Container Toolkit without any manual intervention.
