
Tutorial: Build Custom Container Images for a Kubeflow Notebook Server

In this Kubeflow tutorial, we will build an end-to-end machine learning pipeline for data preparation, training, and inference.
Jul 2nd, 2021 6:00am
This tutorial is the latest installment in an explanatory series on Kubeflow, Google’s popular open source machine learning platform for Kubernetes.

In this installment, we will start exploring building an end-to-end machine learning pipeline for data preparation, training, and inference. We will delve deeper into these tasks over the next few installments.

To follow this guide, you need to have Kubeflow installed in your environment with a storage engine like Portworx supporting shared volumes for creating PVCs with RWX support.

We will train a Convolutional Neural Network (CNN) to classify images of dogs and cats. The objective is not to train the most sophisticated model but to explore how to build ML pipelines with the Kubeflow Notebook Server. This scenario will be further extended to run MLOps based on Kubeflow Pipelines. For model serving, we will leverage KFServing, one of the core building blocks of Kubeflow.

In the first part of this series, we will build custom container images for the Kubeflow Notebook Server that we will use in the remainder of this tutorial.


There are three independent steps involved in this exercise: data preparation, model training, and inference. Each step is associated with a dedicated Jupyter Notebook Server environment. The data preparation and inference environments will target the CPU, while the Jupyter Notebook used for training will run on a GPU host.

Data scientists will perform the data processing task and save the final dataset to a shared volume used by machine learning engineers training the model. The trained model is stored in another shared volume used by DevOps engineers to package and deploy the model for inference.

The following image depicts how the Notebook Servers leverage the storage engine.

Each Jupyter Notebook Server uses its own container image with the appropriate tools and frameworks. This gives the teams the flexibility they need to run their respective tasks.

Once the custom container images are built and storage is configured, our Kubeflow environment will look like the screenshot below:

While the whole pipeline can be executed with just one Notebook Server without shared folders, production deployments need isolated environments for each team.

Build the Container Images

Since each Notebook Server has a well-defined role, we will build dedicated images for each environment.

Let’s start with the image for data preparation and pre-processing. This will contain modules such as Pandas and Matplotlib useful for processing and analyzing data.

Let’s call this file Dockerfile.prep.
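The description that follows maps to a Dockerfile along these lines. This is a minimal sketch: the exact package versions, the set of command-line tools, and the notebook launch flags are assumptions, not the tutorial's verbatim file.

```dockerfile
# Dockerfile.prep -- data preparation environment (sketch; package choices are assumptions)
FROM ubuntu:18.04

# Install Python 3, Pip, and essential command-line tools
RUN apt-get update && apt-get install -y \
    python3 python3-pip \
    curl wget git vim \
    && rm -rf /var/lib/apt/lists/*

# Python modules for processing and analyzing data
RUN pip3 install --no-cache-dir jupyter pandas matplotlib numpy

# The Kubeflow notebook controller injects NB_PREFIX as the base URL
ENV NB_PREFIX /

# Port for the Jupyter web interface
EXPOSE 8888

CMD ["sh", "-c", "jupyter notebook --notebook-dir=/home/jovyan --ip=0.0.0.0 \
    --no-browser --allow-root --port=8888 \
    --NotebookApp.token='' --NotebookApp.password='' \
    --NotebookApp.allow_origin='*' --NotebookApp.base_url=${NB_PREFIX}"]
```

Note that the token and password are disabled here because Kubeflow fronts the notebook with its own authentication; adjust these flags if you run the image outside Kubeflow.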

The commands used in the above Dockerfile are self-explanatory. We start with the Ubuntu 18.04 base image and install Python 3 along with Pip. We then install the required Python packages, followed by the essential command-line tools. Finally, we expose port 8888 for accessing the Jupyter web interface and launch the notebook with the right set of parameters.

If you are wondering why we use the environment variable NB_PREFIX, refer to the Kubeflow docs: the Kubeflow notebook controller uses this variable to configure the base URL of the notebook server.

Build the image and push it to Docker Hub or any other image registry.

docker build -t janakiramm/dataprep -f Dockerfile.prep .

docker push janakiramm/dataprep

It’s time to build the image for the training Notebook Server. We will use the latest TensorFlow image with GPU support. Let’s call this file Dockerfile.train.
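A sketch of Dockerfile.train is shown below. The TensorFlow image tag and the extra packages are assumptions; pin a specific TensorFlow version in production rather than relying on latest.

```dockerfile
# Dockerfile.train -- GPU training environment (sketch; tag and packages are assumptions)
# The -gpu-jupyter variants ship with the required CUDA libraries and Jupyter preinstalled
FROM tensorflow/tensorflow:latest-gpu-jupyter

# OS tools
RUN apt-get update && apt-get install -y curl git \
    && rm -rf /var/lib/apt/lists/*

# Additional Python libraries for the training workflow
RUN pip install --no-cache-dir pandas matplotlib pillow

# Base URL managed by the Kubeflow notebook controller
ENV NB_PREFIX /

EXPOSE 8888

CMD ["sh", "-c", "jupyter notebook --notebook-dir=/home/jovyan --ip=0.0.0.0 \
    --no-browser --allow-root --port=8888 \
    --NotebookApp.token='' --NotebookApp.password='' \
    --NotebookApp.allow_origin='*' --NotebookApp.base_url=${NB_PREFIX}"]
```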

We start with the base image of TensorFlow GPU that comes with all the required CUDA libraries. We then install the required Python libraries and OS tools before exposing the Jupyter Notebook URL.

Build and upload this image to the registry.

docker build -t janakiramm/train -f Dockerfile.train .

docker push janakiramm/train

Finally, for model serving and testing, we will create a TensorFlow CPU-based image.
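Dockerfile.infer follows the same pattern as the training image, swapped to the CPU-only TensorFlow base. Again, the tag and package list here are assumptions:

```dockerfile
# Dockerfile.infer -- CPU serving and testing environment (sketch; tag and packages are assumptions)
FROM tensorflow/tensorflow:latest-jupyter

# OS tools and client libraries for testing the deployed model endpoint
RUN apt-get update && apt-get install -y curl git \
    && rm -rf /var/lib/apt/lists/*
RUN pip install --no-cache-dir requests pillow numpy

# Base URL managed by the Kubeflow notebook controller
ENV NB_PREFIX /

EXPOSE 8888

CMD ["sh", "-c", "jupyter notebook --notebook-dir=/home/jovyan --ip=0.0.0.0 \
    --no-browser --allow-root --port=8888 \
    --NotebookApp.token='' --NotebookApp.password='' \
    --NotebookApp.allow_origin='*' --NotebookApp.base_url=${NB_PREFIX}"]
```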

Build and push the image to the registry.

docker build -t janakiramm/infer -f Dockerfile.infer .

docker push janakiramm/infer

In the next part of this tutorial, we will configure the Kubernetes Storage Classes and Persistent Volumes required to run the Notebook Servers. Stay tuned.

TNS owner Insight Partners is an investor in: Docker.