Tutorial: Build Custom Container Images for a Kubeflow Notebook Server

In this installment, we will start building an end-to-end machine learning pipeline covering data preparation, training, and inference. We will delve deeper into these tasks over the next few installments.
To follow this guide, you need Kubeflow installed in your environment, backed by a storage engine such as Portworx that supports shared volumes, so that PersistentVolumeClaims (PVCs) can be created with ReadWriteMany (RWX) access.
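If you want to verify this prerequisite before proceeding, the following is a minimal sketch of a shared PVC request. The storage class name px-shared-sc is only an illustrative placeholder; substitute the RWX-capable class that your Portworx (or equivalent) installation exposes.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rwx-check
spec:
  # The cluster must be able to bind a ReadWriteMany claim
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  # Placeholder: use the shared storage class provided by your storage engine
  storageClassName: px-shared-sc
EOF
If the claim binds successfully, your cluster is ready for the shared volumes used later in this series.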
We will train a Convolutional Neural Network (CNN) to classify images of dogs and cats. The objective is not to train the most sophisticated model but to explore how to build ML pipelines with the Kubeflow Notebook Server. This scenario will be further extended to run MLOps based on Kubeflow Pipelines. For model serving, we will leverage KFServing, one of the core building blocks of Kubeflow.
In the first part of this series, we will build custom container images for the Kubeflow Notebook Server that we will use in the remainder of this tutorial.
Overview
There are three independent steps involved in this exercise: data preparation, model training, and inference. Each step is associated with a dedicated Jupyter Notebook Server environment. The data preparation and inference environments will target the CPU, while the Jupyter Notebook used for training will run on a GPU host.
Data scientists will perform the data processing task and save the final dataset to a shared volume used by machine learning engineers training the model. The trained model is stored in another shared volume used by DevOps engineers to package and deploy the model for inference.
The following image depicts how the Notebook Servers leverage the storage engine.
Each Jupyter Notebook Server uses its own container image with the appropriate tools and frameworks. This gives the teams the flexibility they need to run their respective tasks.
Once the custom container images are built and storage is configured, our Kubeflow environment will look like the screenshot below.
While the whole pipeline can be executed with just one Notebook Server without shared folders, production deployments need isolated environments for each team.
Build the Container Images
Since each Notebook Server has a well-defined role, we will build dedicated images for each environment.
Let’s start with the image for data preparation and pre-processing. This image will contain modules such as Pandas and Matplotlib, which are useful for processing and analyzing data.
FROM ubuntu:18.04

RUN apt-get update && apt-get install -y \
    python3 \
    python3-pip

RUN python3 -m pip --no-cache-dir install --upgrade \
    "pip<20.3" \
    setuptools

RUN python3 -m pip install --no-cache-dir \
    jupyter \
    matplotlib \
    pandas \
    scipy \
    imutils \
    opencv-python

RUN apt-get install -y --no-install-recommends \
    zip \
    unzip \
    wget \
    git \
    libgl1-mesa-glx

EXPOSE 8888

ENV NB_PREFIX /

CMD ["bash","-c", "jupyter notebook --notebook-dir=/home/jovyan --ip=0.0.0.0 --no-browser --allow-root --port=8888 --NotebookApp.token='' --NotebookApp.password='' --NotebookApp.allow_origin='*' --NotebookApp.base_url=${NB_PREFIX}"]
Let’s call this file Dockerfile.prep.
The commands used in the above Dockerfile are self-explanatory. We start with the base image of Ubuntu 18.04 and then install Python 3 along with Pip. We then install the required Python packages followed by the essential command-line tools. Finally, we expose port 8888 for accessing the Jupyter Notebook web interface and launch the notebook with the right set of parameters.
If you are wondering why we are using the environment variable NB_PREFIX, refer to the Kubeflow docs to understand how the notebook controller uses it to configure the URL. Essentially, the Kubeflow notebook controller sets NB_PREFIX to the base URL at which each notebook server is exposed; for example, a server named dataprep in the kubeflow-user namespace is typically served under /notebook/kubeflow-user/dataprep/.
Build the image and push it to Docker Hub or any other image registry.
docker build -t janakiramm/dataprep -f Dockerfile.prep .
docker push janakiramm/dataprep
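You can optionally smoke-test the image locally. Since /home/jovyan is normally provided by the volume that Kubeflow mounts, mount the current directory there when running outside the cluster; the notebook should then be reachable at http://localhost:8888.
docker run --rm -p 8888:8888 -v "$PWD":/home/jovyan janakiramm/dataprep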
It’s time to build the image for the training Notebook Server. We will use the latest TensorFlow image with support for GPUs.
FROM tensorflow/tensorflow:latest-gpu-jupyter

RUN /usr/bin/python3 -m pip install --upgrade pip

# scikit-learn is the installable package name for the sklearn module
RUN pip install pandas \
    scikit-learn \
    scipy \
    matplotlib \
    imutils \
    opencv-python

RUN apt-get update
RUN apt-get install -y git \
    wget \
    libgl1-mesa-glx

ENV NB_PREFIX /

CMD ["sh","-c", "jupyter notebook --notebook-dir=/home/jovyan --ip=0.0.0.0 --no-browser --allow-root --port=8888 --NotebookApp.token='' --NotebookApp.password='' --NotebookApp.allow_origin='*' --NotebookApp.base_url=${NB_PREFIX}"]
We start with the TensorFlow GPU base image, which comes with all the required CUDA libraries. We then install the required Python libraries and OS tools before configuring the Jupyter Notebook launch command.
Build and upload this image to the registry.
docker build -t janakiramm/train -f Dockerfile.train .
docker push janakiramm/train
Finally, for model serving and testing, we will create a TensorFlow CPU-based image.
FROM tensorflow/tensorflow:latest-jupyter

RUN /usr/bin/python3 -m pip install --upgrade pip

# scikit-learn is the installable package name for the sklearn module
RUN pip install pandas \
    scikit-learn \
    scipy \
    matplotlib \
    imutils \
    opencv-python

RUN apt-get update
RUN apt-get install -y git \
    wget \
    libgl1-mesa-glx

ENV NB_PREFIX /

CMD ["sh","-c", "jupyter notebook --notebook-dir=/home/jovyan --ip=0.0.0.0 --no-browser --allow-root --port=8888 --NotebookApp.token='' --NotebookApp.password='' --NotebookApp.allow_origin='*' --NotebookApp.base_url=${NB_PREFIX}"]
Build and push the image to the registry.
docker build -t janakiramm/infer -f Dockerfile.infer .
docker push janakiramm/infer
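Once the images are in the registry, they can be referenced when creating the Notebook Servers, either from the Kubeflow dashboard by supplying a custom image name, or declaratively through the Notebook custom resource. The snippet below is only a sketch: the namespace kubeflow-user and the server name dataprep are placeholders, the exact fields may differ slightly across Kubeflow versions, and in practice you will also attach the shared workspace volume at /home/jovyan, which is what the next part of this series sets up.
cat <<EOF | kubectl apply -f -
apiVersion: kubeflow.org/v1
kind: Notebook
metadata:
  name: dataprep
  namespace: kubeflow-user
spec:
  template:
    spec:
      containers:
        - name: dataprep
          # Custom image built from Dockerfile.prep
          image: janakiramm/dataprep:latest
EOF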
In the next part of this tutorial, we will configure the Kubernetes Storage Classes and Persistent Volumes required to run the Notebook Servers. Stay tuned.