
Train a TensorFlow Model with a Kubeflow Jupyter Notebook Server

This series aims to demonstrate how Kubeflow helps organizations with machine learning operations (MLOps).
Jul 30th, 2021 7:35am
This tutorial is the latest installment in an explanatory series on Kubeflow, Google’s popular open source machine learning platform for Kubernetes. Check back each Friday for future installments.

In the last part of this series, we launched a custom Jupyter Notebook Server backed by a shared PVC to prepare and process the raw dataset. In this tutorial, we will use that dataset to train and save a TensorFlow model. The saved model will be stored in another shared PVC, which the deployment Notebook Server will access.

Remember, this series aims not to build an extremely complex neural network but to demonstrate how Kubeflow helps organizations with machine learning operations (MLOps).

Before proceeding with this tutorial, make sure you have completed the steps explained in the previous part of the series.

Since the training Notebook Server can take advantage of available GPUs, we built a custom container image optimized for GPUs. If you have not built the container images, refer to the first part of the series.

The training Notebook Server loads the dataset saved by the data preparation Notebook Server while persisting the model to the models directory backed by the second PVC, which also supports the RWX (ReadWriteMany) access mode.

Let’s go ahead and launch the Notebook Server.

Since the training environment needs more compute power, we assigned four CPUs and 4GiB of RAM to it. If a GPU is available in the cluster, you can associate it with the Notebook Server.

Let’s add a new PVC volume that becomes the home directory of the notebook. Apart from that, we will attach existing PVCs — datasets and models — to share the artifacts. The datasets directory contains the pre-processed dataset created by the data scientists. We will populate the models directory with the TensorFlow model.

Go ahead and launch the Notebook Server. You should now have the data preparation and model training servers visible in the Kubeflow dashboard.

The training Notebook Server is essentially a Kubernetes pod that belongs to a StatefulSet. You can verify this with the below kubectl command:
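A minimal check looks like this (kubeflow-user is an example namespace; substitute the profile namespace of your Kubeflow user):

```bash
# Show the StatefulSet and the pod backing the Notebook Server
kubectl get statefulsets,pods -n kubeflow-user
```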


When you connect to the notebook, you will see two shared directories based on the PVCs that we mounted earlier.

Download the Jupyter Notebook for training from the GitHub repository, upload it to your environment, and open it.

Let’s start by importing the Python libraries and modules.
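The exact imports depend on the notebook, but a representative set for this workflow is sketched below:

```python
import os

import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.image import ImageDataGenerator
```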


We will check if there is a GPU accessible to the Notebook Server. In my environment, I have a GPU node available in the Kubeflow cluster.
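A quick way to confirm this from within the notebook:

```python
# An empty list means TensorFlow will fall back to the CPU
gpus = tf.config.list_physical_devices('GPU')
print('Num GPUs available:', len(gpus))
```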


From the datasets shared directory, let’s load the train and validation datasets.
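A sketch of this step, assuming the data preparation notebook saved the arrays as .npy files and the datasets PVC is mounted at /home/jovyan/datasets (the path and file names here are assumptions; match them to your preparation notebook):

```python
# Path where the datasets PVC is assumed to be mounted in this Notebook Server
data_dir = '/home/jovyan/datasets'

# Hypothetical file names written by the data preparation notebook
x_train = np.load(os.path.join(data_dir, 'x_train.npy'))
y_train = np.load(os.path.join(data_dir, 'y_train.npy'))
x_val = np.load(os.path.join(data_dir, 'x_val.npy'))
y_val = np.load(os.path.join(data_dir, 'y_val.npy'))

print(x_train.shape, x_val.shape)
```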


We will define the image characteristics expected by the neural network.
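Something along these lines, with values matching however the images were resized during preparation (the numbers below are assumptions):

```python
# Input dimensions the network expects (assumed values)
IMG_HEIGHT = 128
IMG_WIDTH = 128
CHANNELS = 3   # RGB images
```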


Let’s go ahead and define the neural network to train the model.
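A sketch of such a network; the filter counts and layer ordering are illustrative, but the overall structure — stacked convolution/pooling blocks followed by dense layers, 17 layers in total — matches the description below:

```python
model = models.Sequential([
    # Block 1
    layers.Conv2D(32, (3, 3), activation='relu',
                  input_shape=(IMG_HEIGHT, IMG_WIDTH, CHANNELS)),
    layers.BatchNormalization(),
    layers.MaxPooling2D((2, 2)),
    layers.Dropout(0.25),

    # Block 2
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.BatchNormalization(),
    layers.MaxPooling2D((2, 2)),
    layers.Dropout(0.25),

    # Block 3
    layers.Conv2D(128, (3, 3), activation='relu'),
    layers.BatchNormalization(),
    layers.MaxPooling2D((2, 2)),
    layers.Dropout(0.25),

    # Classifier head
    layers.Flatten(),
    layers.Dense(512, activation='relu'),
    layers.BatchNormalization(),
    layers.Dropout(0.5),
    layers.Dense(1, activation='sigmoid')  # binary output: dog vs. cat
])
```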


This is a simple convolutional neural network with 17 layers. You can visualize this with the below statement:
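In Keras, model.summary() prints the layer-by-layer structure along with the parameter counts:

```python
model.summary()
```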


Next, we will define the hyperparameters for training.
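The values below are placeholders; tune them for your dataset and hardware:

```python
# Assumed hyperparameter values
EPOCHS = 15
BATCH_SIZE = 64
LEARNING_RATE = 1e-3

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
    loss='binary_crossentropy',
    metrics=['accuracy'])
```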


Let’s augment the data through the ImageDataGenerator:
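A typical configuration, with random transformations applied only to the training set:

```python
# Random transformations make the model more robust to variations in the images
train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=15,
    width_shift_range=0.1,
    height_shift_range=0.1,
    horizontal_flip=True)

# Validation images are only rescaled, never augmented
val_datagen = ImageDataGenerator(rescale=1.0 / 255)

train_generator = train_datagen.flow(x_train, y_train, batch_size=BATCH_SIZE)
val_generator = val_datagen.flow(x_val, y_val, batch_size=BATCH_SIZE)
```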



We are now ready to train by calling the model.fit method with the network and parameters created above.
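Keeping the returned history object lets us plot the training curves afterward:

```python
history = model.fit(
    train_generator,
    epochs=EPOCHS,
    validation_data=val_generator)
```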


Depending on the compute horsepower available, this step can take a few minutes.

Let’s plot the accuracy and loss of the training and validation datasets for each epoch.
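With the history recorded by model.fit, the curves can be plotted with Matplotlib:

```python
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(len(acc))

plt.figure(figsize=(12, 5))

plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Loss')

plt.show()
```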


At over 90% accuracy, our model has reached a decent level of performance.

It’s time for us to save the model to the shared directory in TensorFlow’s SavedModel format.
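Saving into a numbered subdirectory keeps the layout compatible with TensorFlow Serving, which we will rely on in the next part. The mount path and model name below are assumptions:

```python
# The models PVC is assumed to be mounted at /home/jovyan/models;
# the versioned subdirectory ("1") follows the TensorFlow Serving convention
model_path = '/home/jovyan/models/dogcat/1'
model.save(model_path)
print('Model saved to', model_path)
```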


We now have a trained model that can classify the images of dogs and cats. In the next part of this series, we will deploy this model for inference. Stay tuned.
