Train a TensorFlow Model with a Kubeflow Jupyter Notebook Server

In the last part of this series, we launched a custom Jupyter Notebook Server backed by a shared PVC to prepare and process the raw dataset. In the current tutorial, let’s utilize the dataset to train and save a TensorFlow model. The saved model will be stored in another shared PVC which will be accessed by the deployment Notebook Server.
Remember, this series aims not to build an extremely complex neural network but to demonstrate how Kubeflow helps organizations with machine learning operations (MLOps).
Before proceeding with this tutorial, make sure you have completed the steps explained in the previous part of the series.
Since the training Notebook Server can take advantage of available GPUs, we built a custom container image optimized for GPUs. If you have not built the container images yet, refer to the first part of the series.
The training Notebook Server loads the dataset saved by the data preparation Notebook Server while persisting the model to the models directory, which is backed by the second PVC and also supports the RWX access mode.
Let’s go ahead and launch the Notebook Server.
Since the training environment needs more compute power, we assigned four CPUs and 4GiB of RAM. If a GPU is available in the cluster, you can associate it with the Notebook Server.
Let’s add a new PVC volume that becomes the home directory of the notebook. Apart from that, we will attach the existing PVCs — datasets and models — to share the artifacts. The datasets directory contains the pre-processed dataset created by the data scientists. We will populate the models directory with the TensorFlow model.
Go ahead and launch the Notebook Server. You should now have the data preparation and model training servers visible in the Kubeflow dashboard.
The training Notebook Server is essentially a Kubernetes pod that belongs to a StatefulSet. You can verify this with the kubectl command below:
kubectl get pods -n kubeflow-user-example-com
When you connect to the notebook, you will see two shared directories based on the PVCs that we mounted earlier.
Download the Jupyter Notebook for training from the GitHub repository, upload it to your environment, and open it.
Let’s start by importing the Python libraries and modules.
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator, load_img
from tensorflow.keras.utils import to_categorical
We will check if there is a GPU accessible to the Notebook Server. In my environment, I have a GPU node available in the Kubeflow cluster.
tf.config.list_physical_devices('GPU')
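If a GPU shows up in the list, you can optionally ask TensorFlow to allocate GPU memory on demand instead of reserving it all up front. This is a small optional tweak rather than part of the original notebook, and it must run before any GPU work starts:

# Enable on-demand memory growth for each detected GPU (no effect on CPU-only nodes).
for gpu in tf.config.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)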
From the datasets shared directory, let’s load the train and validation datasets.
train_df = pd.read_csv('datasets/dogs_vs_cats-train.csv')
validate_df = pd.read_csv('datasets/dogs_vs_cats-val.csv')
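Optionally, you can take a quick look at what was loaded before going further. The snippet below is just a sanity check; it assumes the CSV files carry the filename and category columns produced in the previous part (the same columns the data generators use later):

# Inspect the size and class balance of the pre-processed dataset.
print(train_df.shape, validate_df.shape)
print(train_df['category'].value_counts())
train_df.head()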
We will define the image characteristics expected by the neural network.
Image_Width = 128
Image_Height = 128
Image_Size = (Image_Width, Image_Height)
Image_Channels = 3
Let’s go ahead and define the neural network to train the model.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, \
    Dropout, Flatten, Dense, Activation, \
    BatchNormalization

model = Sequential()

model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(Image_Width, Image_Height, Image_Channels)))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

model.add(Conv2D(128, (3, 3), activation='relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

model.add(Flatten())
model.add(Dense(512, activation='relu'))
model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(Dense(2, activation='softmax'))

model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
This is a simple convolutional neural network with 17 layers. You can visualize this with the below statement:
model.summary()
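If the pydot and graphviz packages happen to be installed in the notebook image (they are not assumed to be part of the custom image we built), you can also render the architecture as a diagram:

from tensorflow.keras.utils import plot_model

# Write a PNG of the layer graph; requires pydot and graphviz to be installed.
plot_model(model, to_file='model.png', show_shapes=True)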
Next, we will define the training callbacks, which handle early stopping and learning rate reduction.
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau

earlystop = EarlyStopping(patience=10)
learning_rate_reduction = ReduceLROnPlateau(monitor='val_accuracy',
                                            patience=2,
                                            verbose=1,
                                            factor=0.5,
                                            min_lr=0.00001)
callbacks = [earlystop, learning_rate_reduction]
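Since the models directory is a shared volume, you could also checkpoint the best weights there during training. This is an optional sketch, not part of the original notebook, and the models/best-weights.h5 file name is only an example:

from tensorflow.keras.callbacks import ModelCheckpoint

# Keep only the weights of the epoch with the best validation accuracy.
checkpoint = ModelCheckpoint('models/best-weights.h5',
                             monitor='val_accuracy',
                             save_best_only=True,
                             verbose=1)

# Uncomment to enable checkpointing during training.
# callbacks.append(checkpoint)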
Let’s augment the data through the ImageDataGenerator:
train_df = train_df.reset_index(drop=True)
validate_df = validate_df.reset_index(drop=True)

total_train = train_df.shape[0]
total_validate = validate_df.shape[0]
batch_size = 15
train_datagen = ImageDataGenerator(rotation_range=15,
                                   rescale=1./255,
                                   shear_range=0.1,
                                   zoom_range=0.2,
                                   horizontal_flip=True,
                                   width_shift_range=0.1,
                                   height_shift_range=0.1)

train_generator = train_datagen.flow_from_dataframe(train_df,
                                                    None,
                                                    x_col='filename',
                                                    y_col='category',
                                                    target_size=Image_Size,
                                                    class_mode='categorical',
                                                    batch_size=batch_size)

validation_datagen = ImageDataGenerator(rescale=1./255)
validation_generator = validation_datagen.flow_from_dataframe(
    validate_df,
    None,
    x_col='filename',
    y_col='category',
    target_size=Image_Size,
    class_mode='categorical',
    batch_size=batch_size
)
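Before training, it can help to eyeball a few augmented samples coming out of the generator. The snippet below is a quick sanity check rather than part of the original notebook:

import matplotlib.pyplot as plt

# Pull one augmented batch and display the first five images.
images, labels = next(train_generator)
plt.figure(figsize=(10, 4))
for i in range(min(5, len(images))):
    plt.subplot(1, 5, i + 1)
    plt.imshow(images[i])
    plt.axis('off')
plt.show()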
We are now ready to train by calling the model.fit method with the data generators and callbacks created above.
epochs = 20
history = model.fit(
    train_generator,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=total_validate // batch_size,
    steps_per_epoch=total_train // batch_size,
    callbacks=callbacks
)
Depending on the compute horsepower available, this step can take a few minutes.
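Because the models directory is shared, you can optionally persist the per-epoch metrics next to the model so they can be compared across runs later. The models/history.csv file name below is just an example, not part of the original workflow:

# Save the per-epoch training and validation metrics to the shared volume.
pd.DataFrame(history.history).to_csv('models/history.csv', index=False)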
Let’s plot the accuracy and loss of the training and validation datasets for each epoch.
import matplotlib.pyplot as plt

acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']

epochs = range(len(acc))

plt.plot(epochs, acc)
plt.plot(epochs, val_acc)
plt.title('Training and validation accuracy')
plt.figure()

plt.plot(epochs, loss)
plt.plot(epochs, val_loss)
plt.title('Training and validation loss')
With an accuracy of over 90%, our model has reached a decent level of performance.
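If you prefer a single number to reading it off the plot, you can also evaluate the trained model against the validation generator; this is a quick check rather than part of the original notebook:

# Returns [loss, accuracy] because the model was compiled with metrics=['accuracy'].
val_loss, val_accuracy = model.evaluate(validation_generator,
                                        steps=total_validate // batch_size)
print(f"Validation accuracy: {val_accuracy:.2%}")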
It’s time for us to save the model to the shared directory in TensorFlow’s SavedModel format.
!mkdir -p models/1
model.save("models/1")
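Before moving on, you can optionally confirm that the SavedModel written to the shared volume loads back correctly; a minimal check looks like this:

# Reload the SavedModel from the shared volume and confirm the architecture.
restored = tf.keras.models.load_model("models/1")
restored.summary()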
We now have a trained model that can classify the images of dogs and cats. In the next part of this series, we will deploy this model for inference. Stay tuned.