Configure a Kubeflow Jupyter Notebook Server for Data Preparation

The current installment of this Kubeflow tutorial series focuses on building a Notebook Server for the data scientists to convert a set of images into a dataset ready to be used by the ML engineers to build and train a model.
Jul 23rd, 2021 11:11am
This tutorial is the latest installment in an explanatory series on Kubeflow, Google’s popular open source machine learning platform for Kubernetes. Check back each Friday for future installments.

In the last part of this series, we created the shared PVCs to enable collaboration among data scientists, machine learning engineers, and the DevOps team. Before that, we also built CPU and GPU-based container images for launching Jupyter Notebook Servers in Kubeflow.

Next, we will leverage the storage volumes and container images to build a simple machine learning pipeline based on three independent Notebook Servers. Each environment focuses on a specific task of data preparation, training, and inference.

This series aims not to build an extremely complex neural network but to demonstrate how Kubeflow helps organizations with machine learning operations (MLOps).


We will start by uploading the ZIP file containing the images of cats and dogs from the popular Kaggle competition dataset. By the end of this tutorial, we will have two CSV files containing the paths to the training and validation datasets.

Make sure you have the shared PVCs created and visible in the Kubeflow dashboard. These PVCs will be mounted in the Notebook Server pods to write shared artifacts such as the dataset and models.

Let’s create a Notebook Server based on the Jupyter environment from the CPU-based container image created in the previous part of this tutorial. The custom container image has all the required Python modules to prepare and process the dataset.

From the Notebooks section of the navigation bar, click New Server.

Give a name of your choice to the Notebook Server and choose the custom image option to provide the name of the Docker image built for data preparation. Depending on the available resources, allocate CPUs and RAM. We don’t need a GPU for this environment.

Add a workspace volume to create the personal workspace for the Notebook Server; this becomes the user’s home directory. For the data volumes, we will mount the existing shared volume, datasets, created earlier. Processed data will be stored in the directory backed by this volume.

When you are ready, click the launch button to provision the Notebook Server. It may take a few minutes for the environment to become ready.

Behind the scenes, Kubeflow launches a Kubernetes StatefulSet based on the custom container image in the kubeflow-user-example-com namespace.

Let’s inspect the volumes section of the pod to verify that the volumes are mounted correctly.

Switch back to the Kubeflow dashboard and click Connect to access the Notebook Server. You should see the datasets directory in the environment.

Let’s get the raw dataset into the environment. Download the data file from Kaggle’s Dogs vs. Cats competition.

Create a directory called raw under the datasets directory, and upload the downloaded file into it. Since the file is over 500MB, the upload may take a while.

We are now ready to process the raw data and turn it into a dataset.

Download the Jupyter Notebook from the GitHub repository and upload it to the root directory of the Notebook Server.

Launch the Jupyter Notebook and run each cell to start processing the dataset.

We import the required Python modules already installed in the custom container image.
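The exact import list lives in the notebook on GitHub; as a sketch, a typical first cell for this kind of data-preparation work might look like the following (module selection assumed):

```python
# Typical imports for a data-preparation notebook; all of these
# are standard-library or preinstalled in the custom CPU image
# built earlier in this series (assumed).
import os
import random
import zipfile

import pandas as pd
```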

Next, we will unzip the raw dataset and inspect it.
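The unzip step can be done with Python's standard `zipfile` module. This is a sketch of one way to do it; the helper name `extract_archive` and the archive path in the comment are illustrative assumptions, not the notebook's actual code:

```python
import os
import zipfile

def extract_archive(zip_path, dest_dir):
    """Extract a ZIP archive into dest_dir and return the member names."""
    os.makedirs(dest_dir, exist_ok=True)
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(dest_dir)
        return zf.namelist()

# In the notebook this would look something like (file name assumed):
# extract_archive("datasets/raw/dogs-vs-cats.zip", "datasets/raw")
```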

Let’s inspect the dataset by accessing the first few images from each class — dogs and cats.
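Since the Kaggle archive encodes the class in the file name (`dog.<id>.jpg` and `cat.<id>.jpg`), a quick way to peek at each class is to list the first few matching files. The helper below is an illustrative sketch, not the notebook's exact cell:

```python
import os

def peek_class(data_dir, prefix, n=3):
    """Return the first n file names belonging to one class.

    The Kaggle archive names its training images 'dog.<id>.jpg' and
    'cat.<id>.jpg', so the class is encoded in the file-name prefix.
    """
    names = sorted(f for f in os.listdir(data_dir) if f.startswith(prefix))
    return names[:n]

# e.g. peek_class("datasets/raw/train", "dog.") and
#      peek_class("datasets/raw/train", "cat.")   # paths assumed
```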

We will now parse the files in the directory and generate a list for each category.
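One way to build those lists is to gather every image path, shuffle, and carve off a validation slice. This sketch assumes a 20% validation fraction and a fixed seed; the notebook's actual split logic may differ:

```python
import os
import random

def split_train_val(data_dir, val_fraction=0.2, seed=42):
    """Shuffle all .jpg files under data_dir and split their paths
    into train and validation lists (fraction and seed assumed)."""
    files = sorted(
        os.path.join(data_dir, f)
        for f in os.listdir(data_dir)
        if f.endswith(".jpg")
    )
    random.Random(seed).shuffle(files)  # deterministic shuffle
    n_val = int(len(files) * val_fraction)
    return files[n_val:], files[:n_val]  # train, val
```

Fixing the seed keeps the split reproducible across notebook runs, which matters when the training environment reads the same CSVs later.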

We now have two lists – train and val – containing the paths to the files from each category. Let’s use the Pandas library to turn them into CSV files.
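Writing the lists out with Pandas is a one-liner per file. As a sketch (the `write_split_csv` helper, column names, and output paths are assumptions, not the notebook's exact code):

```python
import os
import pandas as pd

def write_split_csv(paths, csv_path):
    """Write image paths plus their class label (taken from the
    'dog.'/'cat.' file-name prefix) to a CSV file."""
    labels = [os.path.basename(p).split(".")[0] for p in paths]
    df = pd.DataFrame({"image": paths, "label": labels})
    df.to_csv(csv_path, index=False)
    return df

# In the notebook, roughly (paths assumed):
# write_split_csv(train, "datasets/train.csv")
# write_split_csv(val, "datasets/val.csv")
```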

At this point, the datasets directory has two CSV files that act as the training and validation datasets for the model we build in the next part.

With the dataset in place, we are all set to launch the training environment to build and train a convolutional neural network to classify the images. Stay tuned for the next part, which focuses on training.
