
TensorFlow Model Deployment and Inferencing with Kubeflow

In this series on Kubeflow Jupyter Notebook Servers, we explore an end-to-end MLOps scenario: configuring the environment, performing data preparation, training, deployment, and inference.
This tutorial is the last installment in an explanatory series on Kubeflow, Google’s popular open source machine learning platform for Kubernetes.

In the previous part of this series, we trained a TensorFlow model to classify images of cats and dogs. The model is stored in a shared Kubernetes persistent volume claim (PVC), which another Kubeflow Notebook Server can access to test the model.

Remember, this series aims not to build an extremely complex neural network but to demonstrate how Kubeflow helps organizations with machine learning operations (MLOps).

Launch a new CPU-based Jupyter Notebook Server and upload the notebook available on GitHub. This notebook validates the model by passing it a few sample images.

Follow the same steps to launch the Notebook Server based on the janakiramm/infer image. Make sure you mount the shared PVC named models.

This notebook loads the TensorFlow model and performs the classification based on sample images.

The infer function accepts a file and returns the prediction.
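As a rough sketch of what that looks like, assuming the shared volume is mounted under /home/jovyan, the model was exported as dogs-vs-cats, and the network expects 128x128 inputs (the path, name, and input size are assumptions; the notebook on GitHub has the exact code):

import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing import image

# Assumption: the trained model was saved to the shared PVC mounted at /home/jovyan/models
model = tf.keras.models.load_model("/home/jovyan/models/dogs-vs-cats")

def infer(file_path):
    # Resize to the input shape assumed to have been used during training
    img = image.load_img(file_path, target_size=(128, 128))
    arr = image.img_to_array(img) / 255.0   # scale pixel values to [0, 1]
    arr = np.expand_dims(arr, axis=0)        # add a batch dimension
    score = model.predict(arr)[0][0]         # single sigmoid output for the binary classifier
    return "dog" if score > 0.5 else "cat"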

Let’s now deploy the model in TensorFlow Serving running in Kubernetes. Start by cloning the GitHub repository that has everything we need to run the inference code.


Navigate to the inference directory to find the YAML files and other related assets.

Let’s deploy TensorFlow Serving in the kubeflow-user-example-com namespace and expose it as a NodePort service. It’s the same namespace where the Jupyter Notebook Servers are running.


Below are YAML specifications for the TF Serving deployment and service.
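A minimal sketch of what those two manifests look like, using the stock tensorflow/serving image and the models PVC and namespace mentioned above; the resource names, labels, ports, and model path are assumptions, and the files in the repository are authoritative.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: tf-serving
  namespace: kubeflow-user-example-com
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tf-serving
  template:
    metadata:
      labels:
        app: tf-serving
    spec:
      containers:
      - name: tf-serving
        image: tensorflow/serving:2.5.1
        env:
        # Assumes the SavedModel was exported to /models/dogs-vs-cats/1 on the shared PVC
        - name: MODEL_NAME
          value: dogs-vs-cats
        - name: MODEL_BASE_PATH
          value: /models
        ports:
        - containerPort: 8500   # gRPC
        - containerPort: 8501   # REST
        volumeMounts:
        - name: model-store
          mountPath: /models
      volumes:
      - name: model-store
        persistentVolumeClaim:
          claimName: models
---
apiVersion: v1
kind: Service
metadata:
  name: tf-serving
  namespace: kubeflow-user-example-com
spec:
  type: NodePort
  selector:
    app: tf-serving
  ports:
  - name: http
    port: 8501
    targetPort: 8501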



We are essentially mounting the same PVC used by the Jupyter Notebook Servers to serve the model.
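Assuming the manifests are saved as deployment.yaml and service.yaml in the inference directory (the file names are an assumption), they can be applied with kubectl:

kubectl apply -f deployment.yaml -f service.yaml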

The TF Serving endpoint is available as a NodePort on the Kubeflow cluster.
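To find the node port assigned to the service (named tf-serving in the sketch above), query it with kubectl:

kubectl get svc tf-serving -n kubeflow-user-example-com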

Since Kubeflow relies on Istio for authorizing requests, we need to apply an authorization policy to allow requests to TF Serving.
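A sketch of such a policy, scoped to the tf-serving pods from the deployment sketch above; the policy name and the allow-all rule are assumptions, and the policy shipped in the repository may be narrower.

apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: tf-serving-allow
  namespace: kubeflow-user-example-com
spec:
  selector:
    matchLabels:
      app: tf-serving
  action: ALLOW
  rules:
  # An empty rule matches every request, so all traffic to the selected pods is allowed
  - {}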



It’s time to invoke the endpoint from a Python client. Let’s create a virtual environment and install the required modules.
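For example (requests, pillow, and numpy are assumptions based on the client sketch that follows):

python3 -m venv .venv
source .venv/bin/activate
pip install requests pillow numpy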



Below is the Python client code we use for inference.
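A minimal sketch of such a client, assuming TF Serving’s REST API on port 8501 and the model name dogs-vs-cats from the deployment sketch above; the actual client in the repository may differ.

import sys
import json
import numpy as np
import requests
from PIL import Image

def predict(host, image_path):
    # Preprocess the image the same way as during training (the 128x128 size is an assumption)
    img = Image.open(image_path).convert("RGB").resize((128, 128))
    arr = np.asarray(img, dtype=np.float32) / 255.0
    payload = json.dumps({"instances": [arr.tolist()]})

    # TF Serving REST API: POST /v1/models/<model_name>:predict
    url = f"http://{host}/v1/models/dogs-vs-cats:predict"
    response = requests.post(url, data=payload)
    response.raise_for_status()
    score = response.json()["predictions"][0][0]
    return "dog" if score > 0.5 else "cat"

if __name__ == "__main__":
    host, image_path = sys.argv[1], sys.argv[2]
    print(predict(host, image_path))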


Let’s run the Python client by passing the TF Serving URL and a sample image. When sending sample1.jpg, the prediction is a dog; when sending sample2.jpg, it is a cat.
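Assuming the client above is saved as infer_client.py (a hypothetical file name), the invocations look like this:

python infer_client.py HOST sample1.jpg
python infer_client.py HOST sample2.jpg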



Replace HOST with the appropriate IP and port based on your cluster and the TF Serving NodePort service.



As you can see, the classification is accurate for the images that we sent.

This concludes the series on Kubeflow Jupyter Notebook Servers, where we explored an end-to-end MLOps scenario: configuring the environment, performing data preparation, training, deployment, and inference.
