
Use Kubernetes to Speed Machine Learning Development

Oct 12th, 2018 8:25am by Justin Brandenburg
Justin Brandenburg is a Data Scientist in the MapR Professional Services group. Justin has experience in a number of data areas, ranging from counter-narcotics to cyber intrusion analysis. In past projects, he has used machine learning, econometrics, graph analytics and agent-based modeling to meet customer needs. He has an undergraduate degree in Economics from Virginia Tech, a Master's in Economics from Johns Hopkins University and a Master's in Computational Social Science from George Mason University.

As industries shift to a microservices approach of deploying applications in containers, data scientists stand to benefit. Data scientists use specific frameworks and operating systems that often conflict with the requirements of a production system, which has led to many clashes between IT and R&D departments. IT is not going to change the OS to accommodate a model that depends on a framework that won't run on RHEL 7.2.

Containers allow a data scientist to construct self-contained environments that package up the necessary dependencies and logic. This also gives the data scientist a seat at the table as discussions move from DevOps to DataOps. As data arrives and is parsed for value, containers that perform specific tasks can be staged along the way, creating a machine learning workflow on new incoming data that was not possible just a few years ago.

Data scientists can deploy multiple containers to account for adjustments in the data or variations in their models. This lets an organization run models in parallel, evaluate them and choose the one it finds most valuable, based on how each performs on new real-time data rather than on how well it was optimized against historical data.

For this example, I installed Docker and Kubernetes using kubeadm on AWS EC2 instances to create a two-node Kubernetes cluster running CentOS 7.5:
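The exact bootstrap steps depend on your network setup; a minimal sketch, assuming Docker is already installed on both instances and the security groups allow the Kubernetes ports, looks roughly like this (the master IP, token and hash are placeholders printed by kubeadm init):

```bash
# On the master node: initialize the control plane
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Make kubectl usable for the current user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Install a pod network add-on (Flannel shown as one option)
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# On the worker node: join the cluster with the values kubeadm init printed
sudo kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```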


Data scientists typically develop, train, test and optimize their models in an R&D environment that can be configured to meet their needs. Here is a TensorFlow model I wrote in a sandbox that applies a recurrent neural network to simulated time series data.
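A minimal sketch along those lines, using the TensorFlow 1.x API that was current at the time, trains a basic RNN to predict the next step of a simulated sine wave (all names and hyperparameters here are illustrative, not the original listing):

```python
import numpy as np
import tensorflow as tf  # TensorFlow 1.x API

n_steps, n_neurons = 20, 100

# Simulate a noisy sine wave
t = np.linspace(0, 30, 10000)
series = np.sin(t) + 0.1 * np.random.randn(len(t))

def next_batch(batch_size):
    # Random windows of the series; the target is the window shifted one step
    starts = np.random.randint(0, len(series) - n_steps - 1, batch_size)
    X = np.array([series[s:s + n_steps] for s in starts])
    y = np.array([series[s + 1:s + n_steps + 1] for s in starts])
    return X.reshape(-1, n_steps, 1), y.reshape(-1, n_steps, 1)

X = tf.placeholder(tf.float32, [None, n_steps, 1])
y = tf.placeholder(tf.float32, [None, n_steps, 1])

# Basic RNN cell with its per-step output projected down to a single value
cell = tf.contrib.rnn.OutputProjectionWrapper(
    tf.nn.rnn_cell.BasicRNNCell(num_units=n_neurons), output_size=1)
outputs, _ = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32)

loss = tf.reduce_mean(tf.square(outputs - y))  # mean squared error
train_op = tf.train.AdamOptimizer(learning_rate=0.001).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(1000):
        X_batch, y_batch = next_batch(50)
        sess.run(train_op, feed_dict={X: X_batch, y: y_batch})
```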


I built this in the R&D environment, but now I want to move it over to the production environment. I will use Docker to build an image that I can then put my model into and deploy using Kubernetes.

First, I will create a Dockerfile that builds an image on an Ubuntu base and installs the dependencies and packages my model needs to function.
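A Dockerfile along these lines would do it (the base image and package list here are illustrative, not the article's exact file):

```dockerfile
FROM ubuntu:16.04

# Python and pip
RUN apt-get update && apt-get install -y \
        python-pip python-dev \
    && rm -rf /var/lib/apt/lists/*

# Libraries the model depends on, including the TensorFlow Serving client API
RUN pip install numpy pandas tensorflow tensorflow-serving-api

WORKDIR /app
```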


I am going to use the TensorFlow Serving API to execute and save my model within a Docker container. Next, build the image and then run it:
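Something like the following, reusing the justin-tf_serving name that the rest of the walkthrough refers to:

```bash
# Build the image from the Dockerfile in the current directory
docker build -t justin-tf_serving .

# Start an interactive container from it
docker run -it --name justin-tf_serving justin-tf_serving /bin/bash
```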


Copy the model in and then run the Python script. The model parameters will be saved in the /tmp/ folder within the container. To exit the container while keeping it running in the background, press Ctrl+P then Ctrl+Q. I need to persist the changes I made to the justin-tf_serving container so that my model data remains permanently. Retrieve the container ID and commit the changes into a new image called tf_kube1.
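In commands, that sequence looks roughly like this (rnn_model.py is a placeholder name for the model script):

```bash
# From the host: copy the model script into the running container
docker cp rnn_model.py justin-tf_serving:/app/rnn_model.py

# Inside the container: train the model; it saves its parameters under /tmp/
python /app/rnn_model.py
# ... detach with Ctrl+P, Ctrl+Q ...

# Back on the host: find the container ID and snapshot it as a new image
docker ps
docker commit <container-id> tf_kube1
```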


Kubernetes allows you to pull images from a private or local image hub, but for the purpose of this example, we will push and then pull our new image from Docker Hub. Log in with your username and password.
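Roughly (substitute your own Docker Hub username):

```bash
docker login                                    # prompts for Docker Hub credentials
docker tag tf_kube1 <dockerhub-user>/tf_kube1   # tag the image under your namespace
docker push <dockerhub-user>/tf_kube1
```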


Once our image is on Docker Hub, we need to specify how we want Kubernetes to use it on our cluster. We do this via a .yaml file, setting up a deployment of containers that will also run as a service.
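A sketch of such a file, with illustrative names, ports and replica counts (port 8500 assumes the TensorFlow Serving gRPC default):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tf-kube
spec:
  replicas: 2
  selector:
    matchLabels:
      app: tf-kube
  template:
    metadata:
      labels:
        app: tf-kube
    spec:
      containers:
      - name: tf-kube
        image: <dockerhub-user>/tf_kube1
        ports:
        - containerPort: 8500
---
apiVersion: v1
kind: Service
metadata:
  name: tf-kube-service
spec:
  type: NodePort
  selector:
    app: tf-kube
  ports:
  - port: 8500
    targetPort: 8500
```

Applying the file with kubectl apply -f creates both the deployment and the service at once.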


Kubernetes is now running the Docker image that contains the trained TensorFlow model we created. We can push new data through the model, and it can evaluate that data and give us results. We could also have created a second image with adjusted model hyperparameters, so that our pods run Model A and Model B side by side to compare results.
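Side-by-side evaluation would just mean a second deployment pointing at the second image, for example (the file names here are hypothetical):

```bash
kubectl apply -f model_a_deployment.yaml
kubectl apply -f model_b_deployment.yaml

kubectl get pods -o wide   # pods for both models running in parallel
```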

Our models have everything they need to run inside their containers, and the containers are configured to run in the production environment. Kubernetes lets us specify resource requests and limits to make efficient use of the compute allocated to our models, and it will tell us if a container is not performing as it should.
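Those constraints go on the container spec in the deployment above; a minimal sketch, with illustrative values:

```yaml
        resources:
          requests:
            cpu: "500m"     # half a CPU core reserved for the model
            memory: "1Gi"
          limits:
            cpu: "1"        # hard ceiling of one core
            memory: "2Gi"
```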

As recently as two years ago, once I had performed my analysis and gained insight from data, I was never able to take the next step and deploy that insight. I would write a report, send an email or present some slides, but my value was limited to what decision makers chose to do with it. Transferring my workflow logic and model into a production-ready application required the approval of many people and the dedication of a software developer. In a dynamic industry, that lag could allow the data to change, which would make the model results less meaningful.

With developments in containers and Kubernetes, this doesn’t need to be the case any longer. The value of data science is determined by the insight it gives into data. This value can only increase as the ability to solve challenges in real time becomes more available.
