Deploying Scalable Machine Learning Models for Long-Term Sustainability
Machine learning (ML) models are proliferating across enterprises, but deploying them to cloud or edge computing environments, across a plethora of tools and frameworks, is proving to be a significant challenge at scale. According to Algorithmia's "2020 State of Enterprise ML" report, 40% of companies said it takes more than a month to deploy an ML model into production.
In this episode of The New Stack Makers podcast, recorded at AWS re:Invent, Luis Ceze, co-founder and CEO of OctoML, talks about how to optimize and automate the deployment of machine learning models on any hardware, cloud or edge device.
Alex Williams, founder and publisher of The New Stack, hosted this podcast.
From design and management to optimization, the work of quickly delivering machine learning models into production, while saving costs and sustaining the models over time, is a drag on developer productivity. "Machine learning models are very compute-intensive and once you're ready to deploy a model, it requires you to understand the hardware and really tune in to optimize it to get it ready for deployments," said Ceze.
OctoML is built on Apache Tensor Virtual Machine (TVM), an open source machine learning compiler framework created by Ceze and his co-founders. The tool enables developers to automatically optimize machine learning models and deploy them to run at scale on any hardware. "What Apache TVM does is create a set of common primitives across all sorts of different hardware… Then it uses machine learning internally to produce efficient machine learning code. The reason this is important is that the work done to get a model ready for deployment involves a lot of manual software engineering," Ceze said.
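The "common primitives" idea can be illustrated with a minimal Python sketch. This is not TVM's actual API; the registry, operator names, and the emitted pseudo-kernels below are all hypothetical, standing in for a compiler that defines an operator once and lowers it differently per hardware target.

```python
# Hypothetical sketch (not TVM's real API): one common primitive,
# lowered to different code depending on the hardware target.
from typing import Callable, Dict, Tuple

# Registry mapping (operator, target) -> code generator.
CODEGEN: Dict[Tuple[str, str], Callable[[], str]] = {}

def register(op: str, target: str):
    """Decorator that registers a backend code generator for an op."""
    def wrap(fn):
        CODEGEN[(op, target)] = fn
        return fn
    return wrap

@register("matmul", "cpu")
def matmul_cpu() -> str:
    # The kind of cache-blocked loop nest a CPU backend might emit.
    return "for i: for jb: for k: C[i][j] += A[i][k] * B[k][j]"

@register("matmul", "gpu")
def matmul_gpu() -> str:
    # The kind of thread-parallel kernel a GPU backend might emit.
    return "parallel (i, j): C[i][j] = dot(A[i, :], B[:, j])"

def lower(op: str, target: str) -> str:
    """Lower one common primitive to code for the given target."""
    return CODEGEN[(op, target)]()

print(lower("matmul", "cpu"))
print(lower("matmul", "gpu"))
```

The front end only ever sees `matmul`; each backend supplies its own lowering, which is what lets one model run "on any hardware" without per-device manual porting.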
By using machine learning, Apache TVM helps companies navigate the choices involved in deploying machine learning models, Ceze said. "Once you get a machine learning model, there's billions of ways in which you can actually produce the code to represent your model and run it on the target hardware." Ceze asked, "How do you pick the fastest one when you don't have time to run them all? And there are billions of them, it's not practical."
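What an auto-tuner does at its core can be sketched with two hand-written variants of the same operator. The sketch below (names and candidates are illustrative, not TVM code) times a small sample of candidate implementations, here two loop orders for a matrix multiply, and keeps the fastest, rather than enumerating every possible variant.

```python
# Illustrative auto-tuning sketch: benchmark candidate implementations
# of the same operator and select the fastest. Real systems like TVM
# search far larger spaces, guided by a learned cost model.
import timeit

N = 64
A = [[1.0] * N for _ in range(N)]
B = [[1.0] * N for _ in range(N)]

def matmul_ijk():
    # Textbook loop order.
    C = [[0.0] * N for _ in range(N)]
    for i in range(N):
        for j in range(N):
            s = 0.0
            for k in range(N):
                s += A[i][k] * B[k][j]
            C[i][j] = s
    return C

def matmul_ikj():
    # Same math, different loop order: walks B row-by-row for locality.
    C = [[0.0] * N for _ in range(N)]
    for i in range(N):
        Ai, Ci = A[i], C[i]
        for k in range(N):
            a, Bk = Ai[k], B[k]
            for j in range(N):
                Ci[j] += a * Bk[j]
    return C

candidates = {"ijk": matmul_ijk, "ikj": matmul_ikj}

def tune(candidates, repeats=3):
    """Time each candidate and return the name of the fastest one."""
    timings = {name: min(timeit.repeat(fn, number=1, repeat=repeats))
               for name, fn in candidates.items()}
    return min(timings, key=timings.get)

print("fastest schedule:", tune(candidates))
```

With billions of such variants in the real search space, exhaustive timing is impossible, which is why TVM uses a learned cost model to predict which candidates are worth measuring at all.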
With a complex toolchain and a fragmented ecosystem, putting machine learning models into production can seem overwhelming, because "models get updated frequently because the data changes or better ideas can make the model more accurate and efficient for whatever it needs to do so there's a need to train the model again," said Ceze. "If you don't have automation, every time there's a change in your model, you have to do manual work to get it ready for the delivery cycle."
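The automation Ceze describes can be reduced to a simple pattern: detect that the model artifact actually changed, and only then re-run the optimization and packaging steps. A minimal sketch, with a hypothetical `optimize` callback standing in for the whole re-tune-and-compile pipeline:

```python
# Hypothetical sketch: re-optimize a model only when its artifact
# changes, instead of doing manual work on every delivery cycle.
import hashlib

def fingerprint(model_bytes: bytes) -> str:
    """Content hash used to detect that a model artifact changed."""
    return hashlib.sha256(model_bytes).hexdigest()

def deploy_if_changed(model_bytes: bytes, last_fp: str, optimize) -> str:
    """Run the optimize step (tuning + compilation) only on a change."""
    fp = fingerprint(model_bytes)
    if fp != last_fp:
        optimize(model_bytes)
    return fp

runs = []
fp1 = deploy_if_changed(b"model-v1", "", runs.append)   # new -> optimize
fp2 = deploy_if_changed(b"model-v1", fp1, runs.append)  # unchanged -> skip
fp3 = deploy_if_changed(b"model-v2", fp2, runs.append)  # retrained -> optimize
print(len(runs))  # optimize ran twice
```

Hooked into a CI pipeline, this kind of trigger is what turns each retraining into an automatic redeployment rather than a round of manual engineering.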