An estimated 62 percent of organizations were using artificial intelligence (AI) in 2018, a trend that can create big advantages for early adopters. A 2017 McKinsey study found that the healthcare, financial services and professional services industries reported a 3 to 15 percent higher profit margin after adopting automation, and Accenture predicts that banks could boost their revenue by 34 percent by 2020.
Unsurprisingly, tech giants have hopped on the trend, spending an estimated $30 billion every year on AI and machine learning (ML) in an attempt to reap the benefits of modeling. With over 2.5 million open jobs for data professionals projected in the U.S. alone and not enough candidates to fill them, even leading companies will find it tough to compete with these giants for AI talent.
What’s the answer for these leading companies? It starts with productivity. To attract, retain and amplify the talent they hire, these companies should invest in best-in-class tools that automate key parts of the AI process. This investment kills two birds with one stone: it helps these teams secure the most talented team possible (every researcher wants to work with the best tools) and get the most out of that team (by accelerating the model development process and amplifying its impact). This is a no-regret step that the best teams are taking to improve their chances in an extremely competitive environment, and they are already starting to reap the benefits.
With a variety of solutions available today to help accelerate and amplify your AI efforts, it is important to start with the right technology for your particular team. Teams with limited in-house data science expertise may want to start by outsourcing model development itself to a company that produces models for their business teams. Teams that want to develop their own models, however, should focus instead on solutions that automate key tasks within the process rather than the process itself. For the latter, it is critical to start by automating the tasks that do not benefit from domain expertise. This frees researchers to dedicate their effort to tasks that benefit from their expertise while outsourcing those that don’t.
In this sense, the first step that should be automated is training and tuning. On the one hand, it is time-consuming and expensive to perform well. On the other, it does not benefit from domain expertise; that is, it requires the same solution regardless of the type of model or problem being addressed. If implemented according to best practices, automated training and tuning, in fact, becomes a process that optimizes model development, sustains performance in production, and empowers teams to scale the impact of their modeling much more quickly than otherwise achievable.
The first step to set your artificial intelligence modeling process up for success is designing a robust, repeatable and sustainable development process. This is even more important for deep learning (DL) and machine learning modeling. In this process, researchers need to collect and prepare data, engineer and analyze features, select an algorithm and framework, and train the model to make accurate predictions. This is a complicated and time-consuming process.
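As a concrete, if simplified, sketch, those steps might look like the following in scikit-learn. The dataset and algorithm choices here are illustrative assumptions, not a recommendation:

```python
# A minimal sketch of the development steps above, assuming a small
# tabular classification task; dataset and model are illustrative only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Collect and prepare data
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Engineer features and select an algorithm, chained as one pipeline
model = Pipeline([
    ("scale", StandardScaler()),                      # feature preparation
    ("clf", RandomForestClassifier(random_state=0)),  # algorithm choice
])

# Train the model and check that it makes accurate predictions
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.3f}")
```

Even in this toy form, each step involves choices that a researcher would normally iterate on by hand.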
Automated tuning — or optimization — empowers teams to transform this step-by-step process into a streamlined prototyping process. Advanced approaches to automated tuning empower teams to optimize data transformation, architecture, and hyperparameter configuration search steps, therefore having a significant impact on the entire model development process. This approach to optimization can be the difference between a model that is discarded and one that delivers millions in annual product-line benefits. It empowers teams to rapidly prototype their modeling efforts with confidence and is designed to configure each model to perform best.
And, of equal importance to most teams, it automates a problem that is nearly impossible for a human to solve — imagine weighing millions, and, in many cases, billions, of possible permutations of configurations in a high dimensional space for every application. This saves expert time and accelerates the entire model development process.
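To make the search-space problem concrete, here is a minimal sketch of automated tuning using randomized search, which stands in for the more advanced optimizers described above; the dataset, model and search space are illustrative assumptions:

```python
# A minimal sketch of automated tuning; randomized search stands in here
# for more advanced optimizers, and the search space is illustrative.
from scipy.stats import randint
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = load_digits(return_X_y=True)

# Even this small space has thousands of possible configurations,
# far too many to weigh by hand.
space = {
    "n_estimators": randint(20, 200),
    "max_depth": randint(3, 15),
    "min_samples_split": randint(2, 10),
}

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions=space,
    n_iter=10,          # sample 10 configurations instead of all of them
    cv=3,
    random_state=0,
)
search.fit(X, y)
print("best configuration:", search.best_params_)
print(f"best cross-validated accuracy: {search.best_score_:.3f}")
```

The point is that the search itself, unlike feature engineering, needs no knowledge of the problem domain: the same procedure applies to any model.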
Once a model has been developed, trained and tuned, it’s ready to be deployed in production and start having an impact on the business. Given that fewer than 1 in 5 models typically make it to production, getting the most performance out of each of these models is critical for maximizing business impact. Yet model performance often drifts over time as the data changes and, with it, the effectiveness of a given model configuration.
Automated tuning makes it simple to re-tune these models in production. By eliminating the expert effort required to tune, it creates an incentive to tune as frequently as possible. Alongside regular retraining, this constant re-tuning keeps models performing as well as possible in production.
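A hypothetical sketch of such a retrain-and-retune loop is below; the drift is simulated, the 0.9 accuracy threshold is an illustrative choice, and none of the names come from a specific product:

```python
# A hypothetical sketch of drift-triggered re-tuning. The data "drift"
# is simulated and the threshold is an illustrative choice.
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)

def make_batch(shift):
    """Simulate production data whose distribution drifts over time."""
    X = rng.normal(loc=shift, scale=1.0, size=(500, 5))
    y = (X.sum(axis=1) > shift * 5).astype(int)
    return X, y

def tune(X, y):
    """Automated tuning step: cheap enough to rerun on every drift alarm."""
    grid = GridSearchCV(
        SGDClassifier(random_state=0),
        {"alpha": [1e-4, 1e-3, 1e-2]},
        cv=3,
    )
    grid.fit(X, y)
    return grid.best_estimator_

model = tune(*make_batch(shift=0.0))

for month, shift in enumerate([0.0, 0.5, 1.0]):
    X, y = make_batch(shift)
    score = model.score(X, y)
    if score < 0.9:             # drift alarm: accuracy fell below threshold
        model = tune(X, y)      # re-tune on fresh data, no hand-tuning
        score = model.score(X, y)
    print(f"month {month}: accuracy {score:.2f}")
```

Because the tuning step is fully automated, re-running it on every alarm costs compute rather than expert time.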
As companies accelerate model development and begin to deploy more models in production, the volume, variety, and complexity of the models their teams produce begin to increase, in some cases exponentially. This presents a two-part scale problem. First, teams need each layer of their stack to scale with the volume of these models. Second, they need each layer to be flexible enough to scale with the variety and complexity of different model types that are being developed. This latter problem is particularly painful in tuning, where the “no free lunch” theorem means that teams often build and maintain multiple optimizers to tune different types of models, an expensive and time-consuming proposition.
Automating tuning, however, solves this two-part scale problem. To be effective, any automated approach to tuning needs an ensemble of algorithms capable of efficiently tuning the full variety of models a customer has, and it needs to be built to reliably deliver results as the volume of models increases. A brute-force approach like grid search can tune a wide variety of techniques, but its cost grows exponentially with the number of hyperparameters, so it fails to scale beyond even simple models. Finding an approach that can handle both problems is key to success. In this sense, advanced approaches to tuning and optimization not only set teams up to go from zero to one in model development, they set them up to go from one to one hundred as well.
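A few lines of arithmetic illustrate why exhaustive grid search collapses while sampling-based approaches do not; the five-values-per-parameter figure and the 60-evaluation budget are illustrative assumptions:

```python
# Why brute-force grid search fails to scale: with a fixed number of
# candidate values per hyperparameter, the number of configurations
# grows exponentially with the number of hyperparameters.
values_per_param = 5  # illustrative choice

grid_sizes = {n: values_per_param ** n for n in (2, 4, 6, 8)}
for n, size in grid_sizes.items():
    print(f"{n} hyperparameters -> {size:,} grid configurations")

# A sampling-based or adaptive optimizer fixes the budget instead:
# 60 evaluations cost the same in 8 dimensions as in 2.
fixed_budget = 60
print(f"fixed sampling budget: {fixed_budget} evaluations at any dimension")
```

At eight hyperparameters the grid already demands hundreds of thousands of training runs, which is why scalable tuning relies on smarter search rather than exhaustion.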
Teams who are waging the war for AI talent should focus on taking steps today that will impact their AI performance tomorrow. With limited ability to dramatically impact hiring, these teams should turn their attention to building the right stack of tools for their experts. By starting with tools that automate tasks that do not benefit from expertise, they will get the most out of their technology investments and their experts. Investing in technology that automates tuning is one of the best examples of the type of decision that can dramatically improve a team’s productivity without requiring that they break the bank on AI talent.
These types of decisions will separate teams that use AI to accelerate their companies’ growth from teams who fall behind as a result.
Feature image via Pexels.
The New Stack is a wholly owned subsidiary of Insight Partners. TNS owner Insight Partners is an investor in the following companies: Shelf.