
Machine Learning Challenges Now More about Engineering than Research

30 Jul 2019 5:00pm

KubeCon + CloudNativeCon sponsored this podcast.

What the Paradigmatic Shift in Machine Learning Means for DevOps

Also available on Apple Podcasts, Google Podcasts, Overcast, PlayerFM, Pocket Casts, Spotify, Stitcher, TuneIn

Artificial intelligence (AI) and machine learning (ML) have been under study for years. What is new is the tremendous impact and overlap AI and ML now have on application development. Among the benefits, we will continue to see applications built on massive data sets that use human brain-like neural networks to perform, in minutes or seconds, tasks that previously required thousands of human hours. On a more practical level, AI is used to automate some of the more rudimentary tasks in software production pipelines, freeing up time for developers to focus on more creative and intellectually rewarding work.

During this episode of The New Stack Makers podcast, Dr. Han Xiao, engineering lead at Tencent AI Lab, and Alejandro Saucedo, chief scientist at the Institute for Ethical AI and Machine Learning, discussed AI’s and ML’s power for today’s at-scale application development and deployments.

Today, at-scale application development increasingly requires developers to feed in huge amounts of data “fast enough in order to train the model… to get the correct representation,” Xiao said. “After the paradigm shift, deep learning and AI have become more of an engineering problem, rather than a research [challenge].”

Among the major innovations taking place in the AI and ML space, there has been a focus on reliability, scalability and solving the problems associated with complex and far-reaching data sets, Xiao said. “How can I adapt this AI algorithm from one domain to multiple domains?” Xiao said. “Or how can I scale this [AI] in order to serve billions of customers?”

AI- and ML-assisted application development has also raised a number of ethical concerns. As the chief scientist at the Institute for Ethical AI and Machine Learning, Saucedo noted the challenges that “domain experts” face in this new world. These are the people, Saucedo said, who “would normally be the ones that would know… the regulations, limitations or ethical frameworks that need to be in place.”

The concern now largely lies with the ethical frameworks associated with the application models released today. “The challenge here is that now we’re trying to standardize the way that machine learning really operates. And what we do is primarily focus on standardizing these concepts for higher-level frameworks all the way to the ethical frameworks, and then trying to break it down into standards and processes and independent practices to cover the whole spectrum,” Saucedo said.

For more insight into AI innovation, engineering and open source, see Xiao’s recent post.

Alex Williams, founder and editor in chief of The New Stack, hosted this podcast, which was recorded at KubeCon + CloudNativeCon China.

In this Edition:

2:33: Early research
12:31: Current research
13:21: How cases have evolved
23:05: The intersection between data science and application architecture development
25:48: How AI and ML move forward
28:36: Kubernetes