Listen to all TNS podcasts on Simplecast.
For this week’s episode of The New Stack Analysts podcast, TNS editorial director Libby Clark and TNS London correspondent Jennifer Riggins sat down (via Zoom) with futurist Martin Ford, author of “Architects of Intelligence: The Truth About AI from the People Building It,” and Ofer Hermoni, chair of the technical advisory council for The Linux Foundation’s Deep Learning Foundation projects, to talk about the current state of AI, how it will scale, and its consequences.
The last year alone has seen major advancements in deep learning, machine learning, and neural networks — the layered models that allow machine learning algorithms to process complex data inputs. However, as Ford points out in this podcast, we are only beginning to grapple with the ethical implications of AI, including reduced privacy, potential weaponization, and the unconscious bias embedded in much of the data feeding these models.
Hermoni believes that the open source community — and the still very complicated AI landscape — will drive the democratization of both future AI technology and the ethical boundaries it will require. He talks about how to leverage open governance and common standards to make this happen.
Anticipating a future in which artificial intelligence becomes an omnipresent commodity like electricity, Ford and Hermoni both discuss what it will take to scale AI across society, across business, and across the stack.
Whether it helps diagnose cancer faster or takes away millions of blue-collar jobs, the future of AI is certainly affecting our present day, which makes for a consistently interesting discussion.
In this Edition:
1:28: What do you think is the state of artificial intelligence today from a technical and ethical standpoint?
9:26: What are some of the other concerns that technologists who are building AI into software platforms are going to need to take into consideration as this technology is created on a large scale?
12:28: Is the government involved in any way in discussions with The Linux Foundation’s open source projects, or are you self-regulating?
14:10: How does the open source community play into developing these guidelines and standards for AI? And what role should regulation play at the level of the stack?
15:41: Is AI as a technology ready to scale? Is it built to scale? What needs to happen for that to occur in everyday applications?
19:56: Do you think the more old-fashioned industries are going to start to see the value in AI, even in the next decade, and implement machine learning?
Feature image via Pixabay.