Artificial Intelligence: It Takes a Human Touch

Companies looking to integrate artificial intelligence into their businesses shouldn’t be looking for the “perfect” programmer because it takes more than algorithm expertise to effectively use the technology, according to Cassie Kozyrkov, chief decision scientist for Google Cloud.
“AI is a team sport,” Kozyrkov said during a talk at #BCTechSummit in Vancouver.
She likened it to knowing how to run a big restaurant kitchen versus knowing how to wire a microwave.
Moderator Dr. Bethany Edmunds, associate dean at British Columbia Institute of Technology, prompted Kozyrkov’s comments by asking how the industry can produce the talent this trend toward AI will require.
In the kitchen analogy, Kozyrkov pointed out that a successful restaurant needs someone who understands customers' tastes, someone who comes up with the recipes, someone who shops for the best ingredients, someone who knows how to cook, and people handling myriad other tasks.
Similarly, to look solely to programmers to make AI work — well, it just won’t work, she said. In addition, it takes reliability engineers, statisticians, philosophers, ethicists and others.
Suzanne Gildert, CEO of robotics firm Sanctuary AI, added her own list of necessary specialists: roboticists, mechanical and electrical engineers, designers, artists, neuroscientists and psychologists.
She referred to Nick Bostrom's "paperclip maximizer" thought experiment in his book "Superintelligence" as an example of how pure programming can go wrong. In the story, a system told to amass as many paper clips as possible goes to ridiculous lengths to keep producing paper clips at the cost of everything else.
Part of the problem is the lack of a standard definition of artificial intelligence. Kozyrkov said her approach to teaching a system to identify pictures of cats, say, is to provide data labeled "cats" and "not cats" and let the system figure it out. The more data, the better.
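Kozyrkov's labeled-data recipe is, in essence, supervised learning. Here is a minimal sketch of the idea in Python; the feature values are invented stand-ins for real image features, and the choice of scikit-learn is ours for illustration, not something she named:

```python
# A minimal sketch of the labeled-data approach: supply examples marked
# "cat" and "not cat" and let the model figure out the pattern.
from sklearn.linear_model import LogisticRegression

# Hypothetical image features (e.g., whisker-ness, ear shape, fur texture)
X = [
    [0.9, 0.8, 0.7],   # cat
    [0.8, 0.9, 0.6],   # cat
    [0.1, 0.2, 0.3],   # not cat
    [0.2, 0.1, 0.4],   # not cat
]
y = ["cat", "cat", "not cat", "not cat"]

model = LogisticRegression()
model.fit(X, y)  # the system "figures it out" from the labels

print(model.predict([[0.85, 0.75, 0.65]]))  # -> ['cat']
```

More labeled rows in X and y is exactly what "the more data, the better" means in practice.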
Gildert maintains we don't really know what intelligence is, but her approach is to train machines to think the way humans do, which requires a multidisciplinary effort.
You have to think about what you want the system to do. Does the system have the proper goals and rewards in place? And until systems develop the ability to question their goals, to ask whether going to ridiculous lengths to make more and more paper clips really makes sense, we won't have superintelligent systems, she said.
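A deliberately naive sketch makes the point concrete; everything here, from the function name to the resource count, is invented for illustration rather than taken from the talk:

```python
# A toy version of Bostrom's paperclip maximizer: the agent's only
# reward is its clip count, so it converts every available resource
# into clips and never questions the goal itself.

def paperclip_maximizer(resources: int) -> int:
    clips = 0
    while resources > 0:   # no stopping condition except exhaustion
        resources -= 1     # consume a unit of raw material...
        clips += 1         # ...and turn it into one more clip
        # Missing: any check like "is making more clips still sensible?"
    return clips

print(paperclip_maximizer(resources=1_000_000))  # -> 1000000
```

The bug is not in the loop; it is in the reward. The system does exactly what it was told, which is Gildert's point about goals that cannot be questioned.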
Planned or not, humans will put their own biases into the systems, pointed out AJung Moon, who has built a business focused on the ethics of smart systems.
She was interviewed in a session called “Should We Fear the Robots?” Her answer: Yes and no.
CEO and technical analyst at Generation R Consulting and director of the Open Roboethics Institute, she has consulted with leaders in Canada's Parliament about the ethical issues related to AI. Multiple nations are considering regulation in light of the Cambridge Analytica revelations about less-than-transparent data practices.
Privacy and transparency are among the issues that have to be carefully considered, she said.
Generation R worked with Technical Safety BC, the provincial safety authority, on an ethics roadmap for AI systems.
“Evidence suggests that it is much harder to revert negative effects of predictive models that are already deployed in a community than to prevent undesirable effects during the design and development of the technology,” it states, pointing to the need to address these challenges early in the design and deployment of a technology.
“One designer can make a decision and that becomes ‘policy,’” Moon said, using air quotes.
While she does not foresee Terminator-style robots in our future, she said there is a danger of systems designed without basic human values in mind.
Gildert foresees existing pieces of cognitive architecture, such as libraries for faces, language and motor activity, coming together in new and interesting ways.
Kozyrkov suggested using a pause in production to stop and re-evaluate all these issues. By focusing on the sci-fi vision of artificial intelligence, companies are likely to miss the business opportunity it can offer, she said.
Though critical of the idea that data is magic — of course you have to determine whether your data is accurate and useful — she urged businesses to take advantage of algorithms already out there and apply them to their particular needs.
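For instance, a business with nothing more exotic than purchase records could apply an existing algorithm, such as off-the-shelf k-means clustering, to segment its customers. A minimal sketch, with invented data and a library choice that is ours, not Kozyrkov's:

```python
# Applying an algorithm "already out there" to a business question:
# group customers into segments from per-customer features.
from sklearn.cluster import KMeans

# Hypothetical per-customer features: [monthly spend, visits per month]
customers = [
    [500, 12], [480, 10], [60, 2], [75, 3], [900, 20], [850, 18],
]

# k-means does the segmentation; no custom algorithm work required
segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(customers)
print(segments)  # e.g., [1 1 0 0 2 2]: three customer segments
```

Whether those segments mean anything depends on whether the data is accurate and useful, which is exactly the caveat she raised.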
Google Cloud is a sponsor of The New Stack.