Artificial Intelligence Research is Awash in Dudes, and That Could Be a Problem

4 Jul 2016 7:01am

Our AI systems may be suffering from a lack of diversity in who builds them, a recent Bloomberg Technology article charged.

“I call it a sea of dudes,” said Margaret Mitchell, the only female researcher in Microsoft’s “cognition” group. Over the last five years Mitchell’s worked with hundreds of men — and approximately ten women. “I do absolutely believe that gender has an effect on the types of questions that we ask. You’re putting yourself in a position of myopia,” she said.

In other words, if artificial intelligence researchers want their systems ultimately to assume a central place in people’s lives, then these systems should be influenced by input from all people, not just men.

But an even more startling statistic emerged last week from a startup named Textio, which helps companies get more diverse applicants by re-wording their job listings. “Common phrases that exert a bias effect that don’t show up on any qualitative checklists include exhaustive, enforcement, and fearless (all masculine-tone) and transparent, catalyst, and in touch with (all feminine-tone),” explained a recent blog post by Textio’s CEO. “You can only find these patterns by measuring over time with an enormous data set of real hiring outcomes — exactly what Textio’s engine is designed to do.”
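Textio's actual engine is proprietary and trained on real hiring outcomes, but the basic idea of phrase-level tone scoring can be sketched in a few lines. The phrase lists below come from the examples Snyder quotes; the scoring scheme itself is an illustrative assumption, not Textio's model.

```python
# Illustrative sketch of phrase-level tone scoring for a job listing.
# The phrase lists are the examples quoted above; the simple
# count-and-subtract scoring is an assumption, not Textio's method.

MASCULINE = {"exhaustive", "enforcement", "fearless"}
FEMININE = {"transparent", "catalyst", "in touch with"}

def tone_score(listing: str) -> int:
    """Positive = masculine-leaning, negative = feminine-leaning."""
    text = listing.lower()
    masc = sum(text.count(phrase) for phrase in MASCULINE)
    fem = sum(text.count(phrase) for phrase in FEMININE)
    return masc - fem

listing = ("We need a fearless engineer with exhaustive knowledge "
           "of enforcement systems and a transparent work style.")
print(tone_score(listing))  # 3 masculine hits minus 1 feminine hit -> 2
```

A real system would, as Snyder notes, have to learn which phrases matter by correlating wording with hiring outcomes over a large dataset, rather than starting from a hand-built list.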

Textio analyzed 78,768 engineering job listings to determine which sector had the highest gender bias in its listings. The answer? The machine intelligence sector, where job posts for software engineers had a gender-bias score more than twice as high as any other sector's. The next three highest-scoring sectors were back-end engineering, full-stack engineering, and general software engineering.

“I’ve concluded that it’s not just me: machine intelligence has a bias problem,” wrote Textio’s CEO and co-founder, Kieran Snyder, who holds a Ph.D. in linguistics. “And given the enormous financial opportunity that learning loop companies are sitting on, it’s a bias problem with vast cultural ramifications.”

Other data examined by Bloomberg also suggested that women may be under-represented in AI research. For example, figures from Montreal’s conference on Neural Information Processing Systems showed that only 13.7 percent of the attendees were female. Bloomberg also interviewed Fei-Fei Li, the only female in Stanford’s 15-member AI lab, who said she wasn’t even surprised by the low attendance numbers because she almost never sees female researchers in articles about artificial intelligence.

“For every woman who has been quoted about AI technology, there are a hundred more times men were quoted,” she said.

But what’s most disturbing is that the 13.7 percent figure is even lower than the share of computer science graduates who are women, which has dropped from a peak of 37 percent to just 17 percent, according to a recent speech by Melinda Gates. “You want women participating in all of these things because you want a diverse environment creating AI and tech tools and everything we’re going to use,” she told an audience at the Recode conference last month, according to Bloomberg.

If the goal is to detect patterns in data, a lack of diversity could have a measurable impact. Katherine Heller, the executive director for a group called Women in Machine Learning, told Bloomberg that “Some of the cultural issues that play into women not being involved in the field could also lead to important questions not being asked in terms of someone’s research agenda.”

Margaret Burnett, a professor at Oregon State University’s School of Electrical Engineering and Computer Science, got even more specific, telling Bloomberg that a lack of diversity skews inferences towards the biases of the majority group. She suggests the bias of affluent white males may have led to an embarrassing incident last summer where a Google photo app started tagging black people as ‘gorillas.’ “If un-diverse stuff goes in, then closed-minded, inside-the-box, not-very-good results come out.”

That may also have been at work when Twitter users decided to hijack Microsoft’s ill-fated chatbot Tay, making her parrot back racist comments (and other inappropriate things). But ironically, the team that developed Tay was led by a female researcher named Lili Cheng. For 20 years she’s worked on human-to-machine interfaces for Microsoft, and she says all her projects “tended to be social, interactive, high risk, and ambiguous,” according to a recent profile in Co.Design. But while she looks toward the future, her comments to Bloomberg suggest that she’s still worrying about the same issues today.

“The industry as a whole, ourselves included, need to do a better job of classifying gender and other diversity signals in training data sets.”
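What classifying diversity signals in a training set might look like in practice can be sketched as a simple balance audit run before training. The field name, records, and skew threshold below are all illustrative assumptions, not anything Microsoft has described.

```python
# Illustrative sketch: auditing the balance of a demographic signal
# in a labeled training set before using it to train a model.
# The field name "speaker_gender" and the sample records are
# assumptions made up for this example.
from collections import Counter

def signal_balance(records, field):
    """Return each value's share of the dataset for the given field."""
    counts = Counter(record[field] for record in records)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

training_data = [
    {"text": "...", "speaker_gender": "male"},
    {"text": "...", "speaker_gender": "male"},
    {"text": "...", "speaker_gender": "male"},
    {"text": "...", "speaker_gender": "female"},
]
print(signal_balance(training_data, "speaker_gender"))
# {'male': 0.75, 'female': 0.25} -- a skew worth flagging before training
```

Surfacing a skew like this is only the first step; deciding how to correct it (rebalancing, reweighting, or collecting more data) is where the harder judgment calls live.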


Feature Image: New Old Stock.