What Happens When AI Companies Try to Police Themselves?

This month the Harvard Business Review shared on YouTube its discussion (recorded last year) with Harvard Business School professor Tsedal Neeley, part of a series it describes as “legendary case studies, distilled into podcast form.”
“We are in the midst of an AI revolution,” said the host, Harvard Business School’s chief marketing and communications officer Brian Kenny, while also noting critiques like Stephen Hawking’s warning that it could end the human race and Elon Musk’s quip that “With artificial intelligence, we’re summoning the demon.”
With so much concern in the air, Kenny asked what’s arguably an even more important question: “Whose job is it to make sure that such a vision never comes to pass?”
And then the podcast delves into a case study of what can go wrong when a company tries to police itself: the story of Dr. Timnit Gebru, a well-known AI researcher whom Google fired after she revealed biases in the company’s own AI work.
Checking Your Work
For Professor Neeley, Gebru is a long-time acquaintance. When Neeley was a first-year doctoral student at Stanford, she met Gebru, who was at the time a first-year Stanford undergraduate. “And you knew that this woman was going to be special… Timnit is one of those people who sees things clearly. Everyone is talking about AI today, and AI ethics, and AI bias. She was thinking about this over a decade ago.”
Gebru went on to get a Ph.D. in computer science from Stanford, and by 2018 had teamed up with AI researcher Joy Buolamwini from MIT’s Media Lab to analyze facial recognition software from three companies. Their research called attention to a glaring failure. Neeley summarizes it as “the darker the skin tone that people had, the more unlikely it was that faces would be accurately recognized by AI,” noting that Gebru was “one of the first to see it and document it…”
“The clarity by which she saw AI bias issues early on, to me, it just blows my mind. Because everyone talks about it today.”
Neeley said she also learned from Gebru that AI bias “is inextricably tied to DEI [diversity, equity and inclusion]. You cannot separate them. Those who will suffer the consequences of AI will be communities with limited power — and they’re the ones who are least present to help influence the technology, the models that are getting built.”
Neeley underscored the point later. “[A]ny company, any organization, any group interested in digital transformation and bringing AI into their work and using data to create algorithms and models cannot ignore the DEI component.
“And in fact, they need to make sure that they have the right people looking at the work, helping design the work, developing the work because otherwise, flawed humans will create flawed systems.”
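The audit at the heart of that 2018 research comes down to a simple measurement discipline: report accuracy for each demographic subgroup instead of a single blended number. Here’s a minimal, purely illustrative Python sketch of that kind of disaggregated evaluation; the subgroup labels and sample results below are invented for the example and aren’t drawn from the actual study.

```python
# Illustrative sketch only -- not the researchers' code or data.
# The idea: break recognition results out by subgroup (here, two
# hypothetical skin-tone categories) rather than averaging them away.
from collections import defaultdict

# Hypothetical audit records: (subgroup, was the face correctly recognized?)
results = [
    ("lighter-skinned", True), ("lighter-skinned", True),
    ("lighter-skinned", True), ("lighter-skinned", False),
    ("darker-skinned", True), ("darker-skinned", False),
    ("darker-skinned", False), ("darker-skinned", False),
]

totals = defaultdict(int)
correct = defaultdict(int)
for group, is_correct in results:
    totals[group] += 1
    correct[group] += int(is_correct)

for group in sorted(totals):
    accuracy = correct[group] / totals[group]
    print(f"{group}: {accuracy:.0%} accuracy ({correct[group]}/{totals[group]})")

# A wide gap between the per-group numbers -- even when the overall
# average looks healthy -- is exactly the signal such an audit surfaces.
```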
Fearlessness
Neeley also sees Gebru calling attention to a broader concern: the larger a model is, the harder it becomes to eliminate bias. And that same year, 2018, Gebru began working at Google as the co-lead of its “Ethical AI” research team.
It’s interesting to hear Neeley recount the story of Gebru’s experience at Google — including the way her advocacy was often welcomed. “If she sees someone getting interrupted systematically — a minority person — she would speak up. She would try to improve the culture for women, and for people of color at Google.”
And here Neeley’s real-life meetings with Gebru give the story some context. “She is fearless… That’s one of the questions I asked her: where does it come from, this fearlessness? You speak out, you just are unafraid in ways that are unfamiliar to me. She just has this fire within, and if she sees truth, if she sees something — she’s unafraid to speak up.”
But from a corporation’s perspective, this “doesn’t give you peace, right…? You can imagine how that could be difficult for some portion of an organization, particularly leaders. We don’t like people who agitate.” Neeley describes the culmination as Gebru’s “firing or resignation, depending on which side you’re on.”
It started over a paper Gebru co-wrote with six other researchers (four from Google) on bias in large language models. Gebru told the New York Times that a Google manager had demanded the names of Google employees be removed from the paper. “She refused to do so without further discussion,” the Times reports, “and, in the email sent Tuesday evening, said she would resign after an appropriate amount of time if the company could not explain why it wanted her to retract the paper and answer other concerns.”
The Times quotes parts of the email, in which Gebru complained “Your life starts getting worse when you start advocating for underrepresented people. You start making the other leaders upset. There is no way more documents or more conversations will achieve anything.”
On Twitter, Gebru said Google instead accepted her resignation “immediately, effective today,” with the company writing that “Certain aspects of the email you sent last night to non-management employees in the brain group reflect behavior that is inconsistent with the expectations of a Google manager.”
Or, as Neeley puts it, “on some procedural issues, they eventually ousted her.”
But here Neeley applauded Gebru’s responses on Twitter. “She wanted to make sure that she wouldn’t quietly be fired, and tucked away… If everyone takes a little bit of a risk in speaking out, in even naming names, then over time the aggregate — we’ll be able to protect people in the future.”
Google CEO Sundar Pichai apologized, “acknowledged very publicly what happened, and talked about his regrets of losing one of the top AI experts in the world, who happens to be a Black woman.” There was a concerned letter signed by nine U.S. congresspeople and an angry petition signed by thousands of people (both inside and outside of Google).
Yet Neeley’s real question is: “Was this situation doomed from the start?
“Can you have an AI ethics, an AI bias expert, assessing the technology inside of a company? Or do you need an outsider to ensure that biases are not embedded in your system, and your training mechanisms?”
Freeing Research
Neeley explained there’s a danger that “biases get replicated, duplicated, and scaled exponentially when it comes to communities that are being policed,” summarizing Gebru’s message about biases as “let’s slow down, let’s understand them.” And the problem gets compounded when the model itself is designed by a homogeneous group, Neeley added.
Working with #ChatGPT is like working with a new team member. We must adapt & learn to maximize its potential. @awsamuel shares key tips for success including transparency, feedback, & caution. Learn more in our book #TheDigitalMindset & this @WSJ article: https://t.co/f42eyHUWap
— Tsedal Neeley (@tsedal) May 30, 2023
Professor Neeley pointed out it was these concerns that led Gebru to co-found the Black in AI research community.
A year after the incident, Gebru founded the Distributed Artificial Intelligence Research Institute (or DAIR). It’s a “space for independent, community-rooted AI research, free from Big Tech’s pervasive influence,” according to its website. The Institute is “rooted in the belief that AI is not inevitable,” the site adds. “[I]ts harms are preventable, and when its production and deployment include diverse perspectives and deliberate processes it can be beneficial.”
Neeley sees another message in the organization: that Gebru “believes clearly that she has to do her work outside of a company, so that it can be independent, and develop research, develop insights, even help other companies with their own reviews without the influence of a given company…
“She’s still figuring out a long-term, sustainable revenue model, but some of her Google colleagues have joined her at DAIR.”
Toward the end of the podcast, host Kenny asked Neeley whether a company like Google (or Microsoft) would more readily accept findings from an outside organization. Neeley isn’t sure, but argues that DAIR can generate university-level research with “insights that can be generalized or extrapolated to better understand some of these technologies that are emerging.”
And in the end, Gebru’s vocal advocacy has already brought some changes to the way people think about AI, Neeley said. “When I talk to companies who are trying to build their digital capabilities, who are bringing AI into their systems, and who are building algorithms, I think of Timnit.”
WebReduce
- How the world’s most popular online CS class is turning to AI for help.
- Church leaders ponder the implications of a recent ChatGPT-generated sermon.
- Developer Nicholaus Cranch proposes a new programming system using visual diagrams.
- Early Stack Overflow developer Ben Dumke-von der Ehe remembers the site’s early days.