
Google Grapples with Ethical AI

When Google fired Timnit Gebru, technical co-lead of its Ethical Artificial Intelligence team, late last year amid controversy over a paper she was writing with other Google researchers and academics, the company may have exposed potential biases not only in its own management, but in its core search service and even in the artificial intelligence field as a whole.
Mar 3rd, 2021 12:43pm

When Google fired Timnit Gebru, technical co-lead of its ethical artificial intelligence team last year, amid controversy over a paper she was co-authoring with academics and other Google researchers, the company unleashed a firestorm of criticism of the bias apparently shown by its own management, its apparent defensiveness about potential flaws in the AI underpinning its core search service and even the role of corporate research in the field of artificial intelligence as a whole. Can Google get comfortable with the necessary but uncomfortable questions research has to raise?

Gebru’s paper asked some possibly uncomfortable questions about very large language models like the ones used by Google search.

These large language models, like OpenAI’s GPT-3, Microsoft’s Turing-NLG and Google’s BERT, can achieve impressive results for understanding and generating natural language using self-supervised learning rather than carefully labeled data sets. But those results can also be shallow, or prone to superficial patterns and bias. As a recent attempt to use GPT-3 to generate answers to help desk questions showed, the generated answers might be well written and sound technically correct, yet be inaccurate; other research shows that depending on the prompt submitted, GPT-3 results can vary “from near chance to near state-of-the-art.”
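
That prompt sensitivity is easy to see for yourself. Below is a minimal, illustrative sketch using the Hugging Face Transformers library, with the small, openly available GPT-2 model standing in for GPT-3 (which sits behind an API); the prompts and generation settings here are hypothetical, and sampled outputs will vary from run to run.

```python
# Minimal sketch: the same question phrased two ways can produce very different
# quality output from a generative language model. GPT-2 stands in for GPT-3 here.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompts = [
    # Bare question, little context for the model to latch onto.
    "Q: How do I reset my router password?\nA:",
    # More scaffolded prompt that sets up a role and a format.
    "You are a support agent. A customer asks how to reset their router "
    "password. Reply with clear, numbered steps:\n1.",
]

for prompt in prompts:
    result = generator(prompt, max_new_tokens=60, do_sample=True, temperature=0.7)
    completion = result[0]["generated_text"][len(prompt):].strip()
    print("PROMPT:\n", prompt)
    print("OUTPUT:\n", completion, "\n" + "-" * 40)
```

Either way, the text usually reads fluently; whether it is actually correct is a separate question, and that gap is exactly what Gebru’s paper probes.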

Questioning the efficacy and ethics of those large models (on which Google search is now largely based), from their environmental costs to potential biases and harms in how they’re trained and used, sounds like exactly what Google’s own Ethical AI group should be doing in order to comply with the company’s own AI principles. These include being socially beneficial, being accountable, being built and tested for safety, and avoiding creating or reinforcing bias.

It’s the kind of work Google set up the group to do in 2018, as questions of fairness in AI became more common.

These days, however, more is at stake.

Creeping Racial Bias

The announcements by Amazon, IBM and Microsoft that they would pause, stop or limit sales of their facial recognition tools came soon after the death of George Floyd in 2020. But those decisions followed two years of research showing gender and racial biases in many facial recognition tools, starting with the Gender Shades paper written by Massachusetts Institute of Technology researcher Joy Buolamwini, founder of the Algorithmic Justice League, and Timnit Gebru, co-founder of the nonprofit Black in AI and, at the time, a postdoctoral researcher at Microsoft Research.

IBM first tried to improve its facial recognition system (which didn’t have a large market share) and then canceled it. Microsoft suggested the technology should be subject to the same kind of government regulation as cars and medicines, while limiting who it was available to. Amazon, by contrast, initially argued publicly that the research was inaccurate, even as it invested in AI fairness and made changes to the Amazon Rekognition service before temporarily restricting its use. (The more popular vendors who supply facial recognition technology to law enforcement didn’t make changes or commitments.)

“Large datasets based on texts from the Internet overrepresent hegemonic viewpoints and encode biases potentially damaging to marginalized populations.”

—Timnit Gebru, et al

Margaret Mitchell founded the Ethical AI group after moving from Microsoft Research to Google “to spearhead a new approach to research, where we take a step back for the ‘bigger picture’” and where research would be “grounded in human values, the inclusion of diverse experiences, and learning from multiple time points and social movements.” Google hired Gebru as its first Black female research scientist to co-lead that group. Now, though, the company seems to be responding more like Amazon.

As well as the Gender Shades research, both Mitchell and Gebru have been involved in some fundamental work on the safe and ethical use of machine learning, introducing ideas like model cards for reporting on the efficacy of models (think of the nutrition panels on food packaging) and datasheets for data sets (standardizing the documentation of how the data was collected and how it’s intended to be used).
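
To make the model card idea concrete, here is a deliberately simplified sketch of the kind of information such a card captures, written as a small Python structure; the field names are an illustrative subset of the sections proposed in the model cards work, and the values are hypothetical.

```python
# Simplified, hypothetical sketch of a model card: a structured summary that ships
# with a model, much like a nutrition panel ships with packaged food.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    model_details: str                      # who built it, version, model type
    intended_use: str                       # in-scope and explicitly out-of-scope uses
    evaluation_data: str                    # what the model was actually tested on
    metrics: dict = field(default_factory=dict)  # headline numbers, ideally per subgroup
    ethical_considerations: str = ""
    caveats_and_recommendations: str = ""

card = ModelCard(
    model_details="Toy sentiment classifier v0.1 (logistic regression)",
    intended_use="Triage of support tickets; not for hiring, credit or policing decisions",
    evaluation_data="Held-out tickets, stratified by customer region",
    metrics={"accuracy_overall": 0.91, "accuracy_region_a": 0.94, "accuracy_region_b": 0.83},
    ethical_considerations="Accuracy gap between regions needs review before wider rollout",
    caveats_and_recommendations="Retrain and re-evaluate if the ticket language mix changes",
)

print(card.intended_use)
print(card.metrics)
```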

Both women had recently been promoted, but Gebru was forced out at the end of 2020 amid controversy over a paper she was working on with Mitchell and other Google researchers and academics: “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” Jeff Dean, head of Google’s AI division, said that the paper “ignored too much relevant research” on mitigating the environmental costs and potential harms of large language models, but it has since been accepted to the Conference on Fairness, Accountability and Transparency, which starts later this week.

Dean also suggested that the paper was submitted on too short a deadline to be reviewed by Google, although it had previously been internally approved for publication and then circulated for further internal feedback. In 2020, Google introduced a new review process that involves researchers consulting with legal, policy and PR groups when covering “sensitive topics” that range from China and Iran to COVID-19, recommendation and personalization services or bias in the company’s own services.

Some researchers were also asked to “strike a positive tone” when raising potential issues with Google services, and Mitchell publicly questioned whether this would lead to censorship within the company.

There’s been widespread criticism of the way the company handled both the review of the Stochastic Parrots paper and the removal of Gebru. Alphabet/Google CEO Sundar Pichai referred to her as “a prominent Black, female leader with immense talent” but she’s been subjected to significant harassment online after speaking up about how she was treated. Gebru’s team has complained about the process both publicly and internally, a large number of Google employees and academic researchers signed a letter calling for official commitments to research integrity and academic freedom at the company, and two Googlers have already resigned over Gebru’s treatment.

Google’s treatment of AI researchers matters beyond any direct unfairness to them (or the harassment Gebru has received for speaking up about her situation), because the impact of Google’s own research on the field has been so significant; the company’s dominance can even discourage independent academic work in areas where the budget to explore AI issues is hard to find.

Fairness and Responsibility

Google is already seeing unfortunate but predictable consequences with a newly launched search tool intended to make it easier to find and support Black-owned businesses during Black History Month; those businesses are now getting spammed with negative and sometimes racist reviews that seem unlikely to come from legitimate customers. That kind of blowback isn’t uncommon with poorly designed diversity efforts, and it’s exactly the kind of thing a diverse AI ethics team could help with, suggesting safeguards to build in or ways to replace the feature with more helpful community-driven efforts.

That’s the kind of team Dean may be hoping to get from the changes he announced (along with a less than comprehensive apology for how what he calls “Dr. Gebru’s exit” was handled), which put what Google is now calling “responsible AI,” including the Ethical AI team, in the hands of a more senior leader. That leader, Marian Croak, is a Google vice president with experience in getting companies to embrace technologies that threaten their business model (like VoIP at AT&T); she talks about compromise and a diplomatic approach, but doesn’t have a background in AI research. Croak has also led internal meetings aimed at calming some of the internal backlash.

Google further found itself under scrutiny last month when the company fired Margaret Mitchell, the other lead for the company’s Ethical AI unit.

The firing of Mitchell seems to make Dean’s hopes somewhat problematic, especially as the Ethical AI team didn’t learn about the reorg until after it had been announced to the press.

In a tweet, Mitchell said she had “tried to raise concerns about race & gender inequity, and speak up about Google’s problematic firing of Dr. Gebru.” She’d been locked out of her work account for five weeks while Google investigated what it called “multiple violations of our code of conduct, as well as of our security policies, which included the exfiltration of confidential business-sensitive documents and private data of other employees”; Axios reports that she was looking for evidence of the discrimination and harassment Gebru experienced.

The timing couldn’t have been more unfortunate, since the new policies were also supposed to include “new procedures around potentially sensitive employee exits” as well as tying executive bonuses to progress on diversity and inclusion (a move Microsoft made previously).

Beyond Google

The continuing disquiet in the company comes as AI ethics is moving from something that might have seemed niche or secondary to business interests, to an area where technology companies are investing because getting it wrong will cause them significant problems in the future.

Microsoft’s AETHER (AI, Ethics and Effects in Engineering and Research) committee includes computer scientists and engineers, social scientists, policy experts, lawyers and ethicists. Since 2017, it’s produced tools like InterpretML, an open source toolkit to help developers create AI systems that can explain their decisions, and recommendations from the committee have already prevented “significant sales” to foreign governments and some U.S. law enforcement departments as well as “gating and guiding Microsoft technologies,” according to Microsoft Chief Scientific Officer Eric Horvitz. Twitter just announced the appointment of Rumman Chowdhury (founder of Parity and designer of Accenture’s Fairness Tool) as director of Machine Learning Ethics, Transparency and Accountability.
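
For a sense of what InterpretML offers developers, here is a short, hedged sketch (exact APIs can shift between versions of the open source interpret package): it trains one of the library’s “glassbox” models on a standard scikit-learn data set and then asks for global and per-prediction explanations of its decisions.

```python
# Sketch of an InterpretML glassbox workflow: train a model whose decisions can be
# inspected, then view global and local explanations.
# pip install interpret scikit-learn
from interpret import show
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)

# Global view: how each feature shapes the model's predictions overall.
show(ebm.explain_global())

# Local view: why these particular examples were scored the way they were.
show(ebm.explain_local(X_test[:5], y_test[:5]))
```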

Google’s own previous attempt to set up an external (unpaid) AI ethics board, the Advanced Technology External Advisory Council (ATEAC), was criticized by researchers including Gebru and Mitchell and was canceled after less than two weeks amid controversy over who had been invited to sit on the board.

Machine learning and AI-driven tools are already in common use in everything from what ads you see in the run-up to an election, to flagging financial transactions for fraud, to what videos Netflix and YouTube suggest you watch next, to who is recommended for bail or wrongly identified in security footage. This is the digital infrastructure that is going to have a significant impact on individuals and society, and while AI techniques are becoming accessible to more developers, many will also use pre-built AI models and services from providers like Google and Microsoft.

Having those tools be responsibly built and equitably applied is going to be critical, and is likely to come under government regulation in more and more countries. The process by which research and AI services are scrutinized is equally under the microscope, especially when they may seem like attempts by the technology industry to avoid official regulation through “self-regulation.”

These roles and teams need to be more than just “AI ethics-washing”; they have to have real power to change tools, products and platforms. That’s something Google will need to demonstrate for its new responsible AI “center of expertise,” which will be much harder to do because of how much trust it has lost within the AI community. Meredith Whittaker, founder of Google’s Open Research group, co-founder of the AI Now Institute and one of the core organizers of the employee walkouts at Google in 2018, suggested in a tweet that universities, conferences, researchers and developers might want to reconsider their relationships with Google in light of what happened to Gebru and Mitchell.

The organizers for this week’s ACM Conference for Fairness, Accountability, and Transparency have also recently decided that Google won’t be a sponsor, and a group of academic researchers and organizers using the hashtag #RecruitMeNot is asking students to pledge not to accept jobs at Google until the company commits to more accountability and racial justice.
