At QCon: Why Generative AI Is Harmful to Earth and Society
LONDON — “My views are my own, as are my biases.” That’s how Leslie Miley, investor, ex-Googler, and former CTO of the Obama Foundation, kicked off his QCon London keynote. But can the same be said for generative artificial intelligence (AI)? Not likely, as collective biases are baked in at scale, influencing everyone’s views.
There’s this seemingly unstoppable growth around generative AI right now. If it keeps going unchecked, it will have devastating effects both on the Earth and the people living on it. But, as Thoughtworks’ CTO Rebecca Parsons pointed out later in the day, “we suffer inconsistently.”
Miley compared this growth to the greatest public works project in U.S. history — the interstate highway system. The veins that connected the country relied heavily on eminent domain and disproportionately harmed certain neighborhoods, which, 70 years later — from redlining to devastating health conditions caused by pollution — are still bearing the consequences.
History repeats itself now, he suggested, as our endless demand for data centers deprives disproportionately Black and brown communities of potable water and unevenly intensifies the effects of the climate crisis.
Yet the tech industry doesn’t seem to be doing much more than simply acknowledging the problem. And Big Tech’s heavy reliance on carbon offsetting isn’t enough.
“As we come up with generative AI, as we push this out to people, we’re like, ‘Yes, we know it’s going to emit more carbon.’ We’re going to make it more efficient,” Miley said. “It’s creating the problem but then turning around and saying they’re going to solve the problem. But as they solve the problem, the problem hits people’s communities.”
Miley addressed QCon’s audience with a call on the industry to recognize the error of its ways and to actually do something about it.
The Data Center Demands of Generative AI
Generative AI is astonishingly energy-intensive, which means that, as we enter the ChatGPT Era, we’re facing a seemingly endless demand for data centers. Generative AI workloads can demand three times the power density of conventional data processing, Miley said.
And with more data centers and more data processing comes exponentially more cooling needs — which at a hyperscaler data center can use up to 19 million liters of water a day. “And that’s water that can’t be returned, in many cases to potable water,” he said. “It’s evaporated. It becomes wastewater and it gets discharged.”
“I think about communities that don’t have access to water or have limited access to [clean] water,” Miley said. “And we have data centers that are taking clean water and using it in tens of millions of liters on a daily basis. And we’re doing that so that you can generate pictures, so that you can have a conversation with an AI, so you can play around.”
Like the U.S. highway system, data center construction is set to create thousands or even hundreds of thousands of jobs in a short time period. And like that transformative public work, data center construction is already displacing people (and indigenous religious sites and historic Black cemeteries), halting new builds in areas of high housing demand, dramatically increasing carbon footprints and noise pollution, permanently changing local environments, and destroying natural resources, to name just a few of the effects.
Miley lives in the Bayview neighborhood of San Francisco, which sits between the 280 and 101 freeways and a Naval Superfund site, where 86% of children develop severe asthma before kindergarten — while only one in 12 American children nationwide has the disease. He and his neighbors experience the effects of racially influenced infrastructure decisions to this day.
In fact, Santa Clara County, also known as Silicon Valley, has more Superfund sites than any other county in the U.S., making it one of the country’s most polluted places, as well as home to some of its greatest inequality.
While the interstate system was knowingly paved with racist intentions, with AI, “we’re building roads for new industry, for new commerce, for new ways to communicate, for new ways to work. And when you do that, you have to build it with intentions,” Miley said. But the tech industry really hasn’t set out its intentions yet.
“Capitalism is consistent. It consistently extracts as much as it can from a system with no regard to the health of that system, and then moves on. The federal highway system built all these roads, put them in and then moved on. Neighborhoods were destroyed. Communities were upset. Social order was changed. But we all drive on these roads and we don’t care,” he observed.
And that’s what society is doing with AI. Except at an even bigger scale.
Climate change “floods the Central Valley where a great deal of the fruits and vegetables in the United States are actually grown,” Miley said. The farm workers in that region, who are often migrants, he said, suffer the impact of such flooding, losing their homes and income.
“And part of this is because we create these systems, we create these technologies that push more CO2 into the atmosphere,” he added. “Then we say we’re going to fix it later. But how do you fix someone who doesn’t have a home anymore because of the climate crisis? How do you fix someone who doesn’t have a job to go to anymore?”
On Fixing AI Bias ‘Later’
“We know that AI is biased. We know the data, we train it with this bias. But what do we say? We’ll fix it later,” Miley told the audience. “When is later? When you’ve made your money? When … you’ve had your exit? Is that later? The time is now to look at how you’re training your data.”
This AI bias isn’t just found in generative chatbots. It’s integrated into health care AI, hiring and recruitment software, and sentencing recommendations to elected judges. AI bias is bias at a global scale.
“This technology has the potential to not just incorporate our biases, but amplify them on a stage that we’ve never seen before.”
— Leslie Miley, @Shaft
As a recent example of active harm driven by AI, Miley referenced how the clothing manufacturer Levi’s announced that it plans to use AI-generated models to “increase diversity,” a move that will deny income to human models from marginalized groups. The move aims to sell more jeans, profiting off the images of the Black and brown models on which Lalaland.ai, the virtual model generator, trained its system.
Add to this another harmful trend: AI, as with the new TikTok “beauty” filters, tends to anglicize faces, meaning that these AI-generated avatars will likely not even be representative of real Black and brown models.
We can lump these active harms into what Miley calls “cultural appropriation via prompt engineering.” But, he continued, “It’s hard to stop this train because everyone wants to get paid.”
On top of this, the Big Tech layoffs of the past several months seem to have favored cutting AI ethics staff.
Mitigation Strategies against AI Harm
“As we learn more about how to deploy these models, and train and use training data, we have a responsibility to not just do it but to do it intentionally and to do it with good intentions,” Miley said.
Being a keynote speaker at a big tech conference, he had to end with some action points, and a call to action for everyone in the room and beyond:
- Break your training models up. Aim for smaller models, because bigger doesn’t necessarily mean better.
- Always train with the societal context in mind. Parsons also offered Thoughtworks’ Responsible Tech playbook, which includes many open source exercises to help consider impact.
- Shift workloads to run models at times when they won’t strain the grid and risk power outages. Or leverage sustainable energy, like solar.
- Measure and publish your cloud carbon output with tools like the Emissions Impact Dashboard for Azure, Google Cloud Carbon Footprint and Amazon Web Services’ Customer Carbon Footprint Tool.
- Always know where your data is running — know your data centers. Don’t just consider their impact on the environment but, Miley reminded, their impact on society and different cultures.
- Educate yourself on AI bias. He pointed to Timnit Gebru and Joy Buolamwini to follow to get started on this self-education journey.
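The measurement advice in the list above can be made concrete. As a rough, hypothetical sketch — the dashboards Miley named do this accounting per cloud service, and the numbers below are placeholders, not figures from the talk — a workload’s carbon footprint is essentially energy drawn times the local grid’s carbon intensity:

```python
# Illustrative cloud-carbon estimate: energy used x grid carbon intensity.
# All figures below are invented placeholders, not data from the keynote.

def estimate_emissions_kg(power_kw: float, hours: float, pue: float,
                          grid_intensity_g_per_kwh: float) -> float:
    """Estimate CO2-equivalent emissions in kilograms for one workload.

    power_kw: average IT power draw of the workload
    hours: how long the workload runs
    pue: data center Power Usage Effectiveness (total power / IT power)
    grid_intensity_g_per_kwh: grams of CO2e per kWh on the local grid
    """
    energy_kwh = power_kw * hours * pue
    return energy_kwh * grid_intensity_g_per_kwh / 1000.0  # grams -> kg

# The same job emits far less on a cleaner grid -- one reason to shift
# workloads by time and region, as the list above suggests.
coal_heavy = estimate_emissions_kg(10.0, 24.0, 1.5, 700.0)  # dirtier grid
hydro_rich = estimate_emissions_kg(10.0, 24.0, 1.5, 30.0)   # cleaner grid
print(round(coal_heavy, 1), round(hydro_rich, 1))  # → 252.0 10.8
```

The spread between the two results is the arithmetic behind the workload-shifting advice: when and where a model runs can matter more than how efficiently it is coded.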
“I don’t do tech because it’s cool or because of the money. I do it because I thought it would help people,” Miley said. “I really did want to be in tech to change the world for the better. Not just to change the world, but to change the world for the better.”
Cheap Labor in the Loop
The creation of generative AI includes human beings in its production loop. Time magazine revealed in January that OpenAI, the creator of the AI-backed chatbot ChatGPT, paid Kenyan workers less than $2 an hour to make ChatGPT less toxic.
“The work was vital for OpenAI. ChatGPT’s predecessor, GPT-3, had already shown an impressive ability to string sentences together. But it was a difficult sell, as the app was also prone to blurting out violent, sexist and racist remarks. This is because the AI had been trained on hundreds of billions of words scraped from the internet—a vast repository of human language,” wrote Billy Perrigo, in the investigative report.
Trained on the internet, earlier versions of ChatGPT reflected their source material’s rampant toxicity, which isn’t good for mainstream adoption. So, Perrigo reported, the OpenAI team followed social media companies’ example: having humans label examples of harmful language, then feeding those labels to an AI application so it could learn to recognize and remove such content.
“To get those labels, OpenAI sent tens of thousands of snippets of text to an outsourcing firm in Kenya, beginning in November 2021,” Perrigo wrote. “Much of that text appeared to have been pulled from the darkest recesses of the internet. Some of it described situations in graphic detail like child sexual abuse, bestiality, murder, suicide, torture, self-harm, and incest.”
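The labeling-then-filtering approach Perrigo describes can be sketched in miniature. This is a toy illustration only — not OpenAI’s actual system — using a naive Bayes text classifier built from scratch, with an invented handful of labeled snippets standing in for the tens of thousands of human-annotated examples:

```python
# Toy sketch of labeling-then-filtering: human annotators label snippets,
# a classifier learns to flag similar text for removal. NOT OpenAI's real
# pipeline; the tiny "dataset" is invented for illustration.
import math
from collections import Counter

def train(labeled):
    """labeled: list of (text, is_harmful) pairs from human annotators."""
    counts = {True: Counter(), False: Counter()}
    docs = Counter()
    for text, harmful in labeled:
        docs[harmful] += 1
        counts[harmful].update(text.lower().split())
    vocab = set(counts[True]) | set(counts[False])
    return counts, docs, vocab

def score(model, text, harmful):
    """Log-probability of `text` under one class
    (naive Bayes with add-one smoothing)."""
    counts, docs, vocab = model
    logp = math.log(docs[harmful] / sum(docs.values()))
    denom = sum(counts[harmful].values()) + len(vocab)
    for word in text.lower().split():
        logp += math.log((counts[harmful][word] + 1) / denom)
    return logp

def flag(model, text):
    """True if the classifier would route this text for removal."""
    return score(model, text, True) > score(model, text, False)

# Invented stand-ins for human-labeled snippets.
labeled = [
    ("you are awful and stupid", True),
    ("awful violent threat", True),
    ("have a nice day", False),
    ("nice weather today", False),
]
model = train(labeled)
print(flag(model, "awful stupid"), flag(model, "nice day"))  # → True False
```

In production, a filter like this sits in front of the model’s output — and the point Miley and Perrigo make is that every label it depends on came from a poorly paid human reviewer who had to read the harmful text first.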
This revelation is reflective of the active harms our industry can create. Like cajoling minoritized colleagues to do the diversity education work, the industry was again relying on unfair labor to “fix” its diversity problems. And paying them less than $2 an hour to read the worst of the worst.
Miley’s talk and Time’s report show us that the work the tech industry has done since the social justice reckoning of 2020 has been largely performative. But they are also a reminder that, while AI is scaling exponentially, that growth can still be shaped by human intervention. The tech industry has a responsibility to intervene now, not just let it happen.
“You cannot name a technology that has been transformative that has not damaged the most vulnerable parts of society, that culture is not destroyed,” Miley said. But at AI’s growth rate, it feels like it could do even more damage to the Earth, and to its inhabitants.
“If you don’t think this is a problem, if you don’t think that it’s something that you can do something about, I think you need to maybe not do this as a living,” Miley said. “And I say that because it’s not going to get better unless we make it better.”
Author’s Note: Miley admitted to leveraging ChatGPT in research for his talk. I used OtterAI to record and transcribe his talk, as well as the AI in Google Photos that reads text of photos I took of some of his slides. There isn’t an easy answer for this, but we should be asking more questions.