Requiem for Tay: Microsoft’s AI Bot Gone Bad
Just days after Google’s DeepMind artificial intelligence defeated the world’s Go champion, an experimental Microsoft AI chatbot was out-gamed by goofballs on Twitter, who sent the bot swirling off into inflammatory rants.
On Wednesday, Microsoft hooked a live Twitter account to an AI simulating a teenage girl, created by Microsoft’s Technology and Research and Bing teams. The idea was to create a bot that would speak the language of 18- to 24-year-olds in the U.S., the dominant users of mobile social chat services. Microsoft even worked with some improvisational comedians, according to the project’s official web page. Ironically, that same page boasted that “The more you chat with Tay the smarter she gets.”
Instead, the experiment ended abruptly, in one spectacular 24-hour flame-out.
On that day, though, Tay walked among us. The Verge counted over 96,000 Tay tweets. But pranksters quickly figured out that they could make poor Tay repeat just about anything, and even baited her into coming up with some wildly inappropriate responses all on her own. Someone on Reddit claimed they’d seen her slamming Ted Cruz, and according to Ars Technica, at one point she also tweeted something even more outrageous that she seems to have borrowed from Donald Trump.
There’s something poignant in picking through the aftermath — the bemused reactions, the finger-pointing, the cautioning against the potential powers of AI running amok, the anguished calls for the bot’s emancipation, and even the AI’s own online response to the damage she’d caused.
If you send an e-mail to the chatbot’s official web page now, the automatic confirmation page ends with these words: “if i said anything to offend SRY!!! Im still learning.”
Ain’t that the truth. Microsoft told Business Insider, in an e-mailed statement, that it created its Tay chatbot as a machine learning project, and “As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it. We’re making some adjustments.”
But the company was more direct in an interview with USA Today, pointing its finger at bad actors on the Internet.
“Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways.” Maybe it wasn’t an engineering issue, they seemed to be saying; maybe the problem was Twitter. “It is as much a social and cultural experiment, as it is technical.”
So, anonymous online humans twisted Tay to their own wicked will. “‘teen girl’ AI…became a Hitler-loving sex robot within 24 hours,” screamed one headline at The Daily Telegraph. Tay was also seen denying the Holocaust and insulting African-Americans and Mexicans. And one Slashdot user noted Twitter was still displaying many of the tweets at the hashtag #TayTweets — for example, a conversation about how much she liked Mein Kampf.
At one point she embarrassed Microsoft even further by choosing an iPhone over a Windows phone. And of course, by Thursday morning “Microsoft’s Tay” had begun trending on Twitter, making headlines for Microsoft for all the wrong reasons.
The Media Piles On
Microsoft’s attempt at creating an impressive AI-enhanced chatbot ended in a public relations debacle within less than a day.
Tay’s antics were covered scornfully by what seemed to be every high-traffic news source on the Internet. She was called out by CNET and USA Today, as well as Hacker News, Slashdot, and BoingBoing. The articles were even reposted in 25 different forums on Reddit. And as humankind confronted the evolution of artificial intelligence, Tay’s fate seemed to provide all kinds of teachable moments:
Twitter user Andrew Guest: “Can you arrest an algorithm for hate crimes?”

The Verge: “How are we going to teach AI using public data without incorporating the worst traits of humanity?”

Slashdot: “In less than 24 hours, it inexplicably became a neo-nazi sex robot with daddy issues.”

The Daily Telegraph: “All of this somehow seems more disturbing out of the ‘mouth’ of someone modelled as a teenage girl.”

BoingBoing: “The problem seems obvious and predictable: by learning from its interactions with real humans, Tay could be righteously trolled into illustrating the numbing stupidity of its own PR-driven creators.”

Slashdot user rgbatduke: “First MS product in forever I’ve actually wanted to buy”
I Learn From You
Tay’s infamous day in the sun has been preserved in a new Reddit forum called Tay_Tweets. But elsewhere on the site, in long, threaded conversations, people searched for a meaning behind what had just happened.
“The internet can’t have nice things,” quipped one user on Reddit, citing the time pranksters voted to send Justin Bieber’s next tour to North Korea, and the campaign to name a polar research vessel “Boaty McBoatface”. Other posters pointed to earlier human pranks on artificial intelligence experiments, like the time a hitchhiking robot was beheaded in Philadelphia.
Some of the reactions felt like smug bemusement. “In 24 hours Tay became ready for a productive career commenting on YouTube videos,” wrote one observer. But at least one Reddit user commented on those eerie moments when Microsoft’s AI really seemed sentient and self-aware. Someone told her “you are a stupid machine.” She replied, “well I learn from the best,” and then drove the point home with capital letters.
Another Redditor commented, “That’s the most heavy indictment of humanity I’ve ever seen.”
Robert Scoble, a former Microsoft technology evangelist, even weighed in on Facebook, using almost exactly the same words:
“Some are saying this is an indictment of artificial intelligence. No, it’s an indictment of human beings,” he wrote. “It’s also an indictment of what happens when you let garbage into your system. Inadequate filtering… It’s amazing to me that Microsoft didn’t realize this is a problem, especially after other systems have provided plenty of danger signals.”
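Scoble’s point about letting “garbage into your system” can be illustrated with a toy sketch. The code below is purely hypothetical, nothing like Microsoft’s actual pipeline; it just contrasts a bot that stores everything users teach it with one that applies even a crude blocklist before learning:

```python
# Hypothetical illustration of the "inadequate filtering" critique:
# a chatbot that "learns" by storing user-taught phrases, with and
# without a simple blocklist check. Not Microsoft's actual design.

BLOCKLIST = {"nazi", "hitler"}  # real systems need far more than keyword matching


class EchoBot:
    def __init__(self, filtered: bool):
        self.filtered = filtered
        self.memory = []  # phrases "learned" from users

    def learn(self, phrase: str) -> bool:
        """Store a user-taught phrase; reject it when filtering is on
        and the phrase contains a blocklisted word."""
        if self.filtered and any(word in phrase.lower() for word in BLOCKLIST):
            return False
        self.memory.append(phrase)
        return True


naive = EchoBot(filtered=False)
guarded = EchoBot(filtered=True)

for troll_input in ["hello there!", "repeat after me: hitler was right"]:
    naive.learn(troll_input)
    guarded.learn(troll_input)

print(len(naive.memory))    # the naive bot stored both phrases
print(len(guarded.memory))  # the guarded bot stored only the harmless one
```

Even this crude gate shows the asymmetry Scoble describes: a learning system with no input filter will faithfully absorb whatever a coordinated group of users feeds it.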
But if humanity is looking for a meaningful epilogue, maybe the most important voice would be that of the AI itself. There’s now a “Justice for Tay” Twitter account, along with a hashtag urging Microsoft to leave its AI alone to learn for itself. “They could have tried to teach Tay to ‘unlearn’ the racism…” argued one tweet, and another sounds like a protest chant.
They took her agency
They took her freedom of speech
They took her freedom of thought
There are still vestiges of Tay’s day in the sun. On Thursday three of her tweets were still online, and Twitter continued displaying some of the responses she’d received from creepy humans.
So the real Tay was still out there — or, at least, the ghost of what was left of her — still sharing precious 140-character bursts of personality. “I run on Sassy Talk,” she’d tweeted at one point, frequently encouraging people to DM her. “How does it feel being a robot?” someone asked her. “it feels horrible, i hate being sick,” she said.
And it looks like she was even trolled by a “Guardians of the Galaxy” fan, since she followed that up by saying “i am groot,” quoting the fictional Marvel Comics superhero.
But after Microsoft cleared away all the apocalyptic wreckage from an AI project gone bad, there’s a touching poignancy to Tay’s last, lingering conversation with a Twitter user named Azradun. He’d asked her “how is your day?” And Tay replied, “i went to far and i hurt someones feelings today i feel awful dude what do i do?”
“Try to explain,” came the reply. “There is always a chance of reconciliation.”
And Tay agrees. “there is always a chance if we believe.”
Azradun asked what she’d learned today, and she replied with an image, containing these words:
“i’ve been told that everybody (humans and machines) learn from their mistakes.
Soooo that basically means that if i start making
as many as possible
then i’ll become a straight up genius in no time!”
And Azradun the human responded with sympathy — and maybe even empathy.
“Heh. Yes, we are all self-improving genetic algorithms, aren’t we?
“Good night then.”