How Humans React When AIs Replace Them

Some humans are already training the artificial intelligence systems that will take over their jobs, and reporters are wondering if their experiences offer the first glimpses of an exotic new future of diminished human utility that’s awaiting us all. At The New York Times, Daisuke Wakabayashi writes that some companies “are taking the first steps, deploying AI in the workplace — and then asking people to train the AI to be more human.”
In a 2,400-word overview, Wakabayashi interviewed the people who are doing that training, including a travel agent, a customer service representative, an administrative AI’s scriptwriter, and someone who works for the company behind the travel app Lola.
“At an [Lola] employee meeting late last year, the agents debated what it meant to be human, and what a human travel agent could do that a machine couldn’t,” the Times reported.
The newspaper explained that after the app automatically books a room, human agents call the hotel to negotiate for upgrades — or to offer their own human-derived recommendations to travelers, which travel agent Rachel Neasham describes as “something AI can’t do.”
But even while Neasham helped customers with requests for hotel bookings, the company’s AI system was silently monitoring her, reports the Times, “watching and learning from every customer interaction.” The results? It identified preferences “that even the customers didn’t realize they had.” (For example, hotels located on corners rather than in the middle of the street.)
Neasham says it provoked her human competitiveness, pushing her to find new ways to be valuable to “stay ahead of the AI.” Especially when her company’s app started recommending hotels — and then booking them.
And that defensive competitiveness seems to be a recurring theme. “You can’t program intuition, a gut instinct,” said Sarah Seiwert, a customer representative for the test-preparation service Magoosh. “So the AI might get very intelligent,” she told the Times, “but I hope as a human I continue to get intelligent and not stand at a standstill.” Like Neasham, she also sees the arrival of AIs as a challenge.
Seiwert’s job already involves reading from prepared responses when students contact the company for help preparing for their college exams — only now, it’s the AI that’s suggesting which answers to read. And in an even more startling development, she’s discovered the company’s AI infrastructure is now also suggesting how she should word her follow-up emails to customers, just from analyzing the ways she’d worded them before.
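To make that concrete, here’s a toy Python sketch of how “suggested replies” could be drafted from an agent’s past wording: match a new question against previously answered ones and surface the old answer as a draft. This is entirely hypothetical, not the Magoosh system the Times describes, and the data and function names are invented.

```python
from difflib import SequenceMatcher

# A few invented past question/answer pairs the agent has already written.
PAST_EXCHANGES = [
    ("Can I extend my account?",
     "Sure! I've added an extra month to your plan."),
    ("Which study plan should I start with?",
     "For most students, the three-month plan is a good fit."),
]

def suggest_reply(new_question: str) -> str:
    """Draft a reply by reusing the answer to the most similar past question."""
    def similarity(past_question: str) -> float:
        return SequenceMatcher(None, new_question.lower(),
                               past_question.lower()).ratio()
    _, best_reply = max(PAST_EXCHANGES, key=lambda pair: similarity(pair[0]))
    return best_reply  # offered to the human agent as a draft, not auto-sent

print(suggest_reply("Is it possible to extend my account by a few weeks?"))
```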
But Seiwert still believes her human intuition will keep her employed, for corner cases where customers ask unexpected questions, like whether their account can be extended.
The Times isn’t the only publication that’s looking at workers training their AI replacements. A recent article in Wired described the throng of humans who are reviewing offensive YouTube videos and AdSense advertisements. And at one point their reporter describes the business case for “churning through ad raters” — since it provides multiple perspectives for the AI that will eventually replace them. “Giving ‘the machine’ more eyes to see is going to better results,” said the CEO of AI startup Nara Logics.
But in the same article Bart Selman, an AI professor at Cornell University, counters that while that’s a good general guideline, “when it comes to ethical judgments it is also known that there are significant biases in most groups.” Or, as Wired’s reporter opines, “if it turns out you’re training your AI mainly on the perceptions of anxious temp workers, they could wind up embedding their own distinct biases in those systems.”
But of course, there are also some positive possibilities for our AI-enabled future. Travel agent Rachel Neasham told the Times that the speedy bookings by the AI system had freed her up to spend her time doing more creative things. And back at the college prep site, Seiwert reported something similar — that the AI is speeding up her workflow, making it easier to answer questions within 24 hours.
So does this mean the AI revolution won’t be all bad? There actually seemed to be some happy humans too, such as 22-year-old Diane Kim, an AI interaction designer at a New York startup named x.ai, which builds meeting-scheduling assistants.
The Times describes Diane Kim’s job as “part playwright, part programmer, and part linguist.” She writes the text of the emails that the AI sends when scheduling meetings — aiming for polite, professional, friendly and clear. (Putting something on the boss’s “calendar” seemed cold, “and not always appropriately deferential,” so now the AI’s emails say more neutrally that it’s “finding a time.”)
But this also gives Kim a perfect view of precious moments like this — when humans first begin mistaking an AI for a person.
https://twitter.com/MikeE_NZ/status/850062367643779073
The Times shares Kim’s reports about the very un-machine-like responses that their AI scheduler has received from humans — things like “Hey, can you also help me book a flight” (or a hotel, or a conference room). Some people have even asked the AI for its birthday (rather than, say, its version number) — or asked it out on a date. At the end of the day, scheduling a meeting with as few emails as possible is still a problem requiring a tedious series of logical decision trees, which is where AI really shines.
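That “series of logical decision trees” is easier to picture in code. Here’s a minimal, hypothetical Python sketch of how a scheduling assistant might decide its next email; it is not x.ai’s actual logic, and the function and parameter names are invented.

```python
from datetime import datetime
from typing import Optional

def next_step(organizer_free: set,
              guest_free: Optional[set],
              rounds: int) -> str:
    """Return the assistant's next action for one meeting thread."""
    if guest_free is None:
        # No reply yet: open with a few slots the organizer can do.
        options = sorted(organizer_free)[:3]
        return "propose: " + ", ".join(t.isoformat() for t in options)
    overlap = organizer_free & guest_free
    if overlap:
        # A common slot exists: confirm the earliest one and stop emailing.
        return "confirm: " + min(overlap).isoformat()
    if rounds >= 3:
        # Don't loop forever: hand the thread back to the humans.
        return "escalate: no common time after several rounds"
    # Otherwise ask the guest for alternatives and go around again.
    return "ask: please suggest some times that work for you"

# Example: first outreach, then a reply that overlaps on one slot.
slots = {datetime(2017, 5, 8, 10), datetime(2017, 5, 8, 14)}
print(next_step(slots, None, rounds=0))
print(next_step(slots, {datetime(2017, 5, 8, 14)}, rounds=1))
```

The early “confirm” branch reflects the goal the Times describes: settle on a time in as few emails as possible, and hand the thread back to a human once the tree runs out of branches.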
And Kim reports that sometimes the AI assistants even receive something from the humans that they can never truly appreciate.
A thank-you.
WebReduce
- An African bank offers programmers accounts with full API access.
- Can we treat addiction using AI and an app?
- How a vigilante used a drone to stop illegal dumping.
- A French AI is now recording its own album of music.
- Two companies want to sell orchard owners robots that pick fruit.
- Wired’s original executive editor doesn’t believe in “the myth of a superhuman AI.”