
Can YouTube’s Algorithms Identify Safe-for-Children Videos?

12 Nov 2017 6:00am

Google’s YouTube is confronting the ultimate scaling problem. Every minute of every day, 400 hours of content are uploaded to the service, according to a recent article in the New York Times. To weed out inappropriate videos, YouTube has been trying to augment its team of human reviewers with video-judging algorithms. But this week the company also took new steps to eliminate more of the incentives for creating inappropriate videos — suggesting that Google’s algorithms are confronting somebody else’s algos — with poor humans caught in the middle.

YouTube’s actions were triggered by new reports of wildly inappropriate videos finding their way into the YouTube Kids app for children. Back in March the BBC had discovered YouTube was hosting hundreds of disturbingly violent videos that misappropriated popular characters from children’s entertainment, adding that “The YouTube Kids app filters out most — but not all — of the disturbing videos.” At the time YouTube had responded that “We appreciate people drawing problematic content to our attention, and make it easy for anyone to flag a video.” YouTube pointed out that flagged videos are removed “within hours,” adding that “We take feedback very seriously.”

But last week the New York Times reported that the violent videos on YouTube were still finding their way into the YouTube Kids app, and sometimes winding up in front of young children — to the horror of their parents. The Times called it “another example of the potential for abuse on digital media platforms that rely on computer algorithms, rather than humans, to police the content that appears in front of people — in this case, very young people.”

So on Thursday YouTube announced another key change: flagged videos will soon instantly disappear from the YouTube Kids app (rather than lingering on the service awaiting review). And children searching for videos won’t find any flagged videos unless they’re logged into an account registered to a user over the age of 18.

YouTube says they’re “in the process” of implementing this fix.

The company hopes this will stop inappropriate videos from appearing on its kid-friendly app, according to The Verge. “YouTube says it typically takes at least a few days for content to make its way from YouTube proper to YouTube Kids, and the hope is that within that window, users will flag anything potentially disturbing to children.”

The company is also moving to remove the incentives for creating violent videos. Last summer YouTube added a new section to its content guidelines warning that videos were “not suitable for advertising” if they contained “Inappropriate use of family entertainment characters,” whether animated or live action, “even if done for comedic or satirical purposes.” This week YouTube strengthened that policy, stopping videos from earning any ad money the second a user flags them for review.

All this activity points to a sad truth. Besides its internal employees and its algorithms, YouTube relies on volunteer content reviewers and reports from regular YouTube users to help identify videos that shouldn’t be shown to kids. So in effect, at least some of the reviewing has been crowdsourced.

But does it scale?

Algorithms also screen uploaded videos for copyright violations, according to a recent article in the Mercury News, and YouTube brags that 98 percent of the time it can automatically identify copyrighted material.

Other companies are also dealing with scale issues. Snapchat has 178 million users, many of them teenagers, and this week Bloomberg reported that the company is using both human employees “and automated systems” to try to protect its young users from adult predators (though “it wouldn’t provide details”). And Facebook is testing a program in Australia that will let users stop others from distributing their intimate photos on Facebook and Instagram — if the users themselves deliver those photos to Facebook. For each photo a hash — basically a numeric fingerprint — will be entered into a database, so that Facebook will know which photos to stop others from distributing.
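Facebook hasn’t published the details of that system, but the basic idea is straightforward: fingerprint a known image once, then check every new upload against the stored fingerprints. Here’s a minimal sketch in Python; the function names and in-memory “database” are invented for illustration, and a production system would presumably use a perceptual hash (so resized or re-encoded copies still match) rather than the exact-match cryptographic hash used here.

```python
import hashlib

# Stand-in for the database of fingerprints described in the report.
# A real system would use a persistent store and a perceptual hash;
# SHA-256 is used here only to keep the sketch self-contained.
blocked_hashes = set()

def fingerprint(image_bytes: bytes) -> str:
    """Return a fixed-length 'numeric fingerprint' of an image."""
    return hashlib.sha256(image_bytes).hexdigest()

def register_blocked_photo(image_bytes: bytes) -> None:
    """Store the fingerprint of a photo the user wants blocked."""
    blocked_hashes.add(fingerprint(image_bytes))

def allow_upload(image_bytes: bytes) -> bool:
    """Reject any upload whose fingerprint matches a blocked photo."""
    return fingerprint(image_bytes) not in blocked_hashes

if __name__ == "__main__":
    # The user submits the photo once; later uploads of the identical
    # file are refused, while unrelated images pass through.
    original = b"...raw bytes of the sensitive photo..."
    register_blocked_photo(original)
    print(allow_upload(original))          # False: exact copy is blocked
    print(allow_upload(b"another photo"))  # True: unrelated image passes
```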

But sometimes tech companies find their automation has to battle automation from other bad actors, according to a recent essay in Slate. “Whenever you find an algorithm making high-stakes decisions with minimal human supervision — that is, decisions that determine whose content is widely viewed, and therefore who makes money — you will find cottage industries of entrepreneurs devising ever subtler ways to game it.”

Last year Amazon discovered an author had created 83,999 fake Amazon accounts — tracked by a massive database hosted on Microsoft Azure — and was using them to fool Amazon’s algorithms into featuring his ebooks at the top of their best-seller lists. And back in 2012, German artists even uploaded computer-generated books, consisting of nothing but comments they’d cut-and-pasted from YouTube, to Amazon’s Kindle Store. The incident called attention to the fact that Amazon apparently wasn’t actually reading self-published books before offering them to the general public. Like YouTube, Amazon swiftly removed the problematic material — but only after its users discovered it and brought it to Amazon’s attention.

Whatever the solution, it seems inevitable that automation will be involved.

“The problem cannot be solved by humans and it shouldn’t be solved by humans,” Google’s business chief Philipp Schindler told Bloomberg back in April. This year major advertisers began realizing that their brands were appearing on extremist videos with hate speech, and the resulting uproar caused Google to take a closer look at how it reviewed videos uploaded by users. “We switched to a completely new generation of our latest and greatest machine-learning models,” Schindler told Bloomberg, adding, “We had not deployed it to this problem because it was a tiny, tiny problem. We have limited resources.”

And now one Chinese firm is marketing a new-and-improved automated screener to other Chinese companies, claiming to offer real-time recognition of forbidden content using cloud-based deep learning and AI. The company’s founder says the service has already screened over 100 billion images — and now handles more than 900 million every day.

A new essay on Medium delved into the implications of it all. “We have built a world which operates at scale, where human oversight is simply impossible,” wrote technology journalist James Bridle. Yet there are still disturbing children’s videos on YouTube, Bridle writes, and “no manner of inhuman oversight will counter most of the examples I’ve used in this essay.” He points out it’s not just violent children’s videos; the same issue arises with misinformation and conspiracy theories, as well as violence and hate speech. “What concerns me is that this is just one aspect of a kind of infrastructural violence being done to all of us, all of the time, and we’re still struggling to find a way to even talk about it, to describe its mechanisms and its actions and its effects.”

And Slate says the whole episode highlights “the problem with Silicon Valley’s playthings,” raising a much larger question. “Can anything control the massive platforms that now shape our lives?” An essay by its senior technology editor warns that many crucial decisions are now already being handled by algorithms, “And those algorithms, we’re gradually learning, are not always worthy of our trust.”

Instead of YouTube, we could just as easily be looking at Google, Facebook, Spotify, Amazon, or Netflix. “All have taken tasks once done by humans (librarians, scrapbookers, DJs, retail clerks, video-store managers—and, let’s not forget, advertising salespeople) and found ways to do them automatically, instantly, and at close to zero marginal cost. As a result, they’re taking over the world, and making enormous profits in the process,” he wrote.

There are signs that tech companies recognize they’re losing our trust. “So far, however, there are no signs that it’s solvable.” He concludes that these massively popular platforms are too large to monitor effectively. “Their whole businesses are built on the premise that algorithms can make decisions on a scale, and at a speed, that humans could never match. Now they’re pledging to fix those algorithms’ flaws with a few thousand contractors here or there. The numbers don’t add up.”

What’s the solution? One comment on the New York Times story suggested the PBS Kids app, which only shows educational videos produced by PBS. Another comment suggested an even more educational activity for kids. “Take ‘em to the library, check out a bunch of age-appropriate reading or picture books, and begin a love of reading that will hopefully last a lifetime.”

But my favorite response suggested letting kids be entertained by the real world. “An easy solution to this is to turn off the screen. Seriously. My son is 4 and gets ‘screen time’ for an hour and a half on Sundays (usually a movie or an episode or two of a show he likes that we approve of). My daughter is a year and gets no screen time.

“Is it really that hard for parents to just not turn it on?”



Google is a sponsor of The New Stack.

Feature image: Peppa Pig Crying at the Dentist Doctor Pull Teeth! 

