YouTube, responding to crisis over content, will have 10,000 people addressing problems: CEO

In a tacit admission that its much-hyped artificial intelligence still lags behind humans, Google said it would increase the number of people it has addressing offensive and extremist YouTube content to 10,000.

The Mountain View tech giant has been facing a revolt by advertisers over ads paired with disturbing videos, such as those made by hate groups and religious extremists. Subsequent revelations that YouTube was offering up cartoons featuring what the BBC called “animated violence and graphic toilet humor” added fuel to the controversy.

And it turned out that some cute videos posted by kids were attracting trolls of the worst kind, drawing “hundreds of pedophiliac comments, including encouragement to do lewd acts and links to child-abuse content,” according to the Wall Street Journal.


On Dec. 4, YouTube CEO Susan Wojcicki said in a blog post that the company would continue increasing the number of people addressing problematic YouTube content to more than 10,000 next year. It was not immediately clear whether those would be contract workers or actual Googlers.

To be sure, YouTube is more than a cesspool for haters and kooks, and Wojcicki reminded the world of that.

“Our open platform has been a force for creativity, learning and access to information,” she wrote.

“I’ve seen how activists have used it to advocate for social change, mobilize protests, and document war crimes. I’ve seen how it serves as both an entertainment destination and a video library for the world.

“I’ve seen how it has expanded economic opportunity, allowing small businesses to market and sell their goods across borders. And I’ve seen how it has helped enlighten my children, giving them a bigger, broader understanding of our world and the billions who inhabit it.”

However, Wojcicki wrote, “some bad actors are exploiting our openness to mislead, manipulate, harass or even harm.”

To combat the problem, YouTube has tightened its content policies, added to “enforcement teams” and invested in machine-learning technology, she wrote.

YouTube’s content gatekeepers have a big job — they’ve manually reviewed almost 2 million videos for “violent extremist content” since June, Wojcicki wrote, adding that their work is used in training the machine-learning software.

“Our goal is to stay one step ahead of bad actors, making it harder for policy-violating content to surface or remain on YouTube.”

Still, about 400 hours of video are uploaded to YouTube every minute, so it’s not surprising that the company is hoping some human skill and judgment will rub off on its machine-learning algorithms.

 

Photo: A Google data center in Council Bluffs, Iowa (AP Photo/Google, Connie Zhou, File)