We’ve said it before and we’ll say it again: YouTube has a serious problem with inappropriate content. The platform continues to lose advertisers because of it, so now it’s scrambling to fix things.
YouTube has been heavily criticised for failing to remove inappropriate content from its platform, particularly on YouTube Kids. Because of its inability to fix the problem, big advertisers have been pulling their budgets, leaving the company with two big questions: how to police content more effectively, and how to win back advertiser trust. The latter may, of course, take care of itself if YouTube can prove that it has sorted out the former. Either way, here’s what YouTube is doing to fix things.
The first is to increase the number of human reviewers who remove content and train YouTube’s machine learning systems. Its current team has manually reviewed 2 million videos for violent extremist content since June, which is obviously far from enough. As such, in 2018 the total number of people across Google reviewing content will rise to over 10,000. In the same vein, the company will expand its network of academics, industry groups, and experts, who help it better understand emerging issues.
The second is to use machine learning more widely to flag and remove content that contravenes its guidelines. Using this technology, YouTube has been able to remove over 150,000 videos for violent extremism since June 2017. The technology has helped YouTube’s human reviewers remove around five times more videos than before. In fact, 98% of videos that are removed from the platform for violent extremist content are flagged by algorithms.
YouTube says the machine learning work done since June is the equivalent of 180,000 people working 40 hours per week. Speed is of the essence here, as YouTube tries to take down offending content as fast as possible. Currently, machine learning allows the platform to identify and take down almost 70% of violent extremist content within 8 hours of upload, and nearly 50% within 2 hours.
Positive results with violent extremist content have led YouTube to extend its machine learning to other areas, such as child safety and hate speech.
In addition to the above, YouTube is pushing for greater transparency about how it tackles problematic content. In 2018 the company will begin issuing regular reports with information about the flags it receives and the actions it takes to remove videos and comments.
Finally, YouTube is taking further action to protect advertisers and creators. Advertisers need to know that their ads run next to content that reflects their ideals and brand values, while creators need to feel confident that their revenue is protected from “bad actors.” To that end, YouTube is applying “stricter criteria,” conducting “more manual curation,” and growing its team of ad reviewers to ensure that ads run where they should.