Have you ever wondered just how much “bad stuff” goes up on Facebook, and how the platform enforces its Community Standards? Well, now you can take a good look at the numbers. Published for the first time ever, they are a real eye-opener.
There’s a lot of “bad stuff” out there on Facebook, and the sheer size of the platform makes it pretty hard to moderate what stays up and what comes down. Reviewers often have just a few seconds to decide, and that’s where the problem often lies. Facebook has simply grown way too big for its own good and finds it really hard to enforce its Community Standards. However, it is enforcing them, and it has the numbers to prove it.
Three weeks ago, the company published the internal standards it uses to decide whether something stays up or is taken down; now, for the first time ever, it has published the numbers behind its enforcement of those standards. The report, which spans Facebook’s enforcement efforts from October 2017 to March 2018, covers six areas that are also part of its Community Standards for easy reference: graphic violence, adult nudity and sexual activity, terrorist propaganda, hate speech, spam, and fake accounts.
The numbers in the report show how much content people saw that violated those standards; how much content Facebook removed; and how much content it detected proactively using AI, before people reported it.
Most of the action it took to remove the “bad stuff” involved spam. In fact, Facebook took down 837 million pieces of spam in Q1 2018 — nearly 100% of which was found and flagged before anyone on the platform was able to report it. Facebook also disabled about 583 million fake accounts that spread the spam in question. Most of these were “disabled within minutes of registration.” These fake accounts join the millions that Facebook prevents from registering for its platform. Still, Facebook estimates that “around 3 to 4% of the active Facebook accounts on the site during this period were still fake.”
In Q1 2018, Facebook also took down “21 million pieces of adult nudity and sexual activity” — 96% of which was found and flagged by AI before users reported it. The company estimates that out of every 10,000 pieces of content viewed on its platform, “7 to 9 views were of content that violated our adult nudity and pornography standards.”
In the same period, Facebook removed 3.5 million pieces of violent content — 86% of which was identified by AI before users reported it. In terms of hate speech, Facebook says its “technology still doesn’t work that well,” so flagged content still needs to be checked by its review teams. Yet, it removed 2.5 million pieces of hate speech in Q1 2018 — 38% of which was flagged by its “technology.”
You can access the first Community Standards Enforcement Report here.