In response to criticism about efforts to combat hate speech, Facebook says that hate speech prevalence has dropped to 5 views per 10,000.
In a recent Newsroom post, Facebook asserts that hate speech prevalence is down by almost 50% on its platform in the last three quarters.
According to the company, “prevalence” is used as the most important metric to analyze the magnitude of hate speech on its platform “because it shows how much hate speech is actually seen…”
According to Facebook’s recent Community Standards Enforcement Report, the prevalence of hate speech on the platform is about 0.05% of all content viewed. In other words, out of every 10,000 views, five will include hate speech.
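In rough terms (Facebook has not published its exact methodology, so the helper below is purely illustrative), prevalence can be thought of as the number of views containing hate speech divided by total content views:

```python
# Illustrative sketch only: assumes prevalence = hate-speech views / total views,
# which matches how the reported figure is described, not Facebook's internal formula.

def prevalence(hate_speech_views: int, total_views: int) -> float:
    """Share of all content views that contained hate speech."""
    return hate_speech_views / total_views

# The reported figure: about 5 hate-speech views out of every 10,000 views.
rate = prevalence(5, 10_000)
print(f"{rate:.2%}")  # prints "0.05%"
```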
The company claims that the technology it uses to reduce the amount of hate speech users see on Facebook effectively picks up policy violations and directs them to reviewers for manual checking.
It also reduces the distribution of content that is likely to be violative, and limits the visibility of people, Pages, and Groups with recurrent hate speech violations.
Facebook attributes its success in reducing the prevalence of hate speech to this same technology, noting that since it was introduced in 2016, the percentage of content removed by the system without relying on user reports has grown from 23.6% to over 97% today.
This means that Facebook is now proactively detecting and removing hate speech content before people report violations.
Another important element of its prevention technology is its ability to reduce the distribution of problematic content. When unsure whether content is hate speech or a user simply describing a negative experience, for example, the system may reduce the content’s distribution.
In addition, when policy or guidelines violations frequently occur within Groups, Pages, or even personal accounts, the system will not recommend them as much. Facebook also uses technology to flag content for further review.
According to Facebook, its threshold for automatically removing content is “high.” It is set this way to avoid making assessment mistakes on content that may look like hate speech but isn’t. The company sees such lapses in judgment as “harming” the very people it is trying to protect.
Lastly, Facebook shares that it has been working with international experts to develop its metrics further, and claims to be the only company that has volunteered to have these independently audited to ensure it is measuring and reporting them accurately.