A recent study by ProPublica found that Facebook's human reviewers enforce the platform's rules on hate speech inconsistently.
Apart from "bad actors" interfering with elections and the rise of fake news on Facebook, the amount of hate speech and other objectionable material on the platform also skyrocketed in 2017. The company increased its review staff significantly during the year and is now using every means available – human or machine – to weed out and remove offensive content. One of its targets is hate speech, and it seems Facebook isn't doing a very good job of it.
ProPublica recently ran a study of 900 posts on Facebook and found inconsistencies in how those posts were handled or removed. The study states that "based on this small fraction of Facebook posts, its content reviewers often make different calls on items with similar content, and don’t always abide by the company’s complex guidelines…"
ProPublica’s study also explained that “even when they do follow the rules, racist or sexist language may survive scrutiny because it is not sufficiently derogatory or violent to meet Facebook’s definition of hate speech.”
Facebook acknowledged that its reviewers had made mistakes in almost half of the 49 posts ProPublica highlighted in its study. It did, however, defend its decisions on 19 other examples, and excluded 8 because of incorrect flags or insufficient information.
Apologising on behalf of the social networking giant, Justin Osofsky, vice president of Global Operations and Media Partnerships, responded: "we’re sorry for the mistakes we have made — they do not reflect the community we want to help build," adding that Facebook "must do better." As part of Facebook's commitment to purging hate speech from the platform, Osofsky also promised that the company will expand its review staff to 20,000 in 2018.