Following recent controversy, Facebook is planning to use AI to monitor and automatically flag offensive content on the platform.
Facebook has faced several controversies recently. From removing an iconic Vietnam War photo over nudity to allowing the live broadcast of extremely violent footage, its content moderation has caused outcry across the web. Add to this the ongoing spread of fake news, and Facebook has every reason to react fast.
Historically, Facebook has relied on users to report offensive posts. But a recent announcement by Joaquin Candela, Facebook’s director of applied machine learning, confirmed that the company is now working on a way to use AI and applied machine learning to better moderate content on its platform.
And Facebook Live is the number one target. Live content is notoriously hard to moderate, and Facebook hopes AI will help it monitor broadcasts and react automatically.
Machine-based moderation is still at the research stage, however. Candela explained that the team currently faces two main issues. First, the computer vision algorithm has to be extremely fast; still, Candela seems confident they can make it work. The second issue concerns the escalation process: teaching the algorithm to prioritise correctly, so that content is put in front of the right person when intervention is needed, or when the machine cannot fully understand the context.
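Candela's two issues, speed and escalation, suggest a two-stage design: a fast automated classifier handles the clear-cut cases, and anything the machine cannot confidently judge goes into a priority queue for human review. The following is a minimal, hypothetical sketch of that idea; the classifier, labels, thresholds, and priorities are all illustrative assumptions, not Facebook's actual system.

```python
# Hypothetical two-stage moderation pipeline (illustrative only, not
# Facebook's actual system): a fast classifier scores content, and anything
# it cannot confidently judge is escalated to a human review queue.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class ReviewItem:
    priority: int                          # lower value = reviewed sooner
    content_id: str = field(compare=False)

def classify(content: str) -> tuple[str, float]:
    """Stand-in for a fast ML classifier: returns (label, confidence)."""
    if "violence" in content:
        return "violent", 0.95
    if "nudity" in content:
        return "nudity", 0.60   # ambiguous: could be art or news coverage
    return "ok", 0.99

def moderate(content_id: str, content: str,
             queue: list[ReviewItem]) -> str:
    label, confidence = classify(content)
    if label != "ok" and confidence >= 0.9:
        return "auto-flagged"   # machine is confident: act immediately
    if label != "ok":
        # Machine unsure of context: escalate to a human reviewer.
        heapq.heappush(queue, ReviewItem(priority=1, content_id=content_id))
        return "escalated"
    return "allowed"

queue: list[ReviewItem] = []
print(moderate("post-1", "graphic violence clip", queue))   # auto-flagged
print(moderate("post-2", "nudity in a war photo", queue))   # escalated
print(moderate("post-3", "a cat video", queue))             # allowed
```

The point of the priority queue is the second of Candela's concerns: ambiguous items are not simply dropped or auto-removed, but ordered so the most urgent reach a human first.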
One way or another, the future of Facebook passes through machine learning.