In a bid to address the growing problem of “synthetic and manipulated media,” Twitter has shared a first draft policy and is asking for public feedback on it.
On social media, it’s important to have clear context for the content you’re viewing and engaging with. But social media is rife with misinformation and manipulation, and that’s becoming a serious problem. One growing issue is “synthetic and manipulated media,” also known as deepfakes.
To address the growing problem deepfakes pose – misleading or confusing people and undermining “the integrity of the conversation” – Twitter last month announced a plan to both define and tackle them. At the time, the company sought public input on new rules; based on that input, it is now releasing the first draft of its policy on synthetic and manipulated media.
Synthetic and manipulated media defined
Based on its initial conversations, Twitter proposes to define “synthetic and manipulated media as any photo, audio, or video that has been significantly altered or fabricated in a way that intends to mislead people or changes its original meaning.” These are “sometimes referred to as deepfakes or shallowfakes.”
What Twitter will do when it sees synthetic and manipulated media
Depending on the situation, Twitter may:

- place a notice next to Tweets that share synthetic or manipulated media;
- warn people before they share or like Tweets with synthetic or manipulated media; or
- add a link – for example, to a news article or Twitter Moment – so that people can read more about why various sources believe the media is synthetic or manipulated.
Furthermore, if Twitter identifies a Tweet containing a deepfake that “could threaten someone’s physical safety or lead to other serious harm,” it may remove the Tweet.
What you can do
Twitter has prepared a survey – available in English, Hindi, Arabic, Spanish, Portuguese, and Japanese – that you can fill out to provide feedback on the draft policy. If you prefer, you can instead Tweet your input using the #TwitterPolicyFeedback hashtag.
Finally, if you’d like to partner with Twitter to help develop solutions for the detection of synthetic and manipulated media, you can also fill out this form.
As Del Harvey, VP of Trust & Safety at Twitter, explains in a recent post, “the feedback period will close on Wednesday, Nov. 27 at 11:59 p.m. GMT. At that point, we’ll review the input we’ve received, make adjustments, and begin the process of incorporating the policy into the Twitter Rules, as well as train our enforcement teams on how to handle this content. We will make another announcement at least 30 days before the policy goes into effect.”