Research from Cambridge University shows that polite suspension warnings are effective in reducing hate speech on Twitter.
According to a study published online by the Cambridge University Press on Monday, issuing polite suspension warnings to users who engage in hate speech on Twitter can reduce the undesirable behavior by 10% or even 20% in some cases.
The study, conducted on behalf of the University’s Center for Social Media and Politics, concluded that the best way to minimize hate speech is to ask people nicely to stop by issuing targeted warnings about the consequences they face if they continue engaging in the behavior.
Researchers found that such warnings can help prevent hate speech on the platform before it occurs, and that this mitigating practice may reduce the harm caused by hate speech on Twitter.
“Our results show that only one warning tweet sent by an account with no more than 100 followers can decrease the ratio of tweets with hateful language by up to 10%,” the authors conclude.
Interestingly, the research found that the “more politely phrased” the warning tweets were, the stronger the mitigating effect, with results showing up to a 20% decrease in hate speech following the receipt of a polite warning.
After setting up accounts intended for sending the warnings, the researchers identified and messaged users whose accounts were at risk of being suspended for violating Twitter’s guidelines on hate speech, and who also followed at least one user who had recently been suspended for the same violation.
Their messages informed the users about the impacts of their language and their potentially pending suspension, citing the suspended user they followed as a reinforcing example for making the potential consequences more vivid.
“The user @account you follow was suspended, and I suspect this was because of hateful language,” reads an example of one such message. “If you continue to use hate speech, you might get suspended temporarily.”
The study strongly encourages Twitter to step up its efforts to issue preventive warnings to perpetrators of hate speech, finding that messages that “appear legitimate in the eyes of the target user seem to be the most effective.”

For this reason, the research highlights “the policy implications of platforms adopting a more aggressive approach to warning users that their accounts may be suspended as a tool for reducing hateful speech online.”