Twitter is running a limited test on iOS that asks users to think again and edit a reply that might contain harmful language, before tweeting it.
If you tend to regret the things you say or write when you’re angry, you already know it’s a good idea to take a step back and rethink before writing something that can come back to harm you. To help you do this, Twitter is testing a new feature that gives you a second chance before posting a reply.
The new feature is part of Twitter’s efforts to bring down the amount of harassment on its platform and is probably the closest you’ll get to an edit button – or a second chance before severely harming a relationship or offending people. If you can’t be trusted to say anything civil when angry, this might be just the feature you’ve been waiting for.
It won’t block your reply outright, though. If your reply appears to contain “harmful” language, you’ll get a prompt asking you to rethink it before it’s published; all you have to do is edit your reply before sending it.
Twitter announced the test of the new feature in a Tweet earlier this week, explaining:
“When things get heated, you may say things you don’t mean. To let you rethink a reply, we’re running a limited experiment on iOS with a prompt that gives you the option to revise your reply before it’s published if it uses language that could be harmful.”
— Twitter Support (@TwitterSupport) May 5, 2020
It’s a good idea, actually, but the execution may be a bit flawed: it does nothing against premeditated harassment, which I suspect accounts for most of it. It only helps in “heat of the moment” situations, where an otherwise level-headed and well-meaning person lets anger get the better of them.
One can’t complain, though, as it’s certainly a step in the right direction. How effective it will be in the long run is unknown. While it’s at it, though, Twitter should probably do the same for Tweets – not just replies.
Last year, Instagram started a test for a similar feature, giving users a warning before leaving a comment that might be offensive. In a blog post late last year, Instagram called the results of the test “promising,” and found that “these types of nudges can encourage people to reconsider their words when given a chance.”