Twitter is running a limited test on iOS that asks users to think again and edit a reply that might contain harmful language, before tweeting it.
If you tend to regret many of the things you say or write when you’re angry, then you will already know that it’s probably a good idea to take a step back and rethink things before writing something that can come back and harm you. To help you do this, Twitter is testing a new feature that gives you a second chance before posting a reply.
The new feature is part of Twitter’s efforts to bring down the amount of harassment on its platform and is probably the closest you’ll get to an edit button – or a second chance before severely harming a relationship or offending people. If you can’t be trusted to say anything civil when angry, this might be just the feature you’ve been waiting for.
It doesn’t block your reply outright, though. Before the tweet is published, you’ll get a prompt to rethink your “harmful” language; all you have to do is edit your reply before sending it.
Twitter announced the test of the new feature in a Tweet earlier this week, explaining:
“When things get heated, you may say things you don’t mean. To let you rethink a reply, we’re running a limited experiment on iOS with a prompt that gives you the option to revise your reply before it’s published if it uses language that could be harmful.”
— Twitter Support (@TwitterSupport) May 5, 2020
It’s a good idea, but the execution may be a bit flawed: it does nothing against premeditated harassment, and I would argue that most harassment falls into that category. It only works in “heat of the moment” situations, where an otherwise level-headed and well-meaning person lets anger get the better of them.
One can’t complain, though, as it’s certainly a step in the right direction. How effective it will be in the long run is unknown. While it’s at it, though, Twitter should probably do the same for tweets – not just replies.
Last year, Instagram started a test for a similar feature, giving users a warning before leaving a comment that might be offensive. In a blog post late last year, Instagram called the results of the test “promising,” and found that “these types of nudges can encourage people to reconsider their words when given a chance.”