Instagram will now start to warn users before they post a “potentially offensive” caption, allowing them to reconsider posting.
Instagram has made the safety of users on its platform a priority. This week, the company announced it is rolling out an AI-powered tool that analyzes captions in real time, before they are published, and warns users when a caption may be offensive.
The implementation is simple: Instagram automatically generates a notification letting users know that their caption “looks similar to others that have been reported.”
Instagram will not punish you for posting a “potentially offensive” caption. It will encourage you to reconsider and edit the caption, but it will also give you the option to post it as is.
The new feature is built on the same AI that the company introduced for comments back in July.
Instagram says the new feature is rolling out in “select countries” for now, but it will expand globally over the coming months.