Instagram will now start to warn users before they post a “potentially offensive” caption, allowing them to reconsider posting.
Instagram has made the safety of all users on its platform a priority. This week, the company announced it is rolling out an AI-powered tool that analyzes captions in real-time, before they are published, and warns users that they may be offensive.
The implementation is simple: when a caption is flagged, the app automatically generates a notification letting users know that it “looks similar to others that have been reported.”
Instagram will not punish you for posting “potentially offensive” captions. Instead, it encourages you to reconsider and edit the caption; however, it also gives you the option to post it as is.
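Instagram has not published details of how the comparison works, but the idea of warning about text that “looks similar” to previously reported text can be illustrated with a toy sketch. The code below is purely hypothetical and assumes nothing about Instagram's actual model: it compares a new caption against a list of reported captions using Jaccard similarity over word sets and warns when the overlap crosses a threshold.

```python
# Hypothetical sketch only -- NOT Instagram's actual system.
# Flag a caption when it closely resembles previously reported captions,
# using Jaccard similarity over lowercase word sets.

def jaccard(a: set, b: set) -> float:
    """Return the Jaccard similarity (|intersection| / |union|) of two sets."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def should_warn(caption: str, reported: list, threshold: float = 0.6) -> bool:
    """Warn when the caption's word set overlaps enough with any reported caption."""
    words = set(caption.lower().split())
    return any(
        jaccard(words, set(r.lower().split())) >= threshold
        for r in reported
    )

# Example data (invented for illustration):
reported_captions = ["you are so dumb", "nobody likes you"]
matching = should_warn("you are so dumb", reported_captions)      # True
harmless = should_warn("great photo love it", reported_captions)  # False
```

A real system would use a trained classifier rather than word overlap, but the user-facing behavior is the same: a match triggers a warning, and the user still decides whether to post.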
The new feature is built on the same AI that the company introduced for comments back in July.
Instagram says the new feature is rolling out in “select countries” for now, but it will expand globally over the coming months.