Instagram will now start to warn users before they post a “potentially offensive” caption, allowing them to reconsider posting.
Instagram has made the safety of all users on its platform a priority. This week, the company announced it is rolling out an AI-powered tool that analyzes captions in real-time, before they are published, and warns users that they may be offensive.
The implementation is simple: the tool automatically generates a notification letting users know that their caption “looks similar to others that have been reported.”
Instagram will not punish you for posting a “potentially offensive” caption. It will encourage you to reconsider and edit the caption, but it will also give you the option to post it as is.
The new feature is built on the same AI that the company introduced for comments back in July.
Instagram says the new feature is rolling out in “select countries” for now, but it will expand globally over the coming months.