TikTok Now Automatically Removes Violative Content Identified Upon Upload

TikTok is expanding its new automatic content removal system in the US and Canada to identify violative content before anyone sees it.

As part of its effort to maintain a welcoming and safe environment on its platform, TikTok is enhancing its system for detecting and removing content that violates its guidelines. Over the past year, TikTok has been testing ways to identify and remove violative content, as well as notify creators when their content is removed.

Now, the platform is rolling out the experimental system to its communities in the US and Canada.

The US-based Safety team behind the system, which develops and enforces safety strategies across the US and Canada, pairs technology that scans freshly uploaded content for possible violations with manual review: until now, a flagged video was only removed, and its creator notified, after a human confirmed the violation.

With the new system in place, TikTok will automatically remove specific types of violative content, in addition to removals that are manually confirmed by members of its Safety team.

“Automation will be reserved for content categories where our technology has the highest degree of accuracy, starting with violations of our policies on minor safety, adult nudity and sexual activities, violent and graphic content, and illegal activities and regulated goods,” explains Eric Han, Head of US Safety, TikTok. “While no technology can be completely accurate in moderating content, where decisions often require a high degree of context or nuance, we’ll keep improving the precision of our technology to minimize incorrect removals.”
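To make the flow concrete, here is a minimal sketch of how an upload-time moderation pipeline of this kind could be structured: automated scanning at upload, automatic removal only for high-confidence hits in the named policy categories, and manual review for everything else. The category names, confidence threshold, and helper functions are illustrative assumptions, not TikTok's actual implementation.

```python
from dataclasses import dataclass
from typing import Optional

# Policy areas where, per the announcement, detection accuracy is high enough
# for automatic removal. The machine-readable names are illustrative only.
AUTO_REMOVE_CATEGORIES = {
    "minor_safety",
    "adult_nudity_and_sexual_activities",
    "violent_and_graphic_content",
    "illegal_activities_and_regulated_goods",
}

# Hypothetical confidence threshold; TikTok has not published its criteria.
AUTO_REMOVE_THRESHOLD = 0.98


@dataclass
class ScanResult:
    category: Optional[str]  # policy area flagged by the upload scan, if any
    confidence: float        # scanner confidence in the flag, 0.0 to 1.0


def remove_video(video_id: str, reason: str) -> None:
    print(f"removed {video_id} ({reason})")  # stand-in for the real removal call


def notify_creator(video_id: str, reason: str) -> None:
    print(f"notified creator of {video_id} ({reason})")  # stand-in for the in-app notice


def queue_for_manual_review(video_id: str, scan: ScanResult) -> None:
    print(f"queued {video_id} for a human moderator")  # stand-in for the Safety team queue


def moderate_upload(video_id: str, scan: ScanResult) -> str:
    """Route a freshly uploaded video based on the results of the upload scan."""
    if (scan.category in AUTO_REMOVE_CATEGORIES
            and scan.confidence >= AUTO_REMOVE_THRESHOLD):
        # High-confidence hit in an automated category: remove the video before
        # anyone sees it and tell the creator why.
        remove_video(video_id, reason=scan.category)
        notify_creator(video_id, reason=scan.category)
        return "auto_removed"
    if scan.category is not None:
        # Anything else that was flagged still goes to a human moderator.
        queue_for_manual_review(video_id, scan)
        return "pending_review"
    return "published"


# Example: a clearly violative upload never reaches viewers.
moderate_upload("video_123", ScanResult("minor_safety", confidence=0.99))
```

Gating automation on both the category and a confidence threshold mirrors Han's point: technology is trusted to act on its own only where its accuracy is highest, with everything else left to human judgment.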

TikTok is launching the new safety system not only to improve the overall user experience on its platform, but also to lighten its Safety team's workload by automating much of it – especially the manual review of distressing content – freeing moderators to spend more time on less clear-cut moderation work, such as bullying and harassment, misinformation, and hateful behavior.

TikTok’s Transparency Report details how the technology works; it was initially launched to support safety efforts during the COVID-19 pandemic. During the testing phase, TikTok found that the automated system’s error rate was about 5% and that requests to appeal a video’s removal remained consistent.

The platform is also working to ensure that the community understands its guidelines. The notification sent to users who violate the Community Guidelines has been improved so that the person behind the upload understands why the content was removed – and avoids uploading more violative content.

Furthermore, the Account Updates section of a user’s inbox will display the number of violations an account has accrued, together with the explanation for each removal and any consequences of the violation. Accounts with repeated violations will face penalties and receive notifications in various parts of the app.

Finally, severe violations such as content depicting child sexual abuse automatically result in the removal of the account, and TikTok is also looking into flagging the device from which the extreme violation was posted to prevent it from being used to create future accounts.
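A rough sketch of how this escalating enforcement could be modelled follows. The strike limit, the device flag, and the zero-tolerance category set are all assumptions made for illustration; TikTok has not published the exact mechanics.

```python
from dataclasses import dataclass, field

# Violations that end an account on the first confirmed offence. TikTok names
# child sexual abuse content as one such case; the set here is illustrative.
ZERO_TOLERANCE_CATEGORIES = {"child_sexual_abuse"}

# Hypothetical strike limit for repeated, less severe violations.
MAX_STRIKES = 3


@dataclass
class Account:
    user_id: str
    device_id: str
    strikes: int = 0
    banned: bool = False
    violations: list = field(default_factory=list)  # shown under Account Updates


def record_violation(account: Account, category: str, flagged_devices: set) -> None:
    """Apply the consequences of a confirmed violation to an account."""
    account.violations.append(category)  # surfaced to the user with an explanation
    if category in ZERO_TOLERANCE_CATEGORIES:
        # Severe violations remove the account outright and flag the device so
        # it cannot be used to create future accounts.
        account.banned = True
        flagged_devices.add(account.device_id)
    else:
        account.strikes += 1
        account.banned = account.strikes >= MAX_STRIKES


# Example: a single zero-tolerance violation bans the account and its device.
flagged_devices: set = set()
acct = Account("user_42", "device_abc")
record_violation(acct, "child_sexual_abuse", flagged_devices)
print(acct.banned, acct.device_id in flagged_devices)  # True True
```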

