TikTok to Automatically Remove Content That Breaks Company Policy
TikTok has announced new privacy and security measures that will see some types of content automatically removed.
Anything showing nudity, sexual activity, or violence, along with other content that breaks TikTok’s safety policy for minors, will be removed automatically. This is made possible by new technology developed by TikTok that reviews videos as they are uploaded and assesses whether they break the company’s safety policy for minors.
TikTok hopes that automating this process will not only make the removal of indecent content more efficient but also free up staff to concentrate on content that is harder to assess, such as hate speech and misinformation. TikTok explains: ‘we'll begin using technology to automatically remove some types of violative content identified upon upload, in addition to removals confirmed by our Safety team. Automation will be reserved for content categories where our technology has the highest degree of accuracy, starting with violations of our policies on minor safety, adult nudity and sexual activities, violent and graphic content, and illegal activities and regulated goods.’
As should be expected, any account found posting violating content will be deactivated as part of a zero-tolerance policy. However, there will still be some human guidance, meaning the process will not be completely automated, as TikTok explains in a newsroom announcement: ‘Creators will be able to appeal their video's removal directly in our app or report potential violations to us for review, as they can today. [...] Our Safety team will continue to review reports from our community, content flagged by technology, or appeals, and remove violations.’ Leaving the regulation of the user base entirely to an algorithm would be unwise, and TikTok, aware of this, elaborates: ‘While no technology can be completely accurate in moderating content, where decisions often require a high degree of context or nuance, we'll keep improving the precision of our technology to minimize incorrect removals.’
Alongside the improved safety for the TikTok user base, the company hopes the change will also safeguard its Safety team, who will have to watch fewer videos that could cause upset. On this topic, TikTok writes: ‘we hope this update also supports resiliency within our Safety team by reducing the volume of distressing videos moderators view and enabling them to spend more time in highly contextual and nuanced areas, such as bullying and harassment, misinformation, and hateful behavior.’
TikTok originally developed and launched this technology in areas of its business that required extra safety support because of the Coronavirus pandemic. Since then, it has found the feedback on automatically removed content to be consistent, with around 5% of removed videos being appealed.
Alongside introducing the new technology, TikTok has changed the way it notifies users when Community Guidelines have been violated. Under this change, the number of violations a user has accrued will be counted alongside their severity and frequency. The outcome of a violation will be delivered to the Account Updates section of a user's inbox, where past violations, if any, can also be viewed.
On its website, TikTok explains how the violation system works:
- For a first violation, send a warning in the app, unless the violation breaks a zero-tolerance policy, in which case the account is automatically banned.
After the first violation, TikTok will:
- Suspend an account's ability to upload a video, comment, or edit their profile for 24 or 48 hours, depending on the severity of the violation and previous violations.
- Or, restrict an account to a view-only experience for 72 hours or up to one week, meaning the account can’t post or engage with content.
- Or, after several violations, a user will be notified if their account is on the verge of being banned. If the behavior persists, the account will be permanently removed.
The above system works alongside the zero-tolerance approach to content that breaks the safety policy for minors. Further to all this, TikTok may also block devices to prevent banned users from creating new accounts in the future.
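The escalation ladder described above can be sketched as a simple strike-counting function. This is an illustrative model only: the category names, strike thresholds, and action labels below are hypothetical assumptions, not TikTok's actual implementation or values.

```python
from dataclasses import dataclass, field

# Hypothetical category set for zero-tolerance violations (illustrative only).
ZERO_TOLERANCE = {"minor_safety"}


@dataclass
class Account:
    # Running history of non-zero-tolerance violation categories, oldest first.
    violations: list = field(default_factory=list)


def enforcement_action(account: Account, category: str) -> str:
    """Return an enforcement action for a new violation, following the
    warning -> suspension -> final warning -> ban ladder described above."""
    if category in ZERO_TOLERANCE:
        # Zero-tolerance content leads straight to account removal.
        return "permanent ban"
    account.violations.append(category)
    strikes = len(account.violations)
    if strikes == 1:
        return "in-app warning"
    if strikes <= 3:
        # In practice, severity and prior history would decide between a
        # 24/48-hour posting suspension and a 72-hour-to-one-week
        # view-only restriction.
        return "temporary suspension"
    if strikes == 4:
        return "final warning: account near permanent ban"
    return "permanent ban"
```

A usage sketch: a first offence produces a warning, a repeat offence a suspension, while a zero-tolerance category bypasses the ladder entirely.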
TikTok makes a clear point that an appeals process is essential to a proper moderation system: ‘while we strive to be consistent, neither technology nor humans will get moderation decisions correct 100% of the time, which is why it's important that creators can continue to appeal their content's or account's removal directly in our app.’
Ensuring that users are kept safe online should be the very top priority for tech companies and social media websites. Seeing TikTok taking active steps to engage their community and improve the safety of their platform is a welcome sight.