TikTok is taking a bold step towards giving users control over the AI-generated content they see on their feeds. The move comes as the platform discloses that more than 1 billion AI-generated videos have been uploaded to it, sparking a crucial conversation about the role of artificial intelligence in content creation.
TikTok is not just reacting to the surge in AI content online; it is actively encouraging it. With new video-generation tools such as OpenAI's Sora and Google's Veo 3 fuelling a rapid rise in AI-generated videos, the platform is embracing the trend rather than shying away from it.
The Guardian's investigation in August uncovered a disturbing trend on YouTube, where nearly one in ten of the fastest-growing channels globally were dedicated solely to AI-generated content. This so-called "AI slop" refers to low-quality, mass-produced videos that often make little sense or are downright surreal.
TikTok's European director of public policy for safety and privacy, Jade Nester, addressed this concern, stating, "We want to give people the power to choose the amount of AI content they engage with, based on their preferences."
Despite the apparent flood of AI content, TikTok notes that more than 100 million pieces of content are uploaded to the platform daily, meaning AI-generated videos remain a small fraction of the whole.
Users can now customise their feeds through the "manage topic" setting by adjusting a new "AI-generated content" option. The control lets them reduce or increase the visibility of AI content, much as they can already filter topics such as current affairs, fashion, beauty and dance.
TikTok's guidelines mandate that creators label "realistic" AI-made content, and it strictly prohibits harmful deepfakes of public figures or crisis events. Any unlabeled realistic AI video can be removed under the platform's community policies.
In a further effort to enhance transparency, the app will now watermark content made with its own AI tools or those identified by the industry-wide C2PA initiative. This move aims to prevent users from bypassing the labeling process.
Additionally, TikTok is launching an AI literacy fund worth $2 million, partnering with experts and organizations like Girls Who Code to create educational content promoting responsible AI usage.
Amid these initiatives, however, TikTok faces criticism over its moderation strategy, particularly its plan to make hundreds of UK-based content moderators redundant. The decision to cut 439 jobs in its London trust and safety team has raised concerns that human moderators are being replaced with AI systems.
TikTok's global head of program management for trust and safety, Brie Pegum, defends this approach, stating, "Human moderation will continue to play a vital role, but AI can protect employees by filtering out the most distressing content before human eyes see it." TikTok claims a 76% decrease in shocking and graphic content seen by human moderators over the past year, thanks to its automated systems.
"We believe in striking a balance between humans and technology to maintain platform safety," Pegum added. "While humans will remain integral to the process, reducing exposure to harmful content as quickly as possible is crucial, and that's where machine support excels."
So, what do you think? Is TikTok's approach a step towards a more personalised and safer online experience, or does it raise deeper concerns about the role of AI in content moderation? We'd love to hear your thoughts in the comments!