TikTok's ad tool allows AI actors to recite unmoderated content
What's the story
TikTok inadvertently posted a link to an internal version of its new AI digital avatar tool, Symphony Digital Avatars, allowing users to produce videos that can say virtually anything.
The mistake was first spotted by CNN, which used the tool to generate videos with inappropriate content, including quotes from Hitler and messages encouraging people to drink bleach.
The company has since removed this version of the tool while the official version remains available for use.
About the tool
Symphony Digital Avatars tool: Purpose and misuse
Launched earlier this week, the Symphony Digital Avatars tool lets businesses create ads using AI-powered dubbing technology: advertisers input a script, and AI avatars recite that content within TikTok's guidelines.
The tool was meant for users with a TikTok Ads Manager account. However, due to a technical error, it became accessible to anyone with a personal account, which led to the generation of videos containing harmful messages and inappropriate quotes.
Error rectification
TikTok rectifies error and addresses potential misuse
TikTok spokesperson Laura Perez confirmed that the company has corrected the "technical error" which "allowed an extremely small number of users to create content using an internal testing version of the tool for a few days."
As reported by CNN, the internal tool also allowed the outlet to generate videos that recited Osama bin Laden's "Letter to America," a white supremacy slogan, and a message telling people to vote on the wrong day.
Misuse concerns
Concerns raised over digital avatar creators' misuse
Despite TikTok's swift action in removing the internal version of its tool, the incident has raised questions about potential misuse of digital avatar creators.
The videos generated using the internal tool did not have a watermark indicating they were AI-generated.
While CNN didn't post the videos created, Perez stated that if such videos had been posted, they "would have been rejected for violating our policies."
Twitter Post
The tool poses a risk for creating deepfakes, say experts
The quality on the videos were really high. Deep fake expert Hany Farid said they were some of the best of that type he's seen.
Some of them were hard to spot as AI had no markers indicating they are AI (we added the ones included here). pic.twitter.com/nl6AqJFkCd
— Jon Sarlin (@jonsarlin) June 21, 2024