X blocks 'Taylor Swift' searches as deepfakes raise concerns
Sexually explicit deepfake images of Taylor Swift have caused a stir on social media, prompting United States politicians to call for action. The images surfaced recently on X, which has since removed the content and limited searches for related terms. Per BBC, one such explicit image of Swift racked up 47 million views before being taken down. Currently, no US federal law regulates the creation or sharing of deepfakes, though some states are beginning to address the growing concern.
Why does this story matter?
Deepfakes leverage artificial intelligence (AI) to create manipulated images and videos of individuals, altering their facial features or physique. A recent study indicated a 550% surge in the creation of such manipulated content since 2019. Pornographic material constitutes the vast majority of deepfakes shared online, with women accounting for 99% of those depicted, according to last year's State of Deepfakes report. Consequently, there is growing demand among US politicians for legislation to outlaw the production of deepfake content.
Democratic representatives call for urgent action on deepfakes
Joe Morelle, a Democratic US Representative from New York, described Swift's deepfakes as "appalling." He highlighted that such images can lead to "irrevocable emotional, financial, and reputational harm," with women being disproportionately affected. Notably, Morelle introduced the Preventing Deepfakes of Intimate Images Act last year, a bill that would make sharing deepfake pornography without consent illegal. Another Democratic New York representative, Yvette Clarke, tweeted that the issue should bring all political parties, and even Swift's fans, together to find a solution.
Republican Congressman voiced apprehensions about AI technology
Republican Congressman Tom Kean Jr. from New Jersey said it was "clear that AI technology is advancing faster than the necessary guardrails." He added, "Whether the victim is [Swift] or any young person across our country, we need to establish safeguards to combat this alarming trend."
X to take 'appropriate action' against accounts spreading Swift's deepfakes
In response to Swift's explicit deepfake, X announced it would take "appropriate actions" against the accounts responsible for spreading the content. The Elon Musk-owned company stated, "We're closely monitoring the situation to ensure that any further violations are immediately addressed, and the content is removed." Subsequently, the X account that posted the deepfake images has reportedly been deactivated by the user. While Swift has not spoken publicly about the images, Daily Mail reported that her team was "considering legal action."
X has also blocked Swift-related searches on the platform
X has taken measures to render Swift's name unsearchable in a bid to curb the spread of the viral deepfakes. Searches for "Taylor Swift" on the platform have been temporarily suspended. Additionally, queries containing terms such as "Taylor Swift AI" and "Taylor AI" return error pages. Nonetheless, some related search terms still yield results.
AI-generated deepfakes not limited to images
Separately, an investigation was launched last week into a fraudulent robocall claiming to be from US President Joe Biden. The fabricated audio message advised New Hampshire voters not to participate in their state's primary election. The technology used to mimic Biden's voice was initially unclear, but the deepfake audio was later traced to voice-cloning tools developed by ElevenLabs. The incident heightened concerns, as it sought to mislead voters and manipulate the upcoming US elections.