Social media platform X, formerly Twitter, has blocked searches of Taylor Swift’s name to crack down on the proliferation of sexually explicit deepfakes of the pop star, revealing the challenges of tackling such content.
“This is a temporary action and done with an abundance of caution as we prioritize safety on this issue,” Joe Benarroch, head of business operations at X, tells TIME in an email Sunday. At the time of publication, searches for Taylor Swift’s name on X were still blocked.
On Friday, X’s Safety account on the platform posted that non-consensual nudity is strictly prohibited and the company has a “zero-tolerance policy towards such content.” Teams were “actively removing all identified images and taking appropriate actions against the accounts responsible for posting them,” the statement read. “We're committed to maintaining a safe and respectful environment for all users.”
TIME has reached out to Swift’s publicist for comment.
The Swift deepfakes have spotlighted the issue of AI-generated pornography, in which an individual’s face is superimposed onto an explicit picture or video without their consent, and the need for stronger legal protections as regulation struggles to keep pace with the technology. As X acknowledged, blocking searches is a stopgap measure. While it has worked to some effect in this particular case, it is not a long-term fix for a problem of this scale.
Ninety-six percent of all deepfakes online in 2019 were nonconsensual pornography, almost all of it depicting women, the analytics firm Sensity found, and the FBI warned last year that such content can be used for sextortion and criminal financial gain.
“It is alarming,” White House Press Secretary Karine Jean-Pierre told reporters on Friday when asked about the Swift deepfakes. “While social media companies make their own independent decisions about content management, we believe they have an important role to play in enforcing their own rules to prevent the spread of misinformation and non-consensual intimate imagery of real people.”
While 10 states criminalize deepfakes, no federal law does, and legislators in Congress are pushing to outlaw the content.
Rep. Joe Morelle, a Democrat from New York, introduced the Preventing Deepfakes of Intimate Images Act last year, but the bill has not moved forward yet. Another New York Democrat, Rep. Yvette Clarke, introduced the DEEPFAKES Accountability Act last year to protect against national security threats and provide legal recourse to victims.
In January, legislators introduced the No Artificial Intelligence Fake Replicas And Unauthorized Duplications (No AI FRAUD) Act to protect Americans from AI manipulations of their voice and image.
In response to a reporter’s question about proposed legislation, Jean-Pierre said “there should be legislation, obviously, to deal with this issue,” but noted that President Joe Biden had already taken action by issuing an executive order on how his administration would govern generative AI and by creating a task force to tackle online harassment.