Taylor Swift Deepfakes Highlight the Need for New Legal Protections

Deepfake pornographic images of Taylor Swift have been shared across the social media platform X, highlighting the lack of digital privacy protections for victims across the globe. 

It isn’t known who generated the fake images of Swift, which have been viewed tens of millions of times since Wednesday. On Friday, X said its team was working to remove all non-consensual nudity from the site, which is “strictly prohibited.”

“We're committed to maintaining a safe and respectful environment for all users,” the company said. Swift has not publicly commented on the matter. 

Swift may be the latest celebrity target of deepfakes, but Carrie Goldberg, a New York City-based lawyer who works with victims of tech abuse, says that she’s seen an increase in children and non-celebrities falling victim to this form of online abuse over the past decade. “Our country has made so much progress in banning the non-consensual dissemination of nude images, but now deepfakes are sort of filling in that gap [of legal protections],” Goldberg says.

Deepfakes—manipulated media files that depict a false image or video of a person—have become increasingly common over the past few years. An analysis by Wired found that in the first nine months of 2023, at least 244,635 deepfake videos were uploaded to the top 35 websites that host deepfake pornography.

Ten states, including Virginia and Texas, have criminal laws against deepfakes, but there is currently no federal law in place. In May 2023, Rep. Joe Morelle, a Democrat from New York, introduced the Preventing Deepfakes of Intimate Images Act to criminalize the non-consensual sharing of sexual deepfake images online. The bill was referred to the House Committee on the Judiciary, but has not seen any progress since. In January, legislators also introduced the No Artificial Intelligence Fake Replicas And Unauthorized Duplications (No AI FRAUD) Act, which would protect Americans from having their images and voices manipulated.

Advocates warn that this issue poses a particular risk to young women, who are overwhelmingly the victims of deepfakes. “Deepfakes are a symptom of a broader problem [of] online violence against women and girls that has historically not been prioritized by tech companies and society,” says Adam Dodge, founder of Endtab (Ending Tech-Enabled Abuse), a digital safety education and training company for victims of online harassment. “I'm hopeful that this Taylor Swift attack shines a bright enough light on an issue that's been around for years that we actually see action to prevent and hold accountable the people that are creating and sharing these images.”

Legal protections for victims 

Deepfakes, which Dodge describes as a “form of face-swapping,” are alarmingly easy to make. Users don’t need any experience with coding or AI to generate them. Instead, online platforms can generate them for users with just a few clicks and the submission of a photo or video. Deepfakes can be used for explicit content, but they can also be used to generate false audio messages that have the potential to disrupt elections, for instance.

Experts warn that there is an expansive system of companies and individuals that benefit from and could be liable for deepfakes. “Starting at the very top, there's a search engine where you can search ‘How do I make a deepfake’ that then will give you a bunch of links,” Goldberg says. “There's the products themselves which exist just for malicious purposes…the individual who's actually using the product to create the database, and then the viewers who might be [sharing] it.”

Dodge says that because the internet facilitates the spread of content so quickly—Swift’s deepfakes, for instance, had more than 27 million views and 260,000 likes in 19 hours, NBC News reports—it’s nearly impossible to remove all fake content from the internet. “It’s deeply concerning when time is of the essence and every second that that image is up it's getting shared and downloaded at an exponential rate,” he says. Companies like Google and X ban the sharing of misleading media, but may still be slow to act or take down the media files.

Holding social media platforms legally liable for the dissemination of deepfakes is difficult due to protections under Section 230 of the Communications Decency Act. The law says that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider,” meaning platforms like Instagram or Facebook are not responsible for the third-party content uploaded on their site. 

Goldberg, however, says it's possible to hold a company responsible if there’s a unique feature that allows that platform to perpetuate harm. That is how Goldberg won a case in November 2023 that shut down Omegle, an online chat room that allowed anonymous video streaming, for facilitating child sex abuse.

Still, Dodge warns that the U.S. lacks the infrastructure needed to properly help victims of deepfakes. “Law enforcement is not properly trained or staffed to go after these anonymous attackers and as a result, victims who experienced this meet roadblocks to justice really quickly,” he says. Part of that is because investigators may not understand how deepfakes function; Dodge says that many victims he’s spoken to have had to take on the burden of figuring out how to remove the images themselves.

The solution, experts say, would require the law to stop protecting companies that profit off of these sorts of images and videos, especially since they are so easy to generate. “We can't keep somebody from taking our photograph…you can't blame the victim here,” Goldberg says. “All they've done is exist.”
