Minutes after the presidential debate ended on Tuesday, Taylor Swift mobilized her enormous fanbase in support of Kamala Harris by endorsing her in an Instagram post that quickly garnered 8 million likes. Swift’s decision wasn’t altogether surprising, given that she supported Joe Biden in the 2020 election and recently offered hints, in true Taylor fashion, that she was headed in this direction.
But what was especially notable in her Instagram post was that it spent as much time praising Kamala Harris as it did warning the public about the dangers of AI.
“Recently I was made aware that AI of ‘me’ falsely endorsing Donald Trump’s presidential run was posted to his site. It really conjured up my fears around AI, and the dangers of spreading misinformation,” Swift wrote. “It brought me to the conclusion that I need to be very transparent about my actual plans for this election as a voter. The simplest way to combat misinformation is with the truth.”
Swift was referring to a post from Trump in August on Truth Social, his social media site, which appeared to show the superstar and her fans endorsing him. He captioned the photo with: “I accept.” But the images looked glossy and had strange visual details, because they were created with AI.
Many viewers of the images were able to immediately identify them as fabricated. And following Swift’s post, it appears that her response refuting the images had a greater impact than the AI images themselves. But the incident could be a harbinger of plenty of AI-driven conflict in elections for years to come.
“We are already in a bit of a crisis where a lot of American voters don’t trust elections,” says Craig Holman, a government affairs lobbyist at the nonprofit Public Citizen. “If we’re going to have this type of campaign going on all around us, feeding us information that doesn’t exist, trying to influence our votes based on that—the entire integrity of elections is very much at risk.”
Deepfakes proliferate around celebrities, elections
During the 2020 presidential election, AI tools were still largely rudimentary. In the time since, the capabilities of these tools have improved at an astounding clip. Users around the world can now use AI to create realistic images, video, and audio. Fake social media profiles that spread propaganda can be created cheaply; political parties can use AI to quickly send personalized messages to thousands of potential voters; and fake event photography and even voicemails that sound like celebrities can be put together easily.
Some of these tools have been used in political influence campaigns. Last year, the RNC released an AI-generated video depicting a future dystopia if Joe Biden were to be re-elected. Elon Musk shared an AI-generated image of Kamala Harris in Soviet-style garb, writing on X that she wants to be a “communist dictator from day one.” A fake video of a Chicago mayoral candidate making inflammatory comments about police shootings was released on the eve of that election in February and watched thousands of times on X before it was taken down. And during the Indian election this year, deepfakes were deployed en masse to create misleading videos of Bollywood celebrities and ads with Hindu supremacist language.
Read More: As India Votes, Modi’s Party Misleads Online
Taylor Swift has been the frequent subject of many AI efforts, given her massive celebrity. Early this year, AI-generated pornographic and sometimes violent images of her were widely circulated on social media. The images helped spur legislation in the U.S. aimed at protecting deepfake victims, including the DEFIANCE Act, which allows deepfake victims to sue people who create, share or receive them, and passed the Senate in July. AI companies also scrambled to respond: Microsoft said that it was “continuing to investigate these images” and added that it had “strengthened our existing safety systems to further prevent our services from being misused to help generate images like them.”
And Swift’s involvement is part of a growing backlash against AI from some of the world’s most prominent cultural figures. Beyonce recently spoke out against AI misinformation in a GQ interview, saying: “We have access to so much information – some facts, and some complete bullshit disguised as truth…Just recently, I heard an AI song that sounded so much like me it scared me. It’s impossible to truly know what’s real and what’s not.” Meanwhile, earlier this year, Scarlett Johansson blasted OpenAI for releasing a chatbot voice seemingly modeled upon hers.
How Trump’s deepfake move ultimately backfired
Trump has had a long-standing fascination with Swift, including calling her “fantastic” in 2012 and “unusually beautiful” in 2023. In February, Trump took credit for some of Swift’s success, posting on Truth Social that if she were to endorse Joe Biden, it would be “disloyal to the man who made her so much money.”
But when Trump decided to post the deepfakes on Truth Social in August, his attempt at courting Swifties appeared to have backfired. The post allowed Swift to frame her endorsement of Harris as a moral obligation, as if she had no other choice but to respond to misinformation. It also sucked up all the oxygen that Trump hoped to gain on debate night: by Wednesday morning, “Taylor Swift endorsement” was the second trending topic on Google, trailing only “who won the debate.”
In her early years of fame, Swift refrained from speaking about politics, telling TIME in 2012 that she didn’t believe she knew “enough yet in life to be telling people who to vote for.” Over the last six years, she’s waded into politics sparingly, but with purpose, always giving strong justifications for her statements. In 2020, for example, she accused Trump of "stoking the fires of white supremacy and racism your entire presidency." This year, Swift remained silent on politics until last night’s endorsement, garnering criticism from many people who urged her to use her unrivaled platform to make a difference.
Read More: Watch Tim Walz React to Endorsement From 'Fellow Cat Owner' Taylor Swift
It’s unclear what impact these efforts have had on voters: many researchers argue that voters are more discerning than people fear, and that the potential influence of AI misinformation on elections is overblown.
However, Holman, at Public Citizen, says that those studies relied upon outdated AI tools. He points to a database created by researchers at Northwestern earlier this year, which has documented hundreds of political deepfakes; many of them, the researchers found, have resulted in real-world harms.
“We’re in a whole new era right now,” Holman says. “Technology has become so convincing, so persuasive, and so indistinguishable from reality, that I am quite convinced it’s going to have much more serious ramifications on future election cycles.”