Just imagine: It’s the morning after Election Day 2024, and America wakes up to viral videos of ballots being tossed into dumpsters and security-camera footage of poll workers stuffing boxes in battleground states. Within hours, politicians, pundits, and social media influencers are crying foul. But here’s the twist—none of these events actually happened. They’re digital deepfakes, designed to look so real that even seasoned analysts can’t tell the difference.
The scenario isn’t far-fetched, but we are far from prepared. I’ve spent the past year leading war games on AI-powered threats to elections alongside technologists, current and former government officials, and civil society groups. Regardless of who wins the presidential race, one of our gravest concerns is the very real possibility of a “November surprise”—not just a late-breaking controversy before Election Day, but a post-election blitz of AI-generated forgeries meant to convince voters that the election was stolen.
The groundwork for a crisis has already been laid.
First, the field of bad actors has grown. After the 2016 election, I went to the Department of Homeland Security, where I helped overhaul U.S. election security in the aftermath of Russia’s interference. You’d think that the public exposure of Moscow’s meddling would deter future attackers. But election interference has since become a professional sport, with new players like China and Iran entering the game.
Second, the post-election period is now in the crosshairs. The 2020 “Stop the Steal” movement showcased how susceptible Americans—especially those on the losing side—are to claims that the system is rigged. Foreign adversaries and domestic extremists learned that they don’t need to alter the vote count itself; they just need to sow doubt after the fact, when emotions are highest and faith in the outcome is weakest.
Third, election officials admit they are unprepared. In my travels across the country, state and local leaders repeatedly told me that one of their biggest fears is the emergence of authentic-looking “evidence” of a stolen election that they can’t readily disprove. Unlike in 2020, when most false claims were quickly thrown out, AI forgeries today are easy to make and could take weeks—or even months—to debunk. By then, the damage will be done.
Worse still, the relationship between government officials and social media companies is more fractured than ever. Court cases and controversies about online censorship have created a chilling effect, making both sides reluctant to cooperate. The result is that localities have fewer technical resources to rely on in a crisis.
To be clear, AI is not necessarily the enemy here. Machine-learning technologies will eventually help better protect our elections. Startups are already emerging to combat these risks, from deepfake-detection tools to firms hardening the cybersecurity of election networks. But today we are in a dangerous “arbitrage window” during which bad actors can go on offense before the good guys have the tools to mount a solid defense.
So what’s the remedy between now and Election Day?
The best answer is public awareness. Voters must develop pattern recognition for these spoofs the same way we did when spam started hitting our email inboxes in the 1990s. Deepfakes will become the new “Nigerian princes,” telling us to wire them $10,000. Only now they’ll be more personalized, more persuasive, and more pervasive, including when it comes to deceiving us about our own democracy.
In the long run, we need to overhaul our election security architecture. Yes, we made enormous strides after Russia’s 2016 attacks, but America is vulnerable again. Congress should require agencies to invest in technology that better safeguards voting systems, empower officials to identify synthetic content that is being used for fraud or voter suppression (without becoming arbiters of truth), and create the right guardrails for engagement between election workers and social media companies.
Meanwhile, tech industry players should accelerate efforts to help users distinguish real from manipulated media by default, including through cryptographic signatures and watermarks.
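For the technically curious, here is a minimal sketch of the core idea behind cryptographic provenance: a publisher signs a file’s bytes with a private key, and any viewer can check them against the matching public key. The names and flow below are purely illustrative, not the API of any particular standard such as C2PA, which embeds far richer metadata in the media itself.

```python
# Illustrative sketch of cryptographic media provenance (not a real standard).
# Assumes the "cryptography" library: pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# A hypothetical publisher (say, a county election office) generates a
# key pair and signs the raw bytes of a video before releasing it.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

video_bytes = b"raw bytes of the original footage"
signature = private_key.sign(video_bytes)

# Anyone holding the public key can verify the footage is untampered:
# changing even one byte of the video invalidates the signature.
try:
    public_key.verify(signature, video_bytes)
    print("Signature valid: footage matches what the publisher released.")
except InvalidSignature:
    print("Signature invalid: footage was altered or is not authentic.")
```

Watermarking takes the complementary approach of embedding an imperceptible marker in the pixels or audio themselves, which is designed to survive re-encoding and screenshots in ways a detached signature cannot.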
In a recent simulation, I saw just how easily a post-election deepfake could tip a bad situation into total chaos. The “red team”—playing the role of a foreign adversary—was able to craft a realistic video purporting to show poll workers destroying ballots in a swing state. As the clip spread, the “blue team”—playing the role of election officials—scrambled to verify the video’s authenticity while commentators fanned the flames. By the time we proved it was a fake, the damage was done. Voters had taken to the streets.
A corrupt candidate or campaign could wreak havoc by doing the same. Just last month, it was reported that the chair of the Senate Foreign Relations Committee was duped into a Zoom call with a top Ukrainian official who turned out to be a deepfake. Little imagination is needed to see how political operatives could use these tactics to pose as credible figures in order to share false information about election results with their opponents or supporters, to spread fabricated opposition research, or to distract other candidates in a crisis. Think of it as dirty tricks on steroids.
This is the future we’re hurtling toward, and it’s not enough to react quickly when it arrives. We need to talk about it now—before votes are cast and the battle for legitimacy begins. Otherwise, America could wake up on November 6th to a terrifying surprise: a digital war for democracy where truth is the first casualty.