AI Isn’t Our Election Safety Problem, Disinformation Is

Barrett is the deputy director of the NYU Stern Center for Business and Human Rights. Hendrix is the editor of Tech Policy Press and an adjunct professor at NYU’s Tandon School of Engineering.

This election cycle will be the first exposed to generative artificial intelligence—the technology behind popular apps like ChatGPT that enables even non-experts to create fake but realistic-looking text, video, and audio perfectly suited for political manipulation. At the same time, a number of the major social-media companies have retreated from some of their prior commitments to promote “election integrity.” The November election is also the first that will register the impact of the enormous popularity of TikTok, which uses a recommendation algorithm that some experts believe is particularly suited to spreading misinformation.

Let’s start with the rise of generative AI, which allows virtually anyone to produce persuasive text, imagery, or sound from relatively simple natural-language prompts. In January, a fake AI-generated image of Donald Trump sitting next to Jeffrey Epstein on the disgraced financier and sex offender’s private jet circulated on Facebook. In February, a Democratic consultant working for a long-shot rival admitted that he commissioned an AI-generated robocall impersonating President Joe Biden that sought to discourage thousands of voters from participating in New Hampshire’s primary. The state’s attorney general has opened a criminal investigation.

The U.S. is not alone on this score. Last September, an audio clip posted on Facebook just two days before Slovakia’s parliamentary election appeared to capture the candidate of the pro-NATO, pro-Ukraine Progressive Slovakia Party discussing how to rig the results. The AI-crafted fake audio post went viral during a pre-election media moratorium, limiting the degree to which it could be debunked, and the candidate lost to a pro-Russia rival.

Analysts are struggling to keep up with the possibilities. A study published in February in the academic journal PNAS Nexus finds that any political ad maker armed with generative AI tools has the capacity to build “a highly scalable ‘manipulation machine’ that targets individuals based on their unique vulnerabilities without requiring human input.” This raises the concern that the sort of Russian digital operatives who interfered in the 2016 election in support of Donald Trump could use AI as a force multiplier to once again seek to inflame the already polarized American electorate.

But AI is far from the whole story. The threat of artificially generated disinformation is made all the more daunting by a more familiar technology: social media. 

Despite the confusion and violence caused by Trump’s attempt to undermine the 2020 election result and the threat of similar volatility this year, major platforms like Facebook, YouTube, and, most dramatically, X have backed away from some of their past election-integrity policies, according to a new report we co-authored for the Center for Business and Human Rights at New York University’s Stern School of Business.

Any discussion of backsliding necessarily begins with X, which Elon Musk acquired in October 2022. Competition from Meta’s new Threads app, as well as Musk-related controversies, has cost X users. But an average of more than 220 million people still visit the site each month, and it retains the loyalty of many influential politicians, journalists, and entertainers. By spring 2023, Musk had fired 6,000 employees, or 80% of X’s workforce. In September 2023, he said of the platform’s election integrity unit, “Yeah, they’re gone.” The new owner’s scaling back of content moderation in the name of promoting free speech has contributed to a surge of racist and antisemitic expression that prompted advertisers to quit X in droves.

Following hiring binges during the COVID-19 pandemic, other social media companies have also executed mass layoffs, with a number of them reducing their “trust and safety” teams—the people who craft and enforce content policies.

For the 2020 election, Meta built a 300-person unit dedicated to election integrity. But despite the chaos that erupted when Trump supporters stormed the Capitol on January 6, 2021, the company subsequently reduced the size of that team to about 60 people and ended regular meetings between the group’s leaders and CEO Mark Zuckerberg. Meta officials told us that some former team members who have new assignments still contribute to election-integrity efforts, and that top management is kept apprised of this work. But the signals are decidedly mixed.

Meta, to its credit, still funds an industry-leading network of more than 90 outside fact-checking organizations that helps the platform de-emphasize and label false content. But the company continues to exempt politicians from having their statements fact-checked. YouTube, meanwhile, has rescinded a policy under which it had removed tens of thousands of videos falsely claiming that the 2020 election was illegitimate. Both companies argue that their new approaches encourage free speech.

Meta has also said that, with the same goal of promoting free-flowing debate, it will loosen its rules to allow political ads on Facebook and Instagram to question the legitimacy of the 2020 presidential election. Sure enough, in August 2023, Facebook allowed the Trump campaign to run a spot declaring: “We had a rigged election in 2020.”

TikTok, which now has upwards of 1 billion average monthly users worldwide and more than 170 million in the U.S. alone, presents new and entirely different challenges. Its connection to Chinese parent corporation ByteDance has led to allegations—so far, unproven—that the autocratic Beijing government exerts influence over the U.S. platform. The House of Representatives just approved legislation that would force ByteDance to sell TikTok or face a U.S. ban of the short-video app.

But there are also questions about TikTok’s recommendation algorithm, which selects the content presented to users. Other major platforms rely on a “social graph,” choosing content based on what is shared by the people a user follows and who follow them. TikTok, by contrast, selects short videos for its “For You” page based on algorithmic recommendations that can draw on content from well outside a user’s social network. This difference may help explain why TikTok is so successful at serving up videos that users find novel and compelling.

But the distinction could present a danger during election season, according to researchers with NYU’s Center for Social Media and Politics: “With generative AI making fabricated videos easier to produce,” the researchers wrote in January, “we could see political misinformation reaching users on TikTok that it wouldn’t reach on other social graph-based platforms.” What’s more, TikTok’s user base skews younger, and studies show that young people are more likely to believe misinformation. TikTok told us that this analysis is “entirely speculative” and reflects “a poor understanding of how our platform works.”

There is still time for platforms to take pre-election precautions. They can impose limits on the rampant re-sharing of content, which is one way misinformation spreads. They can institute “circuit breakers” on certain viral posts to give content moderators an opportunity to determine whether the material is malicious. They can replenish their depleted content moderation teams. And they can remove demonstrably false content from users’ feeds, while retaining a marked, archived copy that is not shareable but remains available as a reference for those who track misinformation.

The social media industry should view these and other protective steps as the cost of doing business responsibly during what is shaping up to be another volatile election season. Inaction could exacerbate a true crisis for U.S. democracy.
