Meta announced Wednesday (Nov. 8) that it would require labels for political ads that have been digitally altered, using AI or other technology, in ways that could be misleading.
The announcement comes a day after Microsoft said it was also taking a number of steps to protect elections, including offering tools to watermark AI-generated content and deploying a “Campaign Success Team” to advise political campaigns on AI, cybersecurity, and related issues.
Next year will be the most significant year for elections so far this century, with the U.S., India, the U.K., Mexico, Indonesia, and Taiwan all headed to the polls. Although many are concerned about the effect deepfakes and misinformation could have on these contests, experts stress that the evidence of such effects on past elections is limited at best. Experts welcome the measures tech companies are taking to defend election integrity, but say more fundamental changes to political systems will be required to tackle misinformation.
How AI could impact elections
Tech companies have come under scrutiny over the role they played in previous elections. A report published in 2021 by online advocacy group Avaaz found that by waiting until the month before the U.S. presidential election in 2020 to change its algorithms, Meta allowed misinformation to spread on its platform. Meta has also been criticized for promoting content that questioned the validity of the 2022 Brazilian election and for amplifying content that may have contributed to human rights violations perpetrated against Myanmar’s Rohingya ethnic group.
In recent years, AI development has progressed rapidly. Generative AI—which allows users to generate text, audio, and video—emerged from obscurity with the release of OpenAI’s ChatGPT in November 2022.
Generative AI has already been used multiple times in U.S. political advertisements, although in each case it was understood that AI had been used. In April, the Republican Party released an AI-generated ad meant to show what might happen if President Joe Biden were reelected. In June, an X campaign account for Ron DeSantis, the Republican governor of Florida and presidential candidate, posted a video that included AI-generated images of former President Donald Trump embracing Dr. Anthony Fauci.
A poll published in November found that 58% of U.S. adults are concerned AI could increase the spread of false information during the 2024 presidential election.
However, studies consistently find that misinformation has not measurably changed the outcomes of previous U.S. elections, says Andreas Jungherr, a professor of political science at the University of Bamberg in Germany, who notes that academics from other disciplines, such as media studies and psychology, tend to give more credence to the impacts of misinformation on elections.
For example, a paper published in Nature in 2023 found “no evidence of a meaningful relationship between exposure to the Russian foreign influence campaign [in 2016] and changes in attitudes, polarization, or voting behavior.”
People often overestimate the effects of misinformation, Jungherr says, because they overestimate both how easy it is to change people’s views on charged issues such as voting behavior and how capable misinformation-enabling technologies such as AI actually are.
While the chances that a single piece of AI-generated misinformation could shift public opinion enough to sway an election are very low, AI could still have pernicious effects on elections and politics more broadly, says Elizabeth Seger, a researcher at the Centre for the Governance of AI, a research organization based in the U.K. In the future, highly personalized AI-enabled targeting coupled with extremely persuasive AI agents could be used to carry out mass persuasion campaigns. And in 2024, the mere existence of deepfakes could erode people’s trust in key information sources, says Seger.
This has already happened. In 2019, an allegation that a video of Ali Bongo, then the president of Gabon, was a fake was used to justify an attempted coup. Later that year, the impact of a video purporting to show a Malaysian cabinet minister engaged in a sexual act with a political aide was blunted by claims that the video had been altered by AI.
“A risk that is often overlooked, that is much more likely to take place this election cycle, isn't that generative AI will be used to produce deepfakes that trick people into thinking candidate so and so did some terrible thing,” says Seger. “But that the very existence of these technologies are used to undermine the value of evidence and undermine trust in key information streams.”
Election protection
So far, governments have taken limited measures to mitigate the impact of AI on elections. In the U.S., a bill was introduced in Congress in May that would require political ads that use AI-generated images or video to carry a disclaimer. In August, the Federal Election Commission requested public input on whether it should amend the regulation that prohibits political ads from misrepresenting other candidates or political parties so that it explicitly covers deliberately deceptive AI-generated content, but it has yet to take action.
Tech companies, wary of the reputational damage Meta sustained over its role in the 2020 election, are starting to take action. In July, the White House secured voluntary commitments from seven leading AI companies, including Meta and Microsoft, aimed at ensuring AI systems are safe and trustworthy. One of the commitments was to develop and deploy provenance or watermarking techniques for AI-generated visual and audio content. Eight more companies signed on to the commitments in September.
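To make the idea of provenance labeling concrete, the sketch below shows roughly what its simplest form looks like: writing a machine-readable “AI-generated” tag into an image’s metadata. This is an illustration only, not how Meta, Microsoft, or any other signatory actually implements the commitment; production systems such as C2PA content credentials rely on cryptographic signatures and invisible watermarks, and the field names and file paths here are assumptions made for the example.

```python
# Illustrative sketch only: attach a plain-text provenance tag to a PNG.
# Real provenance systems (e.g. C2PA content credentials) use cryptographic
# signatures and robust invisible watermarks, not editable metadata like this.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def tag_provenance(src_path: str, dst_path: str, generator: str) -> None:
    """Copy an image, embedding a 'generated by AI' note in its PNG metadata."""
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai-generated", "true")   # hypothetical field names
    metadata.add_text("generator", generator)
    image.save(dst_path, pnginfo=metadata)

def read_provenance(path: str) -> dict:
    """Return any text metadata found in the PNG (empty if it was stripped)."""
    return dict(Image.open(path).text)

# Example usage (file names are placeholders):
# tag_provenance("ad_frame.png", "ad_frame_tagged.png", generator="example-model")
# print(read_provenance("ad_frame_tagged.png"))
```

Even this toy version hints at the limitation researchers raise below: a tag stored in ordinary metadata disappears as soon as someone screenshots or re-saves the file, so provenance labels on their own cannot stop a determined bad actor.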
Meta and Microsoft’s moves this week follow an announcement from Alphabet, the parent company of Google and YouTube, that it would require visible disclosures on political ads that contain synthetically generated content that inauthentically depicts real people or events.
Arvind Narayanan and Sayash Kapoor, a professor and a researcher, respectively, at Princeton University, have argued that more capable AI won't worsen the misinformation problem, because misleading content is already easy to produce and anyone who wants to produce it can already do so. Misinformation, they argue, is better tackled through content moderation by social media sites and other digital platforms.
Watermarking and provenance measures by AI developers are likely to be ineffective because malicious actors can easily access AI models that have been openly published, such as Meta’s Llama 2, says Jungherr. “I would argue that this is an attempt by these companies to avoid negative coverage,” he says. “I'm not necessarily sure that they expect that these tools will shift an election.”
Disclosures on political ads may face similar issues, says Sacha Altay, a researcher in the Digital Democracy Lab at the University of Zurich. “I think it's good that political ads have to disclose when they use generative AI,” says Altay. “The only thing is that maybe the bad actors who want to abuse generative AI will probably not disclose it, so I'm not exactly sure how they will enforce it.”
“In the end, I think it almost comes down to how politicians use the information ecosystem to gain power or to gain followers to gain votes, even if they lie and spread misinformation,” says Altay. “I don't think there's any quick fix.”
Write to Will Henshall at will.henshall@time.com