
The U.S. Isn’t Ready for the New Age of AI-Fueled Disinformation—But China Is

Beauchamp-Mustafaga is a policy researcher at the nonprofit, nonpartisan RAND Corporation, where he focuses on Chinese strategies for social media manipulation. Marcellino is a senior behavioral scientist at RAND who works on AI and disinformation issues.

In 2019, a Chinese researcher named Li Bicheng laid out his ideas about manipulating public opinion using AI. A network of “intelligent agents”—an army of fake online personae, controlled by AI—could act just realistically enough to shape consensus on issues of concern to the Chinese Communist Party, such as its handling of the COVID-19 pandemic. Just a few years earlier, Li had written in other articles that China should improve its ability to conduct “online information deception” and “online public opinion guidance.”

Li is no outlier. In fact, he is the ultimate insider, with a long research career at the People’s Liberation Army’s top information warfare research institute. His vision of using AI to manipulate social media was published in one of the Chinese military’s top academic journals. He is connected to the PLA’s only known information warfare unit, Base 311. His articles, therefore, should be viewed as a harbinger of a coming AI-assisted flood of Chinese influence operations across the web. 

As Meta recently disclosed in its quarterly adversarial threat report, Western internet platforms are already drowning in pro-Beijing content posted by groups linked to the Chinese government. According to the report, more than half a million Facebook users followed at least one of the fake accounts in the broader Chinese network, which relied on click farms based in Vietnam and Brazil to boost its reach. The network also bought about $3,000 worth of advertisements to further promote its posts. This effort, however, appears to still be run by humans, and it had only marginal real-world results. A recent State Department report on China’s influence operations reinforces this point.

But generative AI offers the potential to transform such efforts into something far more effective, and thus far more dangerous to the U.S. and other global democracies. And our research shows that China is primed to adopt this new technology. Last month, Microsoft reported that some China-affiliated actors it tracks began using AI-generated images in March. This validates our concerns, and we expect more to come.

Now, thanks to generative AI, China’s social media manipulation will be far greater in volume, much cheaper to produce, and likely more believable. The traditional approach requires hiring humans in content farms to create and post content, then spending money to boost and promote it across social media. With generative AI, the cost is relatively fixed and the scope highly scalable: build it once, and let it populate the web with content.

Building such a system is already incredibly cheap and—as with so much of technology—will only get cheaper. As Wired recently reported, a researcher going by the alias Nea Paw was able to create a fully autonomous account that posted across the internet, with links to articles and news outlets, even citing specific journalists—except that all of it was fake, created entirely using AI. Paw did this with publicly available, off-the-shelf AI tools. It cost him just $400.

This sort of generative AI acts much more like a person than a bot, and it offers the CCP—and plenty of other bad actors, such as Russia and Iran—the potential to fulfill a longstanding desire to shape the global conversation.

In May 2021, Chinese Communist Party General Secretary Xi Jinping reiterated his party’s focus on this lofty goal during remarks at the CCP Politburo’s monthly Collective Study Session. There, he said that China should “create a favorable external public opinion environment for China’s reform, development and stability,” in part by developing more-compelling propaganda narratives and better tailoring content to specific audiences. Xi also emphasized that since he came to power in 2012, Beijing has improved the “guiding power of our international public opinion efforts.” In other words, Xi is pleased with how much China is already influencing global public opinion, but he thinks the CCP has more work to do.

Xi has also spent years pointing to technology as a way to achieve these goals. In an earlier Politburo Collective Study Session, in 2019, Xi said it was necessary to study the application of AI in news collection, production, distribution, and feedback in order to improve the ability to guide public opinion. The broader Party-state apparatus has already moved to realize Xi’s vision, including by establishing “AI editorial departments.”

Chinese military researchers have been working to create what they sometimes call “synthetic information” since at least 2005. Such information can be used for many purposes, including generating “explosive political news” about adversaries. For example, China was accused in 2017 of a disinformation campaign that claimed Taiwan’s government was going to strictly regulate religious services, which created a political firestorm on the island. 

Chinese military researchers have routinely complained that the PLA lacks enough staff with adequate foreign-language skills and cross-cultural understanding. Now, however, generative AI offers the PLA tools to do something it never could before: manipulate social media at scale with at- or near-human-quality content.

There are steps both social media platforms and the U.S. government can and should be taking to begin to mitigate this threat. But all such strategies must start from the reality that generative AI is already ubiquitous and unlikely to ever be universally regulated.

Still, social media platforms should intensify their efforts to crack down on existing inauthentic accounts spreading disinformation and make it harder for malign actors—foreign or domestic—to open new ones. The U.S. government, meanwhile, should consider whether the recent export controls on advanced hardware aimed at China and Russia can be improved in forthcoming revisions to better capture the hardware required to train the large language models at the heart of generative AI. Once the models are developed, they become much harder to regulate.

It is vital that the U.S. government and social media platforms recognize this threat and work together to address it immediately, particularly before the 2024 elections.
