As delegates from around the world and leaders from the tech industry gathered in the U.K. for the first-ever AI Safety Summit, there appeared to be harmony between officials of historical rivals China and the U.S. On Wednesday, U.S. Secretary of Commerce Gina Raimondo and Chinese Vice Minister of Science and Technology Wu Zhaohui shared a stage at the Summit’s opening plenary.
Later that day, the U.S. and China were two of 29 countries to sign the Bletchley Declaration, which affirms the risks AI poses and commits its signatories to international cooperation to identify and mitigate those risks, a crucial first step toward establishing regulation in the future.
But simmering beneath these shows of cooperation is an undercurrent of increasing tension between the two AI superpowers. Tech supremacy has been a central front in U.S.-China tensions in recent years. In 2017, on the heels of an impressive breakthrough in artificial intelligence by Google DeepMind, China made AI progress a priority with its New Generation AI Development Plan. The plan set a timetable for the country to hit certain milestones, including “major breakthroughs” by 2025 and global leadership in AI by 2030.
The launch of AI chatbot ChatGPT nearly a year ago has only sharpened the global focus on the technology. And concern in Washington that China could jump ahead on AI has led to restrictions on Chinese access to U.S. technology that could help its progress. On Oct. 17, the U.S. Department of Commerce announced a new set of restrictions that prevent the sale and resale of advanced AI chips and chip manufacturing equipment to China, updating rules imposed a year earlier. In response, China’s foreign ministry accused the U.S. of violating the principles of competition and a market economy, according to Reuters.
While experts agree that the U.K. Summit was a small step toward joint action, some expressed concern that tensions between the two countries could still prevent important global regulations from being passed in the future.
“I think it was so important that China was invited to this summit because China, alongside the U.S., is one of the two major players in developing some of these frontier AI models,” says Jeffrey Ding, a professor at George Washington University whose research focuses on China’s technological capabilities.
China’s inclusion in the Summit was far from guaranteed. U.K. Prime Minister Rishi Sunak was criticized by his predecessor Liz Truss for inviting China, but he defended his decision in a speech delivered the week before the Summit on the grounds that “there can be no serious strategy for AI without at least trying to engage all of the world’s leading AI powers.” Speaking with Elon Musk after the Summit, Sunak said his view was that “if you’re going to try and have a serious conversation you need to [invite China].”
China’s equivalent of ChatGPT, Ernie Bot, produced by the company Baidu, still lags behind in capability. Ding says that even when using Chinese-language prompts, ChatGPT sometimes outperforms Ernie Bot. That could change soon, however, as Chinese tech companies are investing heavily to catch up, according to Ding.
China is also a leading player when it comes to AI surveillance technologies, says Sihao Huang, a researcher at Oxford University who specializes in AI governance.
“China is doing sort of a lot of cutting edge research on computer vision, which it has been using for conducting surveillance at home, perhaps somewhat counter to the values that they sign up to in a lot of international agreements,” says Huang. “It’s also doing a lot of the cutting edge research on audio processing and robotics.”
Like their Western counterparts, some Chinese academics and politicians are becoming increasingly concerned about the potential dangers of this powerful technology.
Multiple Chinese academics, including one of China’s most famous computer scientists, Andrew Yao, co-authored a paper on Oct. 26 calling for tighter regulation of advanced artificial intelligence.
Read more: AI Experts Call For Policy Action to Avoid Extreme Risks
The move surprised some observers because concerns about risks posed by advanced AI are less commonly expressed in China than they are in the West, says Bill Drexel, an associate fellow at military affairs think tank the Center for a New American Security, who notes that petitions and demands are not typically well-received by the Chinese authorities. “To see that, alongside a high profile diplomatic summit, is very interesting,” says Drexel. “I wouldn't be surprised if they had already cleared what they were going to do with some officials or something.”
Official sign-off on the paper could signal that Chinese officials are concerned about risks from advanced AI. It is also possible that participating in AI safety discussions benefits Beijing in other ways, at the very least by buying China time to work on its own AI development, Drexel says.
Vice Minister Wu’s remarks at the opening plenary in the U.K. hinted at emerging tensions between the U.S. and China, says Drexel. First, Wu defended the open release of AI models, an approach that has historically been the norm and from which China benefits, but one that some in the West are beginning to move away from amid concerns that open release could allow misuse of the most powerful AI models. Second, Wu stated that “all nations have the right to develop and use artificial intelligence technology,” alluding to the U.S. chip export restrictions.
Whether or not these tensions worsen, Drexel believes cooperation between the two countries is unlikely. “You really miss the forest for the trees if you think that the U.S. and China are coming together on AI from this summit,” he says. “The reality is we've declared something close to economic war on China, particularly on artificial intelligence, by not just restricting the export of these ultra-advanced semiconductors, but also then updating the order to make them more restrictive just a few weeks ago.”
Read more: What to Know About the U.S. Curbs on AI Chip Exports to China
Despite these tensions, Robert Trager, co-director of the Oxford Martin AI Governance Initiative, argues that the U.S. and China can cooperate on common interests without transforming their overall relations, much as the U.S. and the Soviet Union agreed to prevent the spread of nuclear weapons under the Nuclear Non-Proliferation Treaty of 1968. “The non-proliferation regime is a great example of that. No one would say that the U.S. and the Soviet Union had good relations,” says Trager, who is also international governance lead at the Centre for the Governance of AI.
Drexel is less optimistic about such cooperation with China, saying even narrow cooperation on shared issues may prove difficult, as has been the case with diplomacy on other global concerns. “You talk to American diplomats, and a very common concern with China is that we try to separate out issues that we think are common concerns, especially climate, but also other tech safety issues like space debris, and biological risks, and so on,” says Drexel. “The perception on the American side, at least, is that China chronically subordinates these kinds of common interest issues to their broader geopolitical maneuvering vis-à-vis America in such a way that's deeply frustrating.”
China might be more willing to cooperate on AI safety if its leaders believed that export restrictions made keeping pace with American AI development infeasible. That could incentivize them to push for stricter international safety measures that hinder U.S. AI development, says Drexel.
Washington has favored reserving access to advanced AI for its allies, using export restrictions against rivals while coordinating with a smaller, more tightly allied group through forums such as the G7, which recently announced its own AI code of conduct. Trager says this strategy may not succeed in the long term. “Right now, I think there is a lot of hope in the United States government that the export restriction model will be enough when it comes to governance,” he says. “And I think there are real reasons to question that.”
The AI Safety Summit will reconvene in South Korea in six months and then in France in a year. While this week’s gathering hasn’t transformed U.S.-China relations, future Summits might serve as a useful forum where international cooperation could develop, complementing the G7 and bilateral talks between the U.S. and China, says Trager. “I think it's a good thing to have multiple irons in the fire.”
Write to Will Henshall at will.henshall@time.com