Rishi Sunak Wants the U.K. to Be a Key Player in Global AI Regulation

During Prime Minister Rishi Sunak’s recent visit to Washington, D.C., where he announced that the U.K. would host the first global summit on AI regulation later this year, he bristled at a reporter’s question about whether the “midsize country” could naturally lead the debate, given that the E.U. is close to passing a landmark AI bill.

“That midsize country happens to be a global leader in AI,” he said. “You would be hard-pressed to find many other countries other than the U.S. in the Western world with more expertise and talent in AI.”

The Prime Minister’s response revealed the dilemma he now faces in positioning the U.K. as a key player in reining in AI’s potential negative consequences without stifling innovation, amid growing fears around generative artificial intelligence.

Following the U.K.’s departure from the European Union, experts say Sunak is attempting to carve out a pivotal role to help keep the country globally relevant by playing the role of an “honest broker” between the different regulatory approaches of the E.U. and the U.S. when it comes to AI. Last week, Joe Biden confirmed the U.S. would take part in the global summit focusing on AI safety that the U.K. will host later this year, which Sunak hopes will help mitigate risks through “internationally coordinated action.”

“Fundamentally, the government is trying to articulate a middle way forward between the very robust regulatory approach the E.U. is converging upon and the more light-touch approach in Washington,” says Mujtaba Rahman, the managing director of the Eurasia Group.

During a speech at London Tech Week on Monday, Sunak told industry leaders that he wants to “make the U.K. not just the intellectual home, but the geographical home of global AI safety regulation.” The U.K. government has invested over £100 million in a Foundation Model Taskforce created to lead on AI safety domestically, but Sunak now hopes the country can also develop a “global architecture” for regulation.

Until recently, the U.K. government’s approach to the regulation of AI was more laissez-faire. In a March white paper on AI, the government’s science, innovation and technology secretary, Michelle Donelan, wrote that AI had delivered “fantastic social and economic benefits for real people.” Yet a recent survey suggests that almost 60% of the country would like to see the U.K. regulating the use of AI to help safeguard jobs.

The Conservative government’s pro-innovation approach has seen the U.K. government investing over £2.5 billion in AI since 2014, and in turn, working with leading AI firms based in London, according to the white paper. “We’re working with the frontier labs—Google DeepMind, OpenAI, and Anthropic,” Sunak told reporters.

Notably, the white paper suggested the government favored the creation of light-handed sector-specific regulators over an independent regulatory body: “New rigid and onerous legislative requirements on businesses could hold back AI innovation and reduce our ability to respond quickly and in a proportionate way to future technological advances,” it said.

Until he stepped down as Prime Minister last year, Boris Johnson had touted a post-Brexit Britain as being more business-friendly by cutting “E.U. red tape” and reducing onerous regulation, one of the core arguments in the campaign to leave the bloc.

Unlike some other businesses, AI companies have been seeking more input from governments. After Sunak met with senior industry executives, his comments struck a different tone from the white paper, and he began to acknowledge AI’s “existential risks.” “In recent weeks, [Sunak] has been somewhat convinced that AI governance needs to be taken seriously and that there are risks here,” says Jess Whittlestone, the head of AI Policy at the Centre for Long-Term Resilience. “There’s some genuine concern there.”

In March, a group of AI experts raised the alarm about OpenAI’s ChatGPT and the potential risks it posed to society, calling for a six-month pause on new developments. The following month, E.U. lawmakers warned that AI technologies were developing more rapidly than expected, urging world leaders to find ways to control AI. And last month, an open letter that was published by the nonprofit Center for AI Safety and signed by hundreds of AI executives, researchers, and other tech figures—including the CEOs of big AI firms OpenAI and DeepMind—stated that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

The letter’s authors suggested an “international” approach similar to the U.N.-brokered treaty on the non-proliferation of nuclear weapons, which they say would seek to contain what’s already out there through dedicated regulatory authorities, public funding for safety research, and oversight and tracking of powerful systems.

Sunak’s sudden pivot to a more hands-on approach to regulating AI comes as the E.U. develops its AI Act, the world’s first sweeping law on the technology, and after a G7 meeting last month at which member states agreed to create the ‘Hiroshima AI process,’ a ministerial forum designed to discuss and adopt risk-based AI rules. The E.U.’s Act would regulate the technology’s use in facial recognition, biometric surveillance, and other AI applications. However, it has also prompted concern about how AI companies will comply. OpenAI’s CEO Sam Altman told reporters in London last month that his company could “cease operating” in the E.U. if it was unable to comply with the provisions of the new law.

During an event hosted by former U.K. Prime Minister Tony Blair’s Institute for Global Change (TBI) on Tuesday, Nathan Benaich, the co-author of the annual State of AI report, which analyzes the most significant developments in the industry, acknowledged that the U.K. had lost the oversight of decisions around scaling up AI that it previously exercised through its presence in the European Parliament. That also puts pressure on U.K. resources.

As a result, given that the U.K. is home to prominent AI labs, some of the responsibility for oversight has “come back to us in Westminster, but our capacity hasn’t necessarily increased as a consequence of that,” he said, adding that the U.K. Parliament needs to be comfortable building its “capacity for proportionate checks and balances in the system.”

While the E.U. appears to be setting a precedent for regulation, Whittlestone tells TIME there’s an opportunity for the U.K. to distinguish itself. For example, unlike the E.U.—which is more focused on how AI might be used in specific products and on product regulation—the government could think “more afresh about preserving and supporting innovation” while being “careful and stringent” about where AI poses the highest risk.

However, the Prime Minister’s attempt to lay the groundwork for becoming a regulatory home for AI has so far mostly been influenced by industry executives, Whittlestone says. “Having independent expertise around these concerns is so important to ensure democratically legitimate regulation, and so far, that has felt like it was really missing.”

Benedict Macon-Cooney, TBI’s chief policy strategist, echoed these sentiments on Tuesday: “Whitehall needs to be much more comfortable about bringing in those kinds of experts that have been at the forefront of building [AI] as well as people from civil society and academia to make sure you get that right kind of balance of expertise always feeding into regulation,” he said.

“Otherwise, you’re just going to get bad advice,” he warned.

—With additional reporting by Billy Perrigo.

Write to Astha Rajvanshi at astha.rajvanshi@time.com