On an ordinary weekday in November, Bletchley Park plays host to a mixture of elderly pensioners and bands of unruly schoolchildren, visiting to learn about the codebreakers—including computing pioneer Alan Turing—who were based here during World War II, and helped the Allied Forces defeat the Nazis.
But this is no ordinary week, and these are no ordinary visitors.
On Wednesday and Thursday, delegates from 27 governments around the world, as well as the heads of top artificial intelligence companies, gathered for the world’s first AI Safety Summit at this former stately home near London, now a museum. Among the attendees: representatives of the U.S. and Chinese governments, Elon Musk, and OpenAI CEO Sam Altman.
The high-profile event, hosted by the Rishi Sunak-led U.K. government, caps a year of intense escalation in global discussions about AI safety, following the launch of ChatGPT nearly a year ago. The chatbot displayed for the first time—to many users at least—the powerful general capabilities of the latest generation of AI systems. Its viral appeal breathed life into a formerly niche school of thought that AI could, sooner or later, pose an existential risk to humanity, and prompted policymakers around the world to weigh whether, and how, to regulate the technology. Those discussions have been taking place amid warnings not only that today’s AI tools already present manifold dangers—especially to marginalized communities—but also that the next generation of systems could be 10 or 100 times more powerful, not to mention more dangerous.
Bletchley Park was chosen by the U.K. government as the location for the summit due to its symbolism: it was one of the birthplaces of modern computing; the machines devised here to break the Nazis’ Enigma code formed the blueprints for early programmable computers. Michelle Donelan, the U.K.’s science and technology minister, opened the Summit on Wednesday by speaking of her hope that the delegates gathered there would contribute to an achievement of similar magnitude, “pushing the boundaries of what is actually possible.”
Officials from around the world did not attempt to come to an agreement here on a shared set of enforceable guardrails for the technology. But Sunak announced on Thursday that AI companies had agreed at the Summit to give governments early access to their models to perform safety evaluations. He also announced that Yoshua Bengio, a Turing Award-winning computer scientist, had agreed to chair a body that would seek to establish, in a report, the scientific consensus on risks and capabilities of frontier AI systems.
“I am pleased to support the much-needed international coordination of managing AI safety, by working with colleagues from around the world to present the very latest evidence on this vitally important issue,” Bengio said in a statement.
Sunak’s headline win from the Summit—that AI labs would give governments early access to their systems—was light on details. Sunak had already announced, in June, that the three leading frontier AI companies (OpenAI, Google DeepMind, and Anthropic) had agreed to give the U.K. government “early or priority access” to their systems for safety purposes. Sunak did not elaborate on how the agreement reached on Thursday differed from this existing commitment, if at all. He also did not comment on whether governments would receive the underlying parameters of frontier models, known as “weights”—or simply the ability to interact with the models using an interface known as an API. (Sunak’s office did not immediately reply to a request for comment.)
Sunak also announced that the U.K.’s Frontier AI Taskforce would evolve into a permanent body tasked with carrying out safety evaluations: the U.K. AI Safety Institute.
Despite the limited progress, delegates at the event welcomed the high-level discussions as a crucial first step toward international collaboration on regulating the technology—acknowledging that while there were many areas of consensus, some key differences remain.
Kicking off the event on Wednesday, the U.K. government announced the “Bletchley Declaration” on AI, signed by 28 countries, including the U.S., U.K., China, and India, as well as by the European Union. The declaration said AI poses both short-term and longer-term risks, affirmed the responsibility of the creators of powerful AI systems to ensure they are safe, and committed its signatories to international collaboration on identifying and mitigating the risks.
The U.K. government, as organizer of the Summit, has walked a fine line between communicating that it is serious about AI risks on the one hand and telegraphing to tech companies that it is open for business on the other. “For me, the biggest risk actually that we face, is the risk of missing out on all these incredible opportunities that AI can truly present,” Donelan told tech industry luminaries at a reception at Google DeepMind’s headquarters on the eve of the Summit. “If we actually terrify people too much, or if we shy away because we don’t grip these risks, then we won’t see the adoption in our NHS [National Health Service], we won’t see the adoption in our transport network, we won’t be able to utilize AI to tackle climate change or to support developing nations to tackle issues like food inequality. And that would be the biggest tragedy that we could imagine.”
The U.S., on the other hand, made several announcements this week that threatened to overshadow the U.K.’s claim to global leadership on AI safety. At a speech in London on Wednesday, Vice President Kamala Harris announced a sweeping set of U.S. actions, including the establishment of an American AI Safety Institute. Harris said the body would create guidelines for risk evaluations of AI systems, and develop guidance for regulators on issues like watermarking AI-generated material and combating algorithmic discrimination. Harris’s announcement followed an executive order signed by President Joe Biden on Monday, requiring AI companies to notify the federal government when training potentially dangerous models, and to share the results of safety tests before making those models public.
The British foreign secretary, James Cleverly, played down suggestions on Thursday that the U.S. had overshadowed the U.K. with its announcements. “This isn’t about hoarding, this is about sharing,” he told TIME. “This is something we want everyone involved in. It’s not exclusive, it’s inclusive.”
At Bletchley Park, police were out in force patrolling a secure temporary perimeter erected around the wartime code-breaking complex. Outside the front gate, a small coterie of protesters from the pressure group Pause AI were present on Wednesday, brandishing banners and placards. The group advocates for a temporary pause on the training of AI systems more powerful than OpenAI’s GPT-4, the model underpinning the most advanced version of ChatGPT and the world’s most powerful publicly available large language model. “Polling is increasingly showing that slowing AI development is popular with the public, as did the many messages of support we received from members of the public, young and old, today whilst we were protesting,” Gideon Futerman, one of the protesters, told TIME. “A pause is not just necessary, but it is also achievable.”
Inside, the main discussions were held behind closed doors, but some industry figures emerged to address the media. Mustafa Suleyman, the cofounder of Google DeepMind and now the CEO of startup Inflection AI, said at a press conference that he disagreed with those calling for a pause on the training of the next generation of AI systems, but suggested the industry may need to consider that option in the near future. “I don’t rule it out, and I think that at some point over the next five years or so, we’re going to have to consider that question very seriously,” he said. “I don’t think there is any evidence today that frontier models of the size of GPT-4 … present any significant catastrophic harms, let alone any existential harms. It’s objectively clear that there is incredible value to people in the world. But it is a very sensible question to ask, as we create models which are 10 times larger, 100 times larger, 1,000 times larger, which is going to happen over the next three or four years.”
An open letter in March calling for a six-month pause on training large AI systems was dismissed by many in industry as unenforceable—but some delegates suggested the Summit was taking steps toward building an international framework where such a decision could be enforced. Connor Leahy, CEO of the AI safety company Conjecture, who has been particularly vocal about what he says are serious existential threats posed by AI, told TIME on Wednesday he had been impressed by the caliber of discussions and the near-uniform agreement that collaboration to address risks was necessary. “Overall, I think the U.K. has done something really phenomenal here,” he said, praising the number of high-level attendees from both government and industry. “This is not the place where policy gets made in practice, this is the kind of place where the groundwork gets laid.”
Select members of civil society were invited to attend closed-door sessions with policymakers and technologists, although some of them chafed at what they said was insufficient representation. “If this is truly a global conversation, why is it mostly U.S. and U.K. civil society [in attendance]?” said Vidushi Marda, a delegate at the event from the non-profit REAL ML, who is based in Bangalore, India. “Most of the consequential decisions are pretty opaque to us, even though we are in the room.”
Others accused the Summit of reflecting industry talking points. “There has been a complete industry capture of this conversation, and in many ways this summit reflects that,” said Amba Kak, the executive director of the AI Now Institute, a research group. “The context to all of this is that we’re seeing a further concentration of power in the tech industry and, within that, a handful of actors. And if we let industry set the tone on AI policy, it’s not enough to say we want regulation—because we’re going to see regulation that further entrenches industry interests.”
Kak said that the summit’s scope appeared to have shifted late in the process to include discussion of nearer-term risks, which she welcomed. “Early signals all suggested that the Summit was almost entirely focused on looking-out-at-the-horizon types of harms,” she said. “If there’s one take-away from the [Bletchley Declaration], it’s that there is a need to deal with the near-term and that is not disconnected from the long-term.”
While the Summit may have succeeded to some extent at bridging the divide between researchers warning of near- and long-term risks, a separate difference of opinion—between open-source and closed-source approaches to AI research—was evident among many of the industry attendees. Advocates of more restricted AI research say that the dangers of advanced AI are too significant for the source code of powerful models to be freely distributed. The open-source community counters that profit-driven companies monopolizing AI research is likely to lead to bad outcomes, and argues that open-sourcing models can accelerate safety research.
“The open source debate still feels like a challenging one, and I don’t think the Summit, given it was addressing a range of topics, was really able to provide enough space to get into the details sufficiently to find greater consensus,” Jess Whittlestone, a delegate and head of AI policy at the Centre for Long-Term Resilience, a think tank, told TIME. “In the discussions I was in, there was quite a bit of agreement that we needed to go beyond a binary open/closed-source debate, and look at more nuanced solutions, but still differing perspectives on what that needed to look like and what should be prioritized.”
The symbolism of the gathering at Bletchley Park, home of a wartime effort where great minds came together to safeguard life and liberty in the face of an existential threat, was not lost on many attendees. But if the Summit fails to deliver the desired results, an alternative historical comparison might prove a better metaphor. After the war, the celebrated code-breaking agency based in Bletchley evolved into GCHQ—the U.K.’s intelligence agency that, in partnership with the U.S. National Security Agency, conducted indiscriminate global mass surveillance programs—using technology not to safeguard citizens, but to systematically violate their rights.
“The mythology of Bletchley has been instrumented by successive governments to justify surveillance and increasing technological control, implying that these efforts spring from the same source as the U.K.’s anti-fascist technological endeavors during the second world war,” Meredith Whittaker, president of Signal, told TIME. “So it’s not surprising that the current government’s attempt to get close to the powerful U.S.-based AI industry would leverage and stretch this same mythology, hoping that the glow of the past can obscure the reality of the present.”
Write to Billy Perrigo / Bletchley, U.K. at billy.perrigo@time.com