This post was updated at 1:16 p.m. on September 7, 2023.
As a former tech entrepreneur turned investor, Ian Hogarth made his fortune betting on machine-learning companies. In the process, he got a front-row seat to rapid advancements that weren’t always apparent to the public. “It made me feel quite uncomfortable,” says Hogarth, who has invested in more than 50 AI companies and for the past six years has co-authored an annual “state of AI” report. “When I would spend time with the people building these AI companies, I would come to understand things about where the world was headed that just weren’t really visible in any public setting.”
In April, Hogarth authored a viral piece for the Financial Times, in which he argued that the corporate race toward “God-like” AI posed a risk that could, in the worst-case scenario, “usher in the obsolescence or destruction of the human race.” Two months later, the U.K. government announced it had appointed the 41-year-old Brit as the head of the U.K.’s new AI safety initiative, now known as the Frontier AI Taskforce. The £100 million ($126 million) investment—the largest by any state into the field of AI safety, according to Hogarth—is part of a wider push by the U.K. to cast itself as a leader in the move toward global norms around the use and regulation of AI systems. In November, the country will host a first-of-its-kind AI summit convening international policymakers with top CEOs and safety researchers.
Under Hogarth’s guidance, the U.K.’s AI Taskforce aims to build capacity inside government to do the kinds of safety research currently possible only in industry. The Taskforce is “the most significant effort globally, in any nation-state, at tackling AI safety,” Hogarth tells TIME, in an interview at the Taskforce’s headquarters, located in the same Westminster building as the U.K. Treasury. “We are putting researchers in the Taskforce working on AI safety on a similar footing to those working on AI safety in tech companies.”
Although his FT essay warned of the longer-term risks of superintelligent AI, Hogarth says his Taskforce will prioritize safety research into near-term risks. “There is an enormous amount of capital and energy being poured into making more capable coding tools,” he says by way of example. “Those same tools will augment the potential for the development of various kinds of cyberattacks. That’s an issue that is growing in risk, and the kind of thing that we’re looking into.” Biosecurity—given the risks of AI making it easier to design and synthesize dangerous pathogens—is another area that will be a focus, he says.
While £100 million sounds like a lot of money, it pales in comparison to the budgets of the leading AI companies. OpenAI raised $300 million in its most recent fundraising round in April alone, putting its valuation at $28.7 billion. Google spent $39.5 billion on research and development in 2022. And salaries for world-leading machine-learning researchers can run to several million dollars per year. Hogarth acknowledges it’s impossible for the U.K. to compete with the tech giants when it comes to training “foundation” AI models at the frontier of capability—“people are spending $100 million just on a single training run”—but he believes it’s still possible on a government budget to meaningfully contribute to safety research, which is “much less capital-intensive.”
Another reason safety research is still mostly done inside AI companies is that AI labs tend to guard the confidential “weights” and training datasets of their most powerful models—partly because they are trade secrets that would be easy for competitors to replicate, but also because of real concerns about the dangers of their proliferation. To do meaningful safety work at the same standard as industry researchers, Hogarth’s Taskforce will need to secure access to those models. It’s unclear whether it has. In June, Prime Minister Rishi Sunak announced that three world-leading AI labs—OpenAI, Google DeepMind, and Anthropic—had committed to give the U.K. “early or priority access” to their models for safety-research purposes. But in his interview with TIME two months later, Hogarth declined to share details about what type of access, if any, the government has secured.
When it comes to attracting talent, though, Hogarth says neither budget nor Big Tech competition has been a barrier. “There’s a community of researchers who’ve been waiting for an opportunity to do this kind of public service,” he says. Notably, Hogarth himself is working without pay because he sees the role as a “critical public service.” (In his role as a partner at the investment fund he co-founded, Plural, he has also agreed to divest from his stakes in AI companies that are “building foundation models or foundation-model safety tools,” according to the U.K. government.) As for staffing the rest of the Taskforce, Hogarth says lower salaries haven’t deterred formidable machine-learning researchers from joining. “I’ve found it really quite inspiring to see the extent to which people are willing to take a real pay cut to come and work in the public sector and contribute to this mission.”