Matt Clifford

External Advisory Board Vice-Chair, U.K. AI Safety Institute

Historically, technologies with societal-level risks, like nuclear weapons, were developed by governments, giving officials an inside view of how the technology was progressing. But today’s cutting-edge AI is being built by private companies.

Governments have protocols for communicating with each other, but “until last year they didn’t have that for OpenAI and Anthropic,” says Matt Clifford, Vice-Chair of the U.K. AI Safety Institute’s advisory board. An entrepreneur on a first-name basis with the leaders of the top AI firms, and the holder of various government roles, including chair of the U.K.’s Advanced Research and Invention Agency (ARIA), Britain’s version of DARPA, Clifford believed he could bridge the gap between government and industry.

In the spring of 2023, he helped design the Frontier AI Task Force, a government body modeled on the U.K.’s Vaccine Task Force, to lead AI safety research. The new agency, which later became the AI Safety Institute, was the first of its kind and has since been emulated by the U.S., Singapore, and Japan. In June 2023, the task force struck deals with several developers, who agreed to give the government priority access to their models for safety testing.

Clifford also led negotiations at the AI Safety Summit, the first such gathering of world leaders and tech CEOs to discuss ways of mitigating AI’s risks. After two intense days of deliberations, 29 nations signed a declaration pledging continued collaboration, and nine AI companies agreed to give government safety institutes like the U.K.’s access to their models for pre-deployment testing. Clifford is now leading work on the new U.K. Labour government’s plan for leveraging AI to drive economic growth.
