Elizabeth Kelly

Director, United States Artificial Intelligence Safety Institute

Last November, at the world’s first AI Safety Summit, both the British and American governments announced the formation of AI safety institutes, intended to test cutting-edge AI systems for potential risks.

Appointed to lead America’s AI Safety Institute (AISI) was Elizabeth Kelly, a lawyer by training, who has been pivotal in shaping the Biden administration’s approach to technology policy. She was one of the lead drafters of Biden’s AI executive order.

As new and increasingly capable AI models continue to be released, Kelly and her team, which includes top computer scientists, ethicists, and anthropologists, are on the frontlines helping the government discern which systems are safe. The U.S. AISI’s vision is ambitious: to advance the science, practice, and adoption of AI safety, addressing a range of risks in the process. “Our view is that safety enables trust, which enables adoption, which enables innovation,” Kelly says.

TIME spoke with Kelly to understand how she’s approaching the task ahead. 

(This interview has been condensed and edited for clarity.)

TIME: Part of the institute's job is to address what exactly ‘AI safety’ means. What does the term mean to you?

Elizabeth Kelly: I'd say that we take a really broad view of AI safety. We are focused on the risks of AI, both in the here and now, in terms of potential harms to privacy, synthetic content harms, and what that means for millions of people; as well as the risks to national security and to public safety, as outlined in the executive order.

We're very much aware that the risks are evolving as AI evolves. We view our job as advancing the science of AI safety, and a large part of that is really understanding the capabilities that are emerging, the risks that are emerging, and keeping our eye on the horizon.

Your strategy document states that “safety is key to unlocking innovation.” Do you think that safety and innovation are in tension?

I really don't. Our view is that safety enables trust, which enables adoption, which enables innovation. And we all have a shared goal in making sure that we're able to fully enjoy the transformative benefits that AI offers, like individualized education, new drug discovery and development, early detection of illnesses—I could go on. But to make sure that that happens, we really need to ensure that we are seeing safe AI development. Enabling safe innovation is our North Star here at the Safety Institute.

There’s been prior reporting about NIST’s offices being in pretty rough shape—things like mold, leaks, and pests. Do you have what you need to get your work done at AISI?

We absolutely have the resources that we need to fulfill our mandate under the executive order and to really start our work advancing the science of AI safety. That includes doing fundamental research, rigorous testing, and issuing reliable guidance on topics ranging from synthetic content to safety and security. Obviously, there's more that Congress can do, but we are clear-eyed about the size of the task and confident that we're able to meet the moment. We're really building an enduring capacity here.

I think we are able to do a lot with our budget because of the talent that we're attracting, alongside the resources that we have in companies, in civil society, in academia. The fact that the U.S. is a global leader here is a tremendous force multiplier for the Safety Institute, and we are supported in our work by a 280-member consortium drawing from civil society, academia, and the frontier model labs.

Could you share more about your collaboration with the U.K. institute — how does that work in practice?

We're working with allies and partners across the globe, as you reference. We have signed a memorandum of understanding with the U.K. which enables us to work with them on different joint testing exercises and other ways to really pool our resources and learn from each other's work.

We think this is vitally important. This is part of why we announced at the Seoul Safety Summit back in May that we're going to be launching this network of international safety institutes. We want to pull together like-minded allies and partners from across the globe, so that we can learn from one another. We don't want to duplicate each other's work, but we want to be able to stand on each other's shoulders. We hope to move towards more aligned and interoperable standards and similar testing regimes, so that we're not hampering innovation but instead enabling it.

How do you think a Trump presidency would affect AISI’s work?

I really view the science of AI safety as a fundamentally bipartisan issue. I think we're all united as Americans in wanting to prevent the malicious use of AI that could harm national security and public safety, and in ensuring that America continues to lead the world in AI innovation, and that's the work that we're doing at the Safety Institute. It's why there is bipartisan legislation pending that would formally authorize the Safety Institute. It's why I've heard from senators on both sides of the aisle that “we gave you $10 million, but it should have been $100 million,” and I think we're really well positioned at NIST to continue this work.

NIST has a long and storied history, with multiple Nobel laureates and an unbelievable track record of setting standards for technology. Our AI risk management framework, which guides organizations on the responsible use of AI, is used by countries, companies, and institutions across the globe.

We're really building an enduring capacity here at the AI Safety Institute to make sure the science of AI safety keeps up with the pace of AI development.
