There’s a good chance that whatever AI regulations emerge across the world in the next few years, Anna Makanju will have left her fingerprints on them. Makanju, 47, is the vice president of global affairs at OpenAI, which has positioned itself as one of the industry’s foremost drivers of good-faith regulation. The company is already a leader in the AI sector; its chatbot ChatGPT has become a household name since launching in November 2022. Over the past year, Makanju and OpenAI CEO Sam Altman have been traveling the world, meeting with leaders and advising them on how to grapple with this rapidly emerging technology. “Everyone is trying to strike this balance of making sure that innovation is still possible, and that it has the guardrails that are needed to make sure it goes well,” Makanju says.
Makanju, who joined OpenAI in September 2021, has a wealth of policy experience from which to draw. She served as a policy adviser to Vice President Joe Biden under the Obama Administration and has also worked at the U.N., NATO, and Facebook. But despite the company’s stated support for guardrails on the AI sector, questions still linger about how much regulation OpenAI is actually willing to adhere to. Makanju addressed those criticisms and talked about paths forward in an interview with TIME. (This interview has been condensed and edited for clarity.)
TIME: What have you learned from your tour meeting with world leaders about AI?
Anna Makanju: One of the most surprising things was how similarly people were really thinking about it. Basically, everyone was really excited about maximizing the benefits for their societies and ensuring that there are guardrails. People took different views of what that means or how that should be done. But overall, that was really what we heard everywhere in the world.
What did you learn from working on election integrity at Facebook that is applicable at OpenAI?
Even before Facebook, I was at the White House. We were there during the Russian election-interference activity in 2016. Even before then, we were thinking through the most robust approaches to this kind of misinformation. And a lot of the mitigations for that kind of interference are things that we’ve been thinking about from when I started.
Of course, there are a lot of mitigations built in, in terms of what people are permitted to use the tools for. We want to be super proactive about these kinds of problems from the very beginning.
Many tech and crypto leaders have tried to cozy up to governments so that they are essentially allowed to regulate themselves. Why should we believe that Sam Altman and OpenAI are sincere in their calls for regulation?
Regulation that ensures this technology actually benefits everyone is an essential piece of us fulfilling our mission. Our approach, and my approach, since taking this job almost two years ago, has been to be as collaborative as possible.
Sam’s [Senate testimony] may be the most visible example of this, but we’ve been doing this from the beginning. We’ve been coming to regulators and policymakers and saying, “This is where we think the technology is going. We really want to be a technical partner for you in understanding this technology as well as possible so that you can do the work that you are trying to do, which is to create the rules and infrastructure for this technology that will ensure that it actually goes well for all of us.”
Why did you decide to help form the Frontier Model Forum along with Google, Microsoft, and Anthropic?
Right now, we don’t even have a common language around a lot of the specific risks and common safety interventions across the industry. All of the labs are working on these things. Policymakers have asked us to have an industry-wide approach for safety best practices, where we really bring our best technical expertise to bear to identify the best approaches.
Which governments seem to be ahead of the curve in terms of drafting sensible, strong legislation?
The E.U.’s been working on this since 2018. They do have this really remarkable depth of expertise and thoughtfulness on the issues. Frankly, we are rooting for the E.U. AI Act to succeed.
On the U.S. side, [Senator] Chuck Schumer has been leading this effort and Senator [Richard] Blumenthal as well. They’ve really taken the approach of learning as much as possible before proposing something, which makes a ton of sense because this technology is so complex. They’ve included a super wide range of stakeholders: us, the industry, civil society, affected communities. So their approach seems really robust.
India has been thinking about it because technology has helped advance the country economically. They really want to invest in thinking about what guardrails will make sure they can leverage this as well to lift up more people.
So the approaches have been different. But even just since GPT came out, the level of sophistication among policymakers around the world has risen super quickly.