When the Biden White House was tasked with responding to the rapid changes in generative AI last year, Alondra Nelson led the charge. As the director of the White House Office of Science and Technology Policy (OSTP), Nelson oversaw the release of the Blueprint for an AI Bill of Rights last October. The document is neither legally binding nor enforceable, but it lays out a framework that she hopes both AI builders and policymakers will abide by in order to ensure that AI is a force for public good. “Fueled by the power of American innovation, these tools hold the potential to redefine every part of our society and make life better for everyone,” the document reads.

Nelson also hopes the 73-page document will spur Congress to draft and pass AI legislation as soon as possible. “It’s incredibly urgent,” she says. “We have a cautionary tale of not long ago with social media regulation, where we did not move quickly enough.”

Nelson came to the White House with a knockout résumé: professor at Columbia and Yale, president and CEO of the nonprofit Social Science Research Council, and author of several acclaimed books on genetics, race, and medical discrimination. She brought the same rigor and attention to detail to the AI blueprint, which she formulated over the course of a year, she and her team talking extensively with industry players, academics, high school students, and teachers. From those conversations, she identified a collection of best practices for industry players, including red teaming—stress-testing AI systems before they are publicly deployed—and continual audits.

These best practices, Nelson says, are all oriented toward making sure that AI actually serves the public as opposed to just being a boon for an eager tech industry. “The bottom line for the President is, how does this help families? How does this heal people?” she says. “Is this providing economic security and other forms of security for the American public? Is it helping people keep or gain jobs that are good and meaningful?”

Strikes are being waged in Hollywood in part over the future role of AI in filmmaking and its potential to take the jobs of both writers and actors. Similar battles are likely to unfold across many industries. But Nelson cautions that the worst fears about machines taking our jobs are often overblown. She points to radiology, a field that AI pioneer Geoffrey Hinton predicted in 2016 would be overtaken by AI within five years. Instead, there is a radiology labor shortage across the world.

“These sorts of proclamations are moments to think about the society we want,” Nelson says. “I often will push back against a kind of fait accompli approach to thinking about the relationship between jobs and AI. There are things that can be done to make it less disruptive.”

Nelson departed the White House in February, but still holds several influential posts. As a fellow at the Center for American Progress, she advises state lawmakers and members of Congress on AI policy. As a member of the Institute for Advanced Study’s AI working group, she liaises between the industry, policymakers, and civil society.

Nelson says that the upcoming 2024 elections in the U.S., E.U., U.K., South Africa, and beyond make AI regulation all the more urgent. AI is already being used for deepfake videos that target politicians, and could be weaponized by lobbying groups to disseminate falsehoods on a much larger scale. “Disinformation and misinformation are going to get a lot worse,” she warns. “We’ve got to strap in and do what we can to mitigate some of these things.”

But Nelson is confident that U.S. regulation will be passed in time. “We’ve had a moment in which Congress has really snapped to attention,” she says. “I’m optimistic that people are responding to this moment with the kind of gravity that it requires.”
