Artificial intelligence has been a tricky subject in Washington.
Most lawmakers agree that it poses significant dangers if left unregulated, yet there is still no consensus on how to address those concerns. But speaking at a TIME100 Talks conversation on Friday ahead of the White House Correspondents' Dinner, a panel of experts with backgrounds in government, national security, and social justice expressed optimism that the U.S. government will finally “get it right” so that society can reap the benefits of AI while safeguarding against its potential dangers.
“We can't afford to get this wrong—again,” Shalanda Young, the director of the Office of Management and Budget in the Biden Administration, told TIME Senior White House Correspondent Brian Bennett. “The government was already behind the tech boom. Can you imagine if the government is a user of AI and we get that wrong?”
The panelists agreed that government action is needed to ensure the U.S. remains at the forefront of safe AI innovation. But the rapidly evolving field has raised a number of concerns that can’t be ignored, they noted, ranging from civil rights to national security. “The code is starting to write the code and that’s going to make people very uncomfortable, especially for vulnerable communities,” says Van Jones, a CNN host and social entrepreneur who founded the Dream Machine, a non-profit that fights overcrowded prisons and poverty. “If you have biased data going in, you're going to have biased decision-making by algorithms coming out. That's the big fear.”
The U.S. government might not have the best track record of keeping up with emerging technologies, but as AI becomes increasingly ubiquitous, Young says there’s a growing recognition among lawmakers of the need to prioritize understanding, regulation, and ethical governance of AI.
Michael Allen, managing director of Beacon Global Strategies and former National Security Council director for President George W. Bush, suggested that to address a lack of confidence in artificial intelligence, the government needs to ensure that humans remain at the center of every decision-making process involving the technology, especially when it comes to national security. “Having a human in the loop is ultimately going to make the most sense,” he says.
Asked how Republicans and Democrats in Washington can talk to each other about tackling the problems and opportunities that AI presents, Young says there’s already been a bipartisan shift around science and technology policy in recent years—from President Biden’s signature CHIPS and Science Act to funding for the National Science Foundation. The common theme behind the resurgence in this bipartisan support, she says, is a strong anti-China movement in Congress.
“There's a big China focus in the United States Congress,” says Young. “But you can't have a China focus and just talk about the military. You've got to talk about our economic and science competition aspects of that. Those things have created an environment that has given us a chance for bipartisanship.”
Allen noted that in this age of geopolitical competition with China, the U.S. government needs to be at the forefront of artificial intelligence. He likened the current moment to the Nuclear Age, when the U.S. government funded atomic research. “Here in this new atmosphere, it is the private sector that is the primary engine of all of the innovative technologies,” Allen says. “The conventional wisdom is that the U.S. is in the lead, we’re still ahead of China. But I think that's something as you begin to contemplate regulation, how can we make sure that the United States stays at the forefront of artificial intelligence because our adversaries are going to move way down the field on this.”
Congress has yet to pass any major AI legislation, but that hasn’t stopped the White House from taking action. President Joe Biden signed an executive order setting guidelines for tech companies that train and test AI models, and has also directed government agencies to vet future AI products for potential national security risks. Asked how quickly Americans can expect more guardrails on AI, Young noted that some in Congress are pushing to establish a new, independent federal agency that could help inform lawmakers about AI without a political lens and offer guidance on legislative solutions.
“If we don’t get this right,” Young says, “how can we keep trust in the government?”
TIME100 Talks: Responsible A.I.: Shaping and Safeguarding the Future of Innovation was presented by Booking.com.
Write to Nik Popli at nik.popli@time.com