What an American Approach to AI Regulation Should Look Like

Scharre is vice president and director of studies at the Center for a New American Security (CNAS) and the author of Four Battlegrounds: Power in the Age of Artificial Intelligence.
Chilukuri is the senior fellow and program director for the Technology and National Security Program at CNAS.

As the world grapples with how to regulate artificial intelligence, Washington faces a unique dilemma: how to secure America’s position as the global AI leader while guarding against AI’s possible risks? Although any country seeking to regulate AI must balance regulation and innovation, the task is especially hard for the United States because we have more to lose. The United Kingdom, European Union, and China all have formidable AI companies, but U.S. firms dominate the field, propelled by our uniquely open innovation ecosystem. That dominance was on display recently when OpenAI released Sora, a powerful new text-to-video platform, and Google introduced Gemini 1.5, a next-generation AI model that can absorb requests more than 30 times the size of what its predecessor could handle.

If these trends continue, and AI proves the game-changer that many expect, surrendering U.S. leadership is not an option. But as the recent Senate hearing with social media executives reminds us, neither is leaving another powerful technology completely unregulated.

So far, the EU and China have raced ahead on AI regulation, but they have different objectives in mind. The EU’s recent AI Act prioritizes minimizing social harms—like AI-powered discrimination in hiring—through a comprehensive, “risk-based” approach. China’s AI regulations, unsurprisingly, focus on reasserting state control over information. Neither approach will favor AI innovation (as some EU member states have already groused). Washington’s challenge is to develop a uniquely American approach to AI regulation that secures our leadership and protects our people—and the world—from the technology’s potential dangers.

Although the Biden Administration’s AI executive order was a valuable first step, there are limits to what the executive branch can do on its own. Only Congress can provide America with an enduring legal framework to govern this transformative technology. As lawmakers weigh their options, they must balance an array of competing priorities: the need to ensure an open and competitive AI ecosystem, manage safety risks, control the proliferation of potentially harmful AI systems, and stay ahead of China. To accomplish these goals, the United States will need a flexible and adaptive regulatory framework to keep pace with a rapidly evolving technology.

Members of Congress have shown a bipartisan interest in acting, from Senator Chuck Schumer’s AI Insight Forums to Speaker Mike Johnson’s new Task Force on AI. The enthusiasm is welcome and warranted. AI may pose the most complex and urgent regulatory challenge Washington has ever faced. As lawmakers grapple with AI regulation, here are four lessons to bear in mind:

First, AI will always move faster than Congress. After the advent of automobiles, it took decades for cars to become widespread in American households. For smartphones and social media, it took years. ChatGPT amassed over 100 million users in two months. In just a few years, generative AI has gone from creating human-like text responses to producing realistic images and videos on demand and credibly mimicking a human voice from just three seconds of original audio. The unforgiving pace of AI development will always run circles around legislative processes. Should Congress pass AI regulations, we should not expect it to revisit those rules soon. The last time Congress passed major technology legislation was in 1996, when most Americans still had dial-up internet. Although the technology has transformed many times over in the years since, the law has not. This is not to suggest Congress should abandon regulation. Rather, lawmakers should recognize that whatever laws they pass must be written with the foresight and flexibility to endure as AI evolves. This could argue for a principles-based approach to regulation, instead of fixed technical standards that could be outdated before the ink dries. It could also raise the appeal of an independent agency empowered to better target and adapt regulations over time, much like the expert agencies overseeing the pharmaceutical, aviation, automotive, food, agricultural, telecommunications, financial, and other sectors.

Second, safety sustains innovation. There will always be a tension between promoting innovation and ensuring safety, but the two complement each other more than the current debate suggests. Cryptocurrency offers a cautionary tale. The virtually unregulated sector resulted, predictably, in the spectacular implosion of FTX. The fiasco left the public and policymakers alike with a dismal impression of the sector, deservedly or not, likely stunting the technology’s adoption. It is not hard to imagine unregulated AI applications producing an equally high-profile failure that could hinder adoption or lead to a regulatory overcorrection from Washington. To run far, AI must run safely.

Third, AI regulation needs to incentivize broad and open competition. The growth of large, expensive foundation models has privileged big companies, which are training the most capable models at the frontier of AI development. In a surprising change from the anti-regulation stance most tech companies have taken over the last twenty years, some leading AI companies are openly calling for government regulation of the most advanced AI systems. These calls have been met with understandable skepticism, with some arguing that Big Tech companies want regulation not out of virtue but to erect regulatory barriers against competition. But leaving powerful AI models unregulated is no solution. And the experience with social media has shown that unregulated Big Tech is hardly a recipe for healthy competition and social good. We should reject a choice between allowing the most powerful AI companies to self-regulate and a regulatory burden that stifles innovation and competition. This will require rules that are clear and consistent and that avoid massive compliance costs—admittedly, no easy feat. Congress should also seize opportunities to level the playing field, such as funding the National AI Research Resource to provide data and computing resources to academics and start-ups.

Fourth, America’s AI policy needs a global vision. As China, the E.U., the U.K., and others develop their own competing frameworks, global AI governance is up for grabs, and America cannot afford to sit on the sidelines. Even as the United States competes with China, it must look for opportunities for cooperation. No single nation can address the potential risks of AI-enabled bio and cyber threats on its own, any more than it can fight climate change and pandemics alone. As the world’s two leading AI powers, the United States and China must work together to enhance safety, limit proliferation, and draw red lines against dangerous AI uses. The agreement between President Joe Biden and General Secretary Xi Jinping to begin discussions on AI risk and safety is a promising sign. But a narrow fixation on China risks blowing America’s opportunity—and responsibility—to offer the world a compelling AI model that harnesses its benefits without trading away core democratic values like privacy and civil rights. China understands the appeal to the Global South of affirmative AI applications in areas such as energy and agriculture; it’s time for America to get in the game.

As Washington debates AI, its competitors have offered their own solutions for how to balance safety, innovation, and competition for this powerful technology. America needs its own answer, consistent with democratic values and interests—and needs it soon. At the Center for a New American Security, we recently launched an AI Governance Forum to convene experts from industry, academia, and civil society to tackle these challenges and propose actionable steps for policymakers. Our goal is to bring these communities together to develop solutions that balance competing interests and offer a framework for a uniquely American model for AI governance. U.S. leadership helped birth the age of AI; now it must help the world harness this technology safely while upholding its commitment to democracy, privacy, and human freedom.
