The top U.S. artificial intelligence companies will participate in a government-led effort intended to craft federal standards on the technology to ensure that it’s deployed safely and responsibly, the Commerce Department said Thursday.
OpenAI, Anthropic, Microsoft Corp., Meta Platforms Inc. and Alphabet Inc.’s Google are among more than 200 members of a newly established AI Safety Institute Consortium under the department, Commerce Secretary Gina Raimondo said. Also on the list are Apple Inc., Amazon.com Inc., Hugging Face Inc. and IBM.
The top industry players will work with the National Institute of Standards and Technology, a body within Commerce, along with other technology companies, civil society groups, academics, and state and local government officials to establish safety standards regarding AI.
“President Biden directed us to pull every lever to accomplish two key goals: set safety standards and protect our innovation ecosystem,” Raimondo said in a statement.
Major tech companies have been engaging with the Biden administration and policymakers in Washington on regulating AI as the technology rapidly advances and is poised to disrupt industries. Federal officials are seeking to maintain U.S. leadership on AI development, intending to set rules that protect Americans from hazards, such as misinformation and privacy violations, but still promote the technology’s potential to spur progress in health care, education, and other industries.
“Progress and responsibility have to go hand in hand. Working together across industry, government and civil society is essential if we are to develop common standards around safe and trustworthy AI,” Nick Clegg, president of global affairs at Meta, said in a statement. “We’re enthusiastic about being part of this consortium and working closely with the AI Safety Institute.”
Thursday’s initiative comes as part of President Joe Biden’s sweeping executive order signed last fall that charged the Commerce Department with facilitating the development of safety, security, and testing standards for AI models as well as rules for watermarking AI-generated content.
Prominent industry startups, including Scale AI, which provides training data for generative AI models, and Altana AI, which maps global supply chains using AI, will also take part in establishing the safety standards.
“In doing so, we not only contribute to the responsible use of AI, but also reinforce the United States’ position as the global leader in the realm of artificial intelligence,” John Brennan, Scale AI’s public sector general manager, said in a statement.