At a Senate hearing on Tuesday, OpenAI CEO Sam Altman received a warm welcome from lawmakers, many of whom expressed surprise at his main argument: that AI should be regulated, and fast.
It was a far cry from the grueling ordeals that tech CEOs have previously faced on Capitol Hill. Mark Zuckerberg, Jack Dorsey and Shou Zi Chew have all endured antagonistic Senate hearings in recent years about the wide-ranging impacts of their platforms—Facebook, Twitter and TikTok, respectively—on American democracy and the lives of their users.
“I think what’s happening today in this hearing room is historic,” said Senator Dick Durbin (D., Ill.) during the Senate judiciary subcommittee hearing about oversight of AI. “I can’t recall when we’ve had people representing large corporations or private sector entities come before us and plead with us to regulate them.”
But in calling for legal guardrails to govern the tech his company is building, Altman is not unlike the other Silicon Valley leaders who have testified before Congress in the past. Tech CEOs like Zuckerberg have often used their appearances in Washington to plead with lawmakers for regulation. “We don’t think that tech companies should be making so many decisions about these important issues alone,” Zuckerberg testified in front of Congress in 2020. “I believe we need a more active role for governments and regulators,” he said, before outlining a list of policy suggestions.
Altman’s pitch to lawmakers on Tuesday was not so different. He suggested a suite of regulations that could include “licensing and testing requirements for the development and release of AI models above a threshold of capabilities,” and agreed with calls for both U.S. and international agencies to govern AI.
What was different this time was the receptiveness of the audience. “One of the things that struck me about the Senate is that they were all willing to admit that they didn’t really get social media [regulation] right, and were trying to figure out how to handle AI better,” Gary Marcus, a professor at New York University who testified alongside Altman on Tuesday, told TIME after the hearing concluded.
One senator appeared to be so taken by Altman’s suggestion that the U.S. government create a regulatory agency to govern AI that he suggested the OpenAI CEO could run it. “Would you be qualified to, if we promulgated those rules, to administer those rules?” said Senator John Kennedy (R., La.). After Altman said he loved his current job, Kennedy proceeded to ask Altman for suggestions about who else could run such an agency.
Altman did not suggest any names for possible regulators during the hearing. But Kennedy’s attitude indicated that senators, keen not to leave a transformational new technology almost entirely unregulated as they did during the era of social media, are perhaps over-correcting by being too credulous toward technologists’ own views of how their tools should be regulated. “We can’t really have the companies recommending the regulators,” Marcus, the AI professor, told TIME after the hearing. “What you don’t want is regulatory capture, where the government just plays into the hands of the companies.”
While senators did ask Altman some tough questions, including whether his company should be allowed to keep using copyrighted work to train its AIs, the hearing felt more like an introductory seminar on OpenAI’s policies and Altman’s views on how best to regulate AI.
The recent experience of European Union regulators should also provide a lesson for U.S. lawmakers about the risks of hewing too closely to what the tech companies describe as optimal AI regulation. In Brussels, where legislation governing AI is fast progressing toward becoming law, large AI companies including Google and Microsoft—OpenAI’s principal funder—have lobbied hard against the most powerful AI tools being subject to the draft law’s strictest provisions for “high risk” systems. (That’s even as, in public, Google and Microsoft profess to welcome AI regulation.) E.U. lawmakers appear to have ignored much of that lobbying, with the latest draft of the bill containing limits on powerful so-called “foundation” AI models.
Still, a cordial relationship between companies and lawmakers isn’t by itself a cause for concern. Past testimony from Zuckerberg, Dorsey and Chew on Capitol Hill often resembled a game of political point scoring, with lawmakers seemingly lining up to record sound bites taking potshots at CEOs, rather than an opportunity for policy discussion or genuine scrutiny. “I don’t think there’s any reason why governments and companies have to be adversarial,” Marcus says. “But it has to be at arm’s length.”
As AI creeps further into our lives, the tone of future hearings remains to be seen. Zuckerberg’s first appearance before Congress came in 2018, when Facebook was more than a decade old, and after it had been compromised by Russian intelligence agencies, after a series of high-profile data leaks, and after misinformation became an integral part of U.S. politics.
ChatGPT, by contrast, has been around for less than six months.