Greg Brockman, co-founder and president of OpenAI, works 60 to 100 hours per week and spends around 80% of that time coding. Former colleagues have described him as the hardest-working person at OpenAI.
Brockman fits the profile of the “10x engineer,” Silicon Valley–speak for someone who does the work of 10 normal coders. He was a science prodigy who went to Harvard and transferred to MIT before dropping out to join Stripe, a financial-technology startup. He was chief technology officer there for five years before leaving to co-found OpenAI in 2015.
Brockman, now 34, has grander ambitions. If you’re part of a 10-person team, “even if you are actually the mythical 10x, you’re still only going to double the output of that team,” he explains. “What you want to do is 10x the output of the company.” And so Brockman spends much of his time “sniffing around” for things he could do—blockages to address, projects to launch—that would dramatically accelerate OpenAI’s performance.
With the other 20% of his time, Brockman thinks about the big questions facing OpenAI. One such question is the company’s approach to AI safety; disagreements over that approach reportedly led to the 2021 split in which a number of senior employees left to found Anthropic, now one of OpenAI’s main competitors. Asked about Anthropic, Brockman holds firm. “I’ll observe that Anthropic and us are pursuing very similar strategies,” Brockman says. “So I guess it tells you something.”
Part of OpenAI’s safety strategy is deciding whether and how it should make the AI models it develops accessible to customers. OpenAI has previously been criticized for deploying its AI models despite potential harms. But Brockman, a startup engineer to his core, argues the only way to ensure safety is to keep deploying more powerful models as they are developed, learning from each deployment and addressing issues as they arise.
“I think the most important decision we’ve made in OpenAI’s history was the decision to do iterative deployment,” he says. “Imagine you actually had a very powerful AI, you actually built an AGI [artificial general intelligence, a system that can match human performance on all cognitive tasks], and it’s your first time ever deploying. Are you gonna get that right?”