Former OpenAI Chief Scientist Announces New Safety-Focused Company

Ilya Sutskever, a co-founder and former chief scientist of OpenAI, announced on Wednesday that he’s launching a new venture dubbed Safe Superintelligence Inc. Sutskever said on X that the new lab will focus solely on building a safe “superintelligence”—an industry term for a hypothetical system that’s smarter than humans.

Sutskever is joined at Safe Superintelligence Inc. by co-founders Daniel Gross, an investor and engineer who worked on AI at Apple until 2017, and Daniel Levy, another former OpenAI employee. The new U.S.-based firm will have offices in Palo Alto, Calif., and Tel Aviv, according to a description Sutskever shared.

Sutskever was one of OpenAI’s founding members, and was chief scientist during the company’s meteoric rise following the release of ChatGPT. In November, Sutskever took part in the infamous attempt to oust OpenAI CEO Sam Altman, only to later change his mind and support Altman’s return. When Sutskever announced his resignation in May, he said he was “confident that OpenAI will build AGI that is both safe and beneficial” under Altman’s leadership. 

Safe Superintelligence Inc. says it will aim to release only one product: the system in its name. This approach will insulate the company from commercial pressures, its founders wrote. However, it’s currently unclear who will fund the new venture’s development or what exactly its business model will eventually be.

“Our singular focus means no distraction by management overhead or product cycles,” the announcement reads, in what may be a subtle jab at OpenAI. In May, another senior OpenAI member, Jan Leike, who co-led a safety team with Sutskever, accused the company of prioritizing “shiny products” over safety. Leike’s accusations came around the time that six other safety-conscious employees left the company. Altman and OpenAI’s president, Greg Brockman, responded to Leike’s accusations by acknowledging there was more work to be done, saying “we take our role here very seriously and carefully weigh feedback on our actions.”

In an interview with Bloomberg, Sutskever elaborated on Safe Superintelligence Inc.’s approach, saying, “By safe, we mean safe like nuclear safety as opposed to safe as in ‘trust and safety’”; one of OpenAI’s core safety principles is to “be a pioneer in trust and safety.”

While many details about the new company remain to be revealed, its founders have one message for those in the industry who are intrigued: They’re hiring.
