Daniel Kokotajlo

Former Governance Researcher, OpenAI 


In 2022, Daniel Kokotajlo joined OpenAI as a governance researcher. Two years later, he made headlines by resigning and refusing to sign a non-disparagement clause, a choice that would have cost him approximately $2 million in equity. His decision to prioritize his “freedom to speak up” over financial gain has fueled a wider conversation about AI safety and corporate responsibility.

Along with other former and current employees of OpenAI and Google DeepMind, Kokotajlo signed a letter arguing that all AI labs have strong financial incentives to avoid oversight. It called for whistleblower-style protections that would give industry researchers the “right to warn” about the dangers of AI without fear of reprisal while protecting their anonymity. It also advocated for a “culture of open criticism” within companies as they race to develop artificial general intelligence (AGI), hypothetical AI systems that Kokotajlo describes as “generally smarter than humans.” 

Though bound by confidentiality agreements, Kokotajlo tells TIME that he believes current AI systems are rapidly approaching AGI-level capabilities. He is deeply worried both about the potentially catastrophic risks such systems would pose and about the concentration of power they would place in the hands of whoever controls them.

“A sane civilization would not be proceeding with the creation of this incredibly powerful technology until we had some better idea of what we were doing and how we were going to keep it safe,” he says.

*Disclosure: OpenAI and TIME have a licensing and technology agreement that allows OpenAI to access TIME's archives.
