In February, Sam Altman, CEO of OpenAI, posted a poorly lit selfie with two people—the musician Grimes and the controversial AI theorist Eliezer Yudkowsky. Grimes has been in a relationship with Elon Musk, but it’s Altman and Yudkowsky who make the unlikely pair. Altman’s company has built one of the most powerful and generally capable AI systems in existence—GPT-4. Yudkowsky has spent more than two decades warning that powerful AI systems could, and likely will, kill all of humanity.
A decision theorist who did not attend high school or college, Yudkowsky is one of the founders of the field of AI alignment, which aims to prevent Terminator-like scenarios by making sure that AI systems do what their creators want them to do. He founded the Singularity Institute for Artificial Intelligence, later renamed the Machine Intelligence Research Institute (MIRI), in 2000, and has written hundreds of articles outlining the dangers of AI on LessWrong, the community blog he founded in 2009.
Last year, however, Yudkowsky admitted defeat. On April 1, 2022, he announced that MIRI was changing its mission to “Death With Dignity,” putting the odds of human survival at close to zero. This was no April Fools’ joke: MIRI doesn’t offer 401(k) matching because, a spokesperson explains, it believes AI will be “so disruptive to humanity’s future—for worse or for better—that the notion of traditional retirement planning is moot.”
Since he didn’t think we’d be able to build AI systems safely, Yudkowsky decided to spend his time urging people not to build them at all. He began by appearing in February 2023 on Bankless, a crypto podcast. Yudkowsky was surprised by how receptive the hosts were, telling TIME, “I expected to be a tiny voice shouting into the void, and people listened instead. So I doubled down on that.”
Since then, Yudkowsky has been on something of a media blitz, appearing on many more podcasts, including Hold These Truths, hosted by U.S. Representative Dan Crenshaw, a Texas Republican, and giving a TED talk. While he’s slightly more optimistic that concerns over AI are going mainstream, he still believes there’s a 99% chance that AI wipes out all of humanity. In a March op-ed for TIME—which was referenced at a White House press briefing—he proposed a variety of actions to limit powerful AI, including a total halt to development and multinational agreements that would, in the case of violations, allow governments to “destroy a rogue datacenter by airstrike.”