
In March, the Future of Life Institute published an open letter warning of the “profound risks to society and humanity” posed by recent advances in AI, and urged AI labs to pause the training of systems more powerful than GPT-4. That letter, signed by more than 30,000 people, including prominent figures like institute adviser Elon Musk, Apple co-founder Steve Wozniak, and writer Yuval Noah Harari, helped to bring the issue of AI safety into the public consciousness.

Max Tegmark, a Swedish American physicist and co-founder of the Cambridge, Mass.–based institute, spends a lot of time warning about what the organization calls “extreme large-scale risks,” such as the impact of nuclear war and of ignoring the threat of artificial general intelligence or superintelligent AI. The institute, which in January had a brush with controversy, recently made a short fictional film, Artificial Escalation, exploring the disastrous implications of the U.S. and China using AI in their nuclear command systems.


Tegmark, who focuses on the intersection of AI, physics, and neuroscience, is a physics professor at the Massachusetts Institute of Technology, but also studied economics in his native Sweden. Answering questions by email, he elaborated on Artificial Escalation and shared his thoughts on the need for AI regulation and the ethics of the current AI business model.

TIME: The Future of Life Institute recently released a short fictional film exploring a scenario where the U.S. and China incorporate AI into their respective nuclear command, control, and communications systems—with catastrophic outcomes. Can you tell us more about why you made this film and why now?

Max Tegmark: We made the film to illustrate the potential for AI to fuel escalation faster than humans can independently assess, signal intentions, and de-escalate—greatly increasing the risk of nuclear war. Global leaders are currently considering the different ways in which AI could present a threat to international peace and security, and this is a major one.

What do you make of the argument that we could put enough guardrails in place in the military so that AI doesn’t make decisions about who lives and dies?

Firstly, the U.S. government currently opposes an international treaty that would force our adversaries to adopt such guardrails. Secondly, said guardrails do nothing to address cyber vulnerabilities, epistemic uncertainty, or other critical risk factors. Finally, as we saw in the film, nominal human involvement in decision making doesn’t mean that we keep meaningful control.

What can be done by governments to regulate AI?

Our best course of action is to follow biotech’s example and require that potentially dangerous products be approved by AI experts at an AI [version of the] FDA before they can be launched. More than 60% of Americans support such an approach.

What’s your biggest concern about the adoption of AI going forward?

That humanity gets driven extinct in the coming decades. The CEOs leading the race toward superhuman AI recently signed a statement warning that the very products they’re trying to build could drive humanity extinct. They don’t understand how these systems work, and they have no satisfactory solution for making them safe.

What is the one thing you wish more people understood about AI?

The pace of change. AI development is progressing so fast that many experts expect it to outsmart humans at most tasks within a few years. People need to understand that if we don’t act now, it will be too late.

What’s the key ethical or philosophical question that we still need to grapple with?

How is it ethical for a small number of unelected tech leaders to take such great risks on behalf of the rest of humanity?

How is the current moment in AI different from what you would have predicted five years ago?

Power has grown even faster than I expected, and regulation has progressed even more slowly than I expected.

What’s the science-fiction book or film that you think best represents the most likely AI future?

Adam McKay’s Don’t Look Up is an excellent representation of where we are right now and where we’re headed, if the asteroid represents superintelligence.

Correction, Sept. 8

The original version of this story misstated the number of signatories of an open letter. The letter was signed by more than 30,000 people, not 3,000. It also mischaracterized the letter’s demands. The letter urged AI labs to pause the training of systems more powerful than GPT-4, not to pause all development.
