Illustration by TIME; reference image courtesy of Paul Scharre

In 2007, Paul Scharre was traveling with his U.S. Army Ranger unit in Iraq when they came across a roadside bomb. “I was expecting to see the bomb disposal [technician] come out in a big bomb suit, like in the movie The Hurt Locker. And instead, they rolled up a little robot to defuse the bomb. And the light bulb went on in my head.”

The light bulb was still on years later, when, after multiple tours in Iraq and Afghanistan, Scharre went to work at the Department of Defense (DoD). One of the first things he explored was using robotics to create more distance between military personnel and potential threats. Later, he led the Defense working group that drafted the directive establishing the department’s policies on autonomy in weapon systems.

Since leaving the DoD in 2013, Scharre has worked at the Center for a New American Security, a military-affairs think tank—first as a fellow, and more recently as vice president and director of studies. There, he has continued to focus on the intersection of AI and the military, often following the action in the field. His first book, Army of None: Autonomous Weapons and the Future of War, was released in 2018. It focused on weapons like drones, which were then the central issue at the intersection of AI and the military.

As the debate expanded and policymakers became concerned with AI competition across all domains, Scharre wrote a new book. Four Battlegrounds: Power in the Age of Artificial Intelligence explains how the U.S. can prevail over China in the battle for technological superiority, warning that “if the United States moves too slowly it could cede military dominance in a critical new technology to a rising and revisionist China.”

But Scharre isn’t focused solely on beating China. When asked how rapid AI progress makes him feel, Scharre says, “I don’t know why more people aren’t worried.”

Scharre is concerned about powerful AI systems falling into the hands of the U.S.’s adversaries or bad actors, but also about accidents involving powerful AI systems. “I’m worried about industrial accidents. I’m worried about accidents with weaponized forms of AI, which will happen—countries are already incorporating AI into their militaries.”

This, he admits, is a minority viewpoint in the national-security community, because “the national-security establishment tends to prioritize staying ahead of competitors.”

Scharre worries that a lack of coordination, either at a national level in the form of regulation or at an international level in the form of treaties, could lead to a dangerous arms race. “It’s possible that there could be down the road some AI systems that we simply shouldn’t build. We don’t yet know. But there might be systems that are simply too dangerous to build or deploy.”
