In early 2023, following an international conference that included dialogue with China, the United States released a “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy,” urging states to adopt sensible policies that include ensuring ultimate human control over nuclear weapons. Yet the notion of “human control” itself is hazier than it might seem. If humans authorized a future AI system to “stop an incoming nuclear attack,” how much discretion should it have over how to do so? The challenge is that an AI general enough to successfully thwart such an attack could also be used for offensive purposes.
We need to recognize that AI technologies are inherently dual-use. This is true even of systems already deployed. For instance, the very same drone that delivers medication to a hospital inaccessible by road during the rainy season could later carry an explosive to that same hospital. Keep in mind that for more than a decade, military operations have used drones precise enough to send a missile through a particular window on the other side of the earth from their operators.
We also have to think through whether we would really want our side to observe a ban on lethal autonomous weapons (LAWs) if hostile military forces are not doing so. What if an enemy nation sent an AI-controlled contingent of advanced war machines to threaten your security? Wouldn’t you want your side to have an even more intelligent capability to defeat them and keep you safe? This is the primary reason that the “Campaign to Stop Killer Robots” has failed to gain major traction. As of 2024, all major military powers have declined to endorse the campaign, with the notable exception of China, which did so in 2018 but later clarified that it supported a ban only on use, not on development. Even this stance is likely driven more by strategic and political considerations than moral ones, as autonomous weapons in the hands of the United States and its allies could disadvantage Beijing militarily.
Further, what will “human” even mean in the context of control when, starting in the 2030s, we introduce a nonbiological addition to our own decision-making through brain–computer interfaces? That nonbiological component will grow exponentially, while our biological intelligence stays the same. By the late 2030s, our thinking itself will be largely nonbiological. Where will the human decision-making be when our own thoughts largely run on nonbiological systems?
Instead of pinning our hopes on the unstable distinction between humans and AI, we should focus on making AI systems safe and aligned with humanity’s wellbeing. In 2017, I attended the Asilomar Conference on Beneficial AI, a gathering inspired by the successful biotechnology safety guidelines established at the 1975 Asilomar Conference on Recombinant DNA, to discuss how the world could safely use artificial intelligence. The talks produced the Asilomar AI Principles, some of which have already been very influential with AI labs and governments. For example, principle 7 (Failure Transparency: “If an AI system causes harm, it should be possible to ascertain why”) and principle 8 (Judicial Transparency: “Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority”) are closely reflected both in the voluntary commitments that leading tech giants made in July 2023 and in President Biden’s executive order several months later.
Efforts to render AI decisions more comprehensible are important, but the basic problem is that, regardless of any explanation they provide, we simply won’t have the capacity to fully understand most of the decisions made by future superintelligent AI. If, for instance, a Go-playing program far beyond the best human were able to explain its strategic decisions, not even the world’s best player (without the assistance of cybernetic enhancement) would entirely grasp them. One promising line of research aimed at reducing risks from opaque AI systems is “eliciting latent knowledge.” This research aims to develop techniques that ensure that when we ask an AI a question, it gives us all the relevant information it knows rather than just telling us what it thinks we want to hear, a risk that will grow as machine-learning systems become more powerful.
The Asilomar principles also laudably promote noncompetitive dynamics around AI development, notably principle 18 (AI Arms Race: “An arms race in lethal autonomous weapons should be avoided”) and principle 23 (Common Good: “Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.”). Yet because superintelligent AI could confer a decisive advantage in warfare and bring tremendous economic benefits, military powers will have strong incentives to race for it. Not only does such a race worsen the risks of misuse, it also increases the chances that safety precautions around AI alignment will be neglected.
It is very difficult to usefully restrict the development of any fundamental AI capability, especially since the basic idea behind general intelligence is so broad. Yet there are encouraging signs that major governments are now taking the challenge seriously. Following the 2023 international AI Safety Summit in the UK, 28 countries signed the Bletchley Declaration, pledging to prioritize safe AI development. In 2024, the European Union passed the landmark AI Act, which regulates high-risk systems, and the United Nations adopted a historic resolution “to promote safe, secure and trustworthy artificial intelligence.” Much will depend on how such initiatives are actually implemented. Any early regulation will inevitably make mistakes. The key question is how quickly policymakers can learn and adapt.
One hopeful argument, grounded in free-market principles, is that each step toward superintelligence is subject to market acceptance. In other words, artificial general intelligence will be created by humans to solve real human problems, and there are strong incentives to optimize it for beneficial purposes. Since AI is emerging from a deeply integrated economic infrastructure, it will reflect our values, because in an important sense it will be us. We are already a human-machine civilization. Ultimately, the most important thing we can do to keep AI safe is to protect and improve our human governance and social institutions. And the best way to avoid destructive conflict in the future is to continue the advance of our ethical ideals, which has already profoundly reduced violence over recent centuries and decades.
AI is the pivotal technology that will allow us to meet the pressing challenges that confront us, including overcoming disease, poverty, environmental degradation, and all of our human frailties. We have a moral imperative to realize the promise of these new technologies while mitigating the peril. But it won’t be the first time we’ve succeeded in doing so.
When I was growing up, most people around me assumed that nuclear war was almost inevitable. The fact that our species found the wisdom to refrain from using these terrible weapons shines as an example of how we have it in our power to likewise use emerging biotechnology, nanotechnology, and superintelligent AI responsibly. We are not doomed to failure in controlling these perils.
Overall, we should be cautiously optimistic. While AI is creating new technical threats, it will also radically enhance our ability to deal with them. And because these methods amplify our intelligence regardless of our values, they can serve both promise and peril. We should thus work toward a world where the powers of AI are broadly distributed, so that its effects reflect the values of humanity as a whole.
Adapted from The Singularity is Nearer: When We Merge With AI by Ray Kurzweil, published by Viking. Copyright © 2024 by Ray Kurzweil. Reprinted courtesy of Penguin Random House.