
How We Can Overcome the Risks of AI

Andrew Lohn is an associate engineer, Andrew Parasiliti is director of the Center for Global Risk & Security, and William Welser IV is director of Engineering & Applied Sciences at the nonprofit, nonpartisan RAND Corporation.

Apple’s recent acquisition of Vocal IQ, an artificial intelligence company that specializes in voice programs, should not, on its face, lead to much fanfare: It appears to be a smart business move to enhance Siri’s capabilities. But it is also another sign of the increased role of AI in our daily lives. While the warnings and promises of AI aren’t new, advances in technology make them more pressing.

Forbes reported this month: “The vision of talking to your computer like in Star Trek and it fully understanding and executing those commands are about to become reality in the next 5 years.” Antoine Blondeau, CEO at Sentient Technologies Holdings, recently told Wired that in five years he expects “massive gains” for human efficiency as a result of artificial intelligence, especially in the fields of health care, finance, logistics and retail.

Blondeau further envisions the rise of “evolutionary intelligence agents,” that is, computers which “evolve by themselves – trained to survive and thrive by writing their own code – spawning trillions of computer programs to solve incredibly complex problems.”

While Silicon Valley enthusiasts hail the potential gains from artificial intelligence for human efficiency and the social good, Hollywood has hyped its threats. AI-based enemies have been box office draws at least since HAL cut Frank Poole’s oxygen hose in 2001: A Space Odyssey. And 2015 has truly been the year of fictional AI provocateurs and villains, with blockbuster movies including Terminator Genisys, Ex Machina, and Avengers: Age of Ultron.

But are the risks of AI the domain of libertarians and moviemakers, or are there red flags to be seen in the specter of “intelligence agents”? Silicon Valley cannot have “exponential” technological growth and expect only positive outcomes. Similarly, Luddites can’t wish away the age of AI, even if it might not be the version we see in the movies.

The pace of AI’s development requires an overdue conversation between technology and policy leaders about the ethics, legalities and real-life disruptions of handing over our most routine tasks to what we used to just call “machines.” But this conversation needs to focus increasingly on near-term risks, not just cinematic ones.

For example, even if a supercomputer’s code is flawless, and someday self-generated, and even if it is protected from infection by a warring nation-state, a hacktivist, or an angry teenager, AI can still produce wrong answers. A Wired article from January 2015 showed just how wrong. When presented with an image of alternating yellow and black horizontal lines, state-of-the-art AI saw a school bus and was 99% sure it was right.

How far can we trust AI with such control over the Internet of Things, including our health, financial, and national defense decisions? There is a service to be done in developing a deeper understanding of the reasonable precautions needed to mitigate coding flaws, attackers, infections and mistakes, while enumerating the risks and their likelihoods.

Applied to military systems, the risks are obvious, but commercial products designed by AI could also produce a wide range of unexpected negative outcomes. One example might be designing fertilizers that help reduce atmospheric carbon. The Environmental Protection Agency tests such products before they are approved, so dangerous ones can be discovered before they are released. But if AI only designs products that will pass the tests, is that AI designing inherently safe products or simply ones capable of bypassing the safeguards?

One way to start addressing this question is to build AI and observe its behavior in simplified settings where humans are still smarter. RAND, where we work, produced a simulation of the fertilizer scenario that projected global temperatures and populations 75 years into the future. When the AI was given only three chemicals to learn from, the EPA was able to partially limit the dangers. But once the AI was provided with delayed-release agents common in fertilizer manufacturing, it completely bypassed the protections and started reducing the number of carbon producers in the environment.

The same types of issues could exist for all manner of potentially dangerous products, like those regulated by the Food and Drug Administration, the National Highway Traffic Safety Administration, the Bureau of Alcohol, Tobacco, Firearms and Explosives and countless other regulatory agencies. And that doesn’t even consider the threats that could be posed by AI-designed products made abroad.

Can the risks posed by AI be completely eliminated? The short answer is no, but they are manageable, and need not be cause for alarm. The best shot at providing adequate safeguards would be regulating the AI itself: requiring the development of testing protocols for the design of AI algorithms, improved cybersecurity protections, and input validation standards—at the very least. Those protections would need to be specifically tailored to each industry or individual application, requiring countless AI experts who understand the technologies, the regulatory environment, and the specific industry or application. At the same time, regulatory proposals should be crafted to avoid stifling development and innovation.

AI needs to enter the public and political discourse with real-world discussion between tech gurus and policymakers about the applications, implications and ethics of artificial intelligence. Specialized AI for product design may be possible today, but answering broad questions such as, “Will this action be harmful?” is well outside the capabilities of AI systems, and probably their designers as well.

Answering such questions might seem like an impossible challenge, but there are signs of hope. First, the risks with AI, as with most technologies, can be managed. But the discussions have to start. And second, unlike in an AI-themed Hollywood thriller, these machines are built to work with humankind, not against it. It will take an army of human AI experts to keep it that way, but precautions can and should be sought now.

