September 15, 2015 11:05 AM EDT

During the first golden age of Artificial Intelligence in the late ’80s, computers were playing decent chess. Silicon Valley figured that if computers could evaluate chess moves—a complicated mental task—they should be able to diagnose diseases, find oil, and underwrite loans. Startups raced to build “expert systems” in which computers mimicked the decision-making prowess of a doctor or petroleum engineer or loan officer by capturing their expertise as a set of rules.

Sadly, expert systems didn’t work: systems such as CADUCEUS and PROSPECTOR and MYCIN turned out to work only in very narrow, specific domains. It was too hard to capture the nuance of human decision-making as a set of rules, no matter how numerous or sophisticated the rules.

Flash forward to the second golden age of AI research today. The approach is fundamentally different. Rather than trying to capture expert behavior as a set of rules, researchers are having computers teach themselves using lots of examples and a set of so-called “deep learning” algorithms. Show a computer enough YouTube videos, and the computer will teach itself to recognize cats. Show a computer enough games of Mario Brothers, and the computer will figure out how to ace any level like a boss.

And this is what scares people. If we’re not explicitly programming rules of “catness” into a computer, then the computer is inferring them itself. What else will a computer be able to learn? How quickly? How completely? This fear has led a parade of industry luminaries including Elon Musk, Steve Wozniak, and Stephen Hawking to urge a ban on AI being used for offensive autonomous weapons.

While the fears are understandable, I think they are premature. As Stanford professor Andrew Ng puts it, we may eventually need to worry about overpopulation on Mars, but first we have to set foot on the planet. There are years, perhaps decades, of useful products to build in which computers assist human experts in making better, faster, more informed decisions. How about we start with an auto-correct system that works, rather than one that inadvertently generates jokes for damnyouautocorrect.com?

Chen is a partner at Andreessen Horowitz.
