How to Make Driverless Cars Behave

Imagine you’re winding along the Pacific Coast Highway in one of Google’s self-driving cars, with the ocean on your right and the hills of Malibu across the opposite lane to your left. Just as you’re rounding one of the road’s blind corners, another car whips around the bend in the opposite direction. Its brakes have failed, and it’s headed for your lane.

With little room to maneuver and no time for human intervention, your robot car faces a decision. It could turn inward and slam on the brakes to avoid a head-on collision, but that could send the other car over the cliff edge. Alternatively, it could brace for impact, keeping both cars on the road but potentially injuring you, along with the other car’s passengers.

In a crash, we don’t have time to weigh morality; studies show we act largely on instinct. But for a computer, a fraction of a second is plenty of time to ponder an ethical decision, provided it’s been programmed to think that way.

The problem is that the answers aren’t always clear-cut. Should a driverless car jeopardize its passenger’s safety to save someone else’s life? Does the action change if the other vehicle is causing the crash? What if there are more passengers in the other car? Less morbidly, should a Google-powered car be able to divert your route to drive past an advertiser’s business? Should the driver be able to influence these hypothetical decisions before getting into the vehicle?

As driverless cars get closer to hitting the road, moral dilemmas are something the auto industry will need to consider. And while it’s still early days for the technology, a conversation about ethics is starting to happen.

The Daimler and Benz Foundation, for instance, is funding a research project on how driverless cars will change society. Part of that project, led by California Polytechnic State University professor Patrick Lin, will focus on ethics. Lin has arguably thought about the ethics of driverless cars more than anyone. He’s written about the topic for Wired and Forbes, and is currently on sabbatical working with the Center for Automotive Research at Stanford (CARS, of course), a group that partners with auto industry members on future technology.

Over the last year, Lin has been making the case to the auto industry that it should be thinking about ethics, through briefings with Tesla Motors and auto supplier Bosch and talks at Stanford with major industry players.

“I’ve been telling them that, at this very early stage, what’s important isn’t so much nailing down the right answers to difficult ethical dilemmas, but to raise awareness that ethics will matter much more as cars become more autonomous,” Lin wrote in an e-mail. “It’s about being thoughtful about certain decisions and able to defend them–in other words, it’s about showing your math.”

In a phone interview, Lin said that industry representatives often react to his talks with astonishment, as they realize driverless cars require ethical considerations.

Still, he said that it’s early days for driverless cars–we’re a long way from having computers that can read a situation like a human can–so manufacturers don’t yet have to worry too deeply about solving ethical scenarios. “We’re not quite there yet where we can collect a lot of the information that goes into some of these ethical dilemmas,” Lin said.

Perhaps that explains why auto makers aren’t eager to have the discussion in public at the moment. BMW, Ford and Audi, each of which is working on automated driving features, declined to comment for this story. Google also wouldn’t comment on the record, even as it prepares to test fully autonomous cars with no steering wheels. And the auto makers that did comment are focused on the idea that the first driverless cars won’t take ethics into account at all.

“The cars are designed to minimize the overall risk for a traffic accident,” Volvo spokeswoman Malin Persson said in an e-mail. “If the situation is unsure, the car is made to come to a safe stop.” (Volvo, by the way, says it wants to eliminate serious injuries or deaths in its cars by 2020, but research has shown that even driverless cars will inevitably crash.)

John Capp, GM’s director of electrical and control systems research, said in an interview that getting to the point where a driverless car needs to account for ethics will be a gradual, step-by-step process. (GM plans to offer a “Super Cruise” feature for highway driving by 2018, but it’ll still require the driver to stay alert and take the wheel in emergencies.)

“It’s going to be a while before cars have the capability to completely replace a driver, and we have no intention of promising that to a driver until the technology is capable of doing it,” Capp said.

Capp has a point, in that even Google’s recently announced city-dwelling cars have only a rudimentary ability to detect what’s around them. They can’t, for instance, distinguish an elderly person from a child, or figure out how many people are in another vehicle. But as Lin points out, it’s likely that these cars will gain more advanced sensors over time, improving their ability to make decisions on the driver’s behalf.

Once that happens, the programming itself may be the easy part. Stanford’s CARS group, for instance, has already developed tools that let auto makers code morality into their cars. It’s one part of a larger framework that CARS is offering to auto makers, covering all aspects of driverless car software.

“These are mathematical problems that we can describe and solve,” Chris Gerdes, a Stanford engineering professor and the program director at CARS, said in an interview. “And so it’s really up to manufacturers, then, if they are interested in adopting some of these ideas, in whole or in part.”

So far, Stanford’s partners–including Audi, Volkswagen, Ford and Nissan–have been more interested in other aspects of the software, Gerdes said. But that’s starting to change now, as Gerdes and Lin have been raising awareness about ethics.

Gerdes noted that the framework doesn’t actually define what’s morally right and wrong. In other words, the dirty work of answering centuries-old philosophical debates would still have to be done by auto makers. “The question of what ethical framework is right is something that we don’t really have an answer for,” he said. “That’s where I think the discussion needs to take place.”

Noah Goodall, a researcher at the Virginia Center for Transportation Innovation and Research, has started thinking about how cars might actually distinguish right from wrong. In a paper due to be published this year, Goodall proposes a series of steps for programming robotic cars, designed to get more complex as technology advances.

In the first phase, driverless cars would simply try to minimize overall damage, guided by a few basic principles. A car might, for instance, choose multiple injuries over a single death, which could settle the scenario at the top of this story, and property damage over human harm. If a car faces a choice between two similar injuries, it could take inspiration from the medical field, which assigns allocation scores to rank organ transplant recipients.
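To see what that first phase might look like in practice, here is a minimal sketch of a rule-based harm minimizer: each candidate maneuver is scored against a hand-set hierarchy in which deaths cost more than injuries and injuries cost more than property damage, and the car picks the cheapest option. The weights, names and scenario below are illustrative assumptions, not code from Goodall’s paper or any real vehicle.

    # Hypothetical "phase one" harm minimizer. Weights and scenario are
    # illustrative assumptions, not taken from Goodall's paper or a real car.
    COST_PER_DEATH = 1_000_000       # a death outweighs any plausible injury count
    COST_PER_INJURY = 10_000
    COST_PER_PROPERTY_UNIT = 100

    def expected_harm(outcome):
        """Score one predicted outcome of a maneuver (lower is better)."""
        return (outcome["deaths"] * COST_PER_DEATH
                + outcome["injuries"] * COST_PER_INJURY
                + outcome["property_damage"] * COST_PER_PROPERTY_UNIT)

    def choose_maneuver(maneuvers):
        """Pick the maneuver whose predicted outcome minimizes total harm."""
        return min(maneuvers, key=lambda m: expected_harm(m["outcome"]))

    # The blind-corner scenario from the top of this story, roughly encoded:
    maneuvers = [
        {"name": "turn inward and brake",     # other car may go over the cliff
         "outcome": {"deaths": 1, "injuries": 0, "property_damage": 1}},
        {"name": "brace for head-on impact",  # both cars stay on the road
         "outcome": {"deaths": 0, "injuries": 3, "property_damage": 2}},
    ]

    print(choose_maneuver(maneuvers)["name"])  # -> "brace for head-on impact"

Under those assumed weights, the car would brace for impact rather than risk sending the other car over the edge; change the weights and the answer flips, which is exactly the kind of judgment call the rules themselves can’t settle.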

But Goodall said in an interview that these rules probably won’t cover every circumstance. That’s why he’s proposed a second phase, in which driverless cars learn ethics through simulations of crashes and near-crashes. “Humans would score potential actions and results as more or less ethical, and would be allowed to score outcomes without the time constraint of an actual crash,” Goodall writes in his paper.
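One way to read that second phase is as ordinary supervised learning: each simulated crash becomes a training example, human raters attach an acceptability score to its outcome, and a model learns to predict scores for situations it has never seen. The features, scores and model below are assumptions made for illustration, not Goodall’s actual method.

    # Hypothetical "phase two" sketch: learn an acceptability score from
    # human-rated crash simulations. All features and numbers are made up.
    from sklearn.linear_model import LinearRegression

    # Each simulated outcome: [deaths, injuries, property_damage_units]
    simulated_outcomes = [
        [0, 0, 1],
        [0, 2, 0],
        [1, 0, 0],
        [0, 1, 2],
    ]
    # Scores from human raters: higher means judged more ethically acceptable.
    human_scores = [0.95, 0.60, 0.05, 0.75]

    model = LinearRegression().fit(simulated_outcomes, human_scores)

    # In a real emergency, the car would rank candidate outcomes by predicted score.
    candidates = [[0, 3, 2], [1, 0, 1]]
    print(model.predict(candidates))

The appeal of this approach, as Goodall describes it, is that the humans do their scoring offline with time to deliberate, while the car only applies what was learned when a crash is imminent.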

In the final phase, Goodall expects that computers would be able to explain their decision-making back to us with natural language, so we can tweak their thinking accordingly. It’d be a lot like teaching morality to a child as it grows into an adult.

Even with enough supporting research, it’s unclear who would be responsible for coming up with ethical rules. Auto makers could devise their own standards, or pass the buck onto the insurance industry or lawmakers. In any case, experts agree that the industry will eventually have to conform to standards for how vehicles behave.

“Is it possible that there will be some mandatory code that makes certain decisions uniform or consistent across vehicle types? I would say that’s very likely,” said Robert Hartwig, President and Chief Economist for the Insurance Information Institute.

Hartwig believes public policy makers will ultimately be the ones to require ethical standards. He noted that the aviation industry already relies on similar standards for things like crash avoidance, as defined by international regulations.

“Will there be tussles in terms of what auto makers want to see, what software manufacturers want to see, maybe even what drivers want to see? Possibly, yes,” Hartwig said. “This is not going to be a smooth process, but … in the end, the benefit is going to be clear: fewer accidents, fewer injuries, fewer deaths on the road.”

It’s possible that some of these ethical concerns are overblown, and that as long as the net number of lives saved rises significantly, most people won’t care about a few fringe disasters. The risk, Lin said, is that just a handful of unfortunate cases could mean major payouts by auto makers in liability lawsuits, which in turn could set back further development.

And on a broader level, driverless cars will be the first instance of robots navigating through society on a large scale. How we end up perceiving them could have a huge impact on whether robots become a bigger part of our lives.

“These are going to be driving next to your family and through your streets,” Lin said. “This industry is really going to set the tone for all of social robotics, so they really need to get it right.”
