Robots might not need rights, but they'll need to know right and wrong
The potential benefits of success in AI are huge: AI can be a tool to magnify our intelligence, allowing us to solve problems of poverty, disease, conflict, and ecological degradation, and to usher in an age of peace and plenty. Yet headlines such as "'Sociopathic' robots could overrun the human race within a generation" might lead us to think just the opposite. What's really going on?
Our concern about the future of AI is a consequence of rapid progress in the field combined with a problem as old as King Midas, the mythical leader who got his wish that everything he touched would turn to gold. With intelligent machines, what you ask for is what you get. As the mathematician Norbert Wiener, the father of control theory, wrote in 1960: "If we use, to achieve our purposes, a mechanical agency with whose operation we cannot interfere effectively … we had better be quite sure that the purpose put into the machine is the purpose which we really desire."
Getting the purpose right becomes more important as machines become more intelligent. For machines, "intelligent" usually means "better at carrying out the purpose that we humans supply." For humans, it also means "better at understanding what human purposes really are." Researchers in AI are only just beginning to grapple with this second problem.
I'm optimistic that we will make progress, for two reasons. First, human actions reveal information about our purposes and desires, and there is plenty of data about human actions: almost everything ever written or filmed is about human actions and our responses to them. Second, there are very strong economic incentives to solve this problem long before we reach the level of super-intelligent machines.
A lowly domestic robot that doesn’t understand human values may do something silly—like cooking the cat for dinner when the fridge is empty—and that will be the end of the domestic robot industry. Strange as it may seem, moral philosophy will become a key industry sector. The output could be quite instructive for the human race as well as for the robots.
Russell is a professor of computer science at UC Berkeley.