The next generation of artificial intelligence (AI) promises to have an impact as big as the mobile revolution or the Internet revolution before that. The positive opportunity before us is virtually boundless—but for AI to meet its vast potential, it will require the right approach.
AI is already revolutionizing our lives. It can detect patterns that humans can neither see nor anticipate. English speakers can make phone or video calls to speakers of Hindi or Chinese. But the next leap will be Inventive AI—machines trained on a given data set that can tackle a wider range of problems. As society grapples with the increasing volume and complexity of information, more-flexible AI will play a key role in helping us make sense of it. Eventually it will be possible to give a computer unstructured data—say, spreadsheets used to manage business records—and receive quality advice on improving operations. All it will take is a training data set that is large enough, computers that are powerful enough and algorithms that are adaptable enough.
Imagine an adaptive learning system that analyzes medical records for hospital patients. AI could sort through a patient’s entire medical history in an instant, offering relevant information quickly to doctors—preventing a negative reaction to a medication, for instance. AI could also highlight a pattern of risk factors and allow the medical team to achieve better results through preventive care and early detection of disease. These benefits might not result from specific questions a doctor posed to the AI. The software would surface such insights proactively, saving the doctor’s time—and maybe saving lives.
But though computers can assist us, they are not like us. We can make value judgments, think introspectively and, in truth, compare apples to oranges. Our wealth of experience gives us creativity—but it also makes us vulnerable to accumulating conscious and unconscious biases. In contrast, AI systems today receive their “training” using very specific collections of relevant data. These data sets can be large but are inherently much more limited than human experience. That can be a plus—AI does not have the complex emotions that guide human decision-making, so it could avoid most if not all of these inherent biases.
Based on the work of DeepMind, the AI research company, we believe that makers of AI should adhere to the following principles. First, AI should benefit the many, not the few. In practical terms, AI has the potential to help the doctor and the patient, the business and the employee. As a society, we should make use of this potential and ensure that AI always aims for the common good.
Second, AI research and development should be open, responsible and socially engaged. As we continue developing AI, new questions will continue to arise, and we will need to answer them collaboratively, including everyone from engineers and scientists to philosophers and activists. In particular, those whose industries will change as a result of AI will need to be part of this global conversation.
Third, those who design AI should establish best practices to avoid undesirable outcomes. Is a system doing what we need? Are we training it using the right data? Have we thought through the ways any system might yield unintended side effects—and do we have a plan to correct for them? There should be verification systems that evaluate whether an AI system is doing what it was built to do.
We are building tools that humans control. AI will reflect the values of those who build it. Ultimately, our dream for AI is to give people more choices about how they live their lives. Under our control, it can take the drudgery out of work and free up many more hours for creative pursuits. And applied collaboratively, AI could help bring about solutions to the world’s most complex problems.
In the end AI is technology, and technology is just a tool. It’s up to us to use that tool well—to harness its power to improve our lives, and the lives of people everywhere.
Schmidt is the executive chairman of Alphabet Inc.; Cohen is the director of Google Ideas
This appears in the December 28, 2015 issue of TIME.