
This post was updated at 3 p.m., September 8, 2023

In the first class of the introductory course on AI that Yi Zeng took as a sophomore in Beijing, the professor screened Steven Spielberg’s 2001 film A.I. Artificial Intelligence in full.

In one scene, two researchers discuss simulating a human brain in order to build a robot that could love. Yi was inspired. “This is what I wanted to [do with my] life. That is, to build a robot that can love the human species,” he tells TIME, speaking from the Chinese capital. Yi, 41, has spent his career in Beijing as a professor at the Chinese Academy of Sciences trying to build “brain-inspired intelligence”—AI systems designed to be as similar as possible to human brains—in the hope that they would have a sense of morality.

Around 2016, Yi became more concerned about risks posed by AI systems, and he began to spend more of his time working with policymakers to develop rules for beneficial AI development. Three years later, Yi directed the team that wrote the Beijing AI Principles. He tells TIME he is “highly involved in policymaking through national governance committees.”

Yi also pushes for greater international cooperation. “I feel it’s my duty to let the world know that scientists and policymakers from China are really having similar ideas,” he says. He helped develop UNESCO’s Recommendation on the Ethics of Artificial Intelligence, is involved in a number of informal diplomacy initiatives, such as the International Workshop on Cross-Cultural AI Ethics and Governance, and recently addressed a meeting of the U.N. Security Council.

The tense geopolitical climate and China’s use of AI for the mass surveillance and persecution of minorities have limited cooperation between China and the U.S. But Yi thinks that the countries have much in common in terms of attitudes toward risks from AI. A recent poll found that 62% of U.S. voters are concerned about AI, compared with just 21% who are excited about it. As part of his work with the Center for Long-Term Artificial Intelligence in Beijing, Yi surveyed the Chinese public and found that 91% of respondents supported mandatory safety and ethics frameworks for AI models. “We don’t really have a choice,” he says. “We must collaborate.”

Correction, Sept. 8

The original version of this story mischaracterized Yi Zeng’s undergraduate education. He did not study at the Chinese Academy of Sciences.

Write to Will Henshall at
