Crawford is a senior principal researcher at Microsoft Research, a professor at USC Annenberg and the author of Atlas of AI.

Artificial intelligence is now one of the most concentrated industries in the world. Dominated by a handful of tech giants and deployed at planetary scale, AI already influences high-stakes social institutions in education, criminal justice, hiring and welfare. AI is remapping and intervening in the world, widening wealth inequality and entrenching power asymmetries. Yet so far the sector has largely escaped regulation, despite affecting the lives of billions of people, even when its products are unproven or potentially harmful.

The COVID-19 pandemic has only accelerated this trend. Many AI companies are now pitching emotion recognition tools (ERTs) for monitoring remote workers and even schoolchildren. These systems map the “micro-expressions” on people’s faces via their video cameras, then predict their internal emotional states from a short list of supposedly universal categories: happiness, sadness, anger, disgust, surprise and fear.
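To make concrete what these systems do, consider a deliberately simplified sketch of the forced-choice classification at their core. Everything in it, from the feature names to the weights, is invented for illustration and corresponds to no real vendor’s product.

```python
# A deliberately simplified sketch of the pipeline ERT vendors describe:
# score a fixed set of facial features against six preset emotion labels
# and return the top match. Every feature name and weight here is
# invented for illustration; this is not any real vendor's system.

EMOTIONS = ["happiness", "sadness", "anger", "disgust", "surprise", "fear"]

# Hypothetical weights over three made-up facial features:
# (smile curvature, brow lowering, eye widening)
WEIGHTS = {
    "happiness": (0.9, -0.4, 0.1),
    "sadness":  (-0.6,  0.3, -0.2),
    "anger":    (-0.3,  0.9,  0.2),
    "disgust":  (-0.2,  0.6, -0.4),
    "surprise": ( 0.1, -0.5,  0.9),
    "fear":     (-0.1,  0.4,  0.8),
}

def classify(features):
    """Return the single highest-scoring label from the fixed list.

    Note the structural assumption critics challenge: whatever the face
    actually conveys, the system must answer with one of six categories.
    """
    scores = {
        label: sum(w * f for w, f in zip(weights, features))
        for label, weights in WEIGHTS.items()
    }
    return max(scores, key=scores.get)

# A pronounced smile is read as "happiness", whatever the person feels.
print(classify((0.8, 0.1, 0.2)))
```

The design choice to highlight is the fixed label set: the system can only ever answer with one of six emotions, which is precisely the “universal emotion” assumption the research discussed below calls into question.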

This industry is predicted to be worth $56 billion by 2024, yet there is considerable scientific doubt that these systems detect emotional states accurately at all. A landmark 2019 review of the available research found no reliable correlation between facial expression and genuine emotion. “It is not possible to confidently infer happiness from a smile, anger from a scowl, or sadness from a frown,” the review concluded. Even so, AI companies have built on this “universal emotion” theory as a way to analyze people at scale. ERTs are now being used in job interviews, in classrooms, in airport security and in law enforcement.

Resistance to this highly controversial technology is growing. The influential Brookings Institution released a report in early August arguing that ERTs should be banned outright from use by law enforcement, citing their unreliability and the dangers they pose to civil liberties. The European Union is the first jurisdiction to attempt an omnibus proposal to regulate AI, but the draft AI Act has its pitfalls. It would, for example, ban most “real-time” biometric ID systems, yet it fails to define what, exactly, real time means. As the scholars Michael Veale and Frederik Zuiderveen Borgesius have observed, a CCTV system that runs facial-recognition software live would be illegal, but one that analyzes faces in footage after an event, like a political protest, would be fine.

Clearly, we need far stronger protections against the corrosive effects this kind of technology has on society. Too many policymakers fall into the trap of what the University of Chicago academic Alex Campolo and I have labeled “enchanted determinism”: the belief that AI systems are both magical and superhuman, beyond what we can understand or regulate, yet deterministic enough to be relied upon for predictions about life-changing decisions.

This effect drives a kind of techno-optimism that can directly endanger people’s lives. For example, an ongoing review published in the British Medical Journal assessed 232 machine-learning algorithms for diagnosing COVID-19 patients and predicting their outcomes. It found that none of them were fit for clinical use. “I fear that they may have harmed patients,” said one of the study’s authors.

The growth of AI might seem inevitable, but it is being driven by a small, homogeneous group of very wealthy people based in a handful of cities without any real accountability. To contend with AI as a political, economic and cultural force, then, we urgently need stronger scientific safeguards and controls. Many countries around the world have robust regulations to enforce scientific rigor and thorough testing when developing medicines and vaccines. The same should be true for AI systems, especially those that are already having a direct impact on people’s lives.
