Illustration by TIME; reference image courtesy of Emily Bender

Emily M. Bender doesn’t consider herself an AI researcher. The University of Washington professor is, first and foremost, a linguist. But her gimlet-eyed research on the dangers of large language models and her withering cross-examinations of the AI hype cycle have made her one of the industry’s most formidable critics.

Long before ChatGPT came along, Bender emerged as something of a machine-learning myth buster, dispelling overblown promises about what AI can do and challenging the idea that these systems are intelligent at all. “You can’t expect a machine-learning system to learn stuff that’s not in its training data,” she says. “Otherwise you’re expecting magic.”

In a prescient 2020 paper, Bender and her co-author illustrated this point by likening large language models to an octopus listening in on a conversation between two people communicating via an underwater cable strung between desert islands. If the octopus spent a lot of time listening in, it might get good enough at predicting each side’s responses that it could cut the cable and impersonate one party to the conversation. But it wouldn’t understand what it was saying or hearing—which would be a problem if one of the islanders needed help fending off a bear attack. There is, Bender argues, “danger inherent in people thinking it understands.”

Bender has continued probing the immediate risks of large language models: They are energy guzzlers in a time of climate crisis. They exacerbate bias and confidently assert falsehoods as facts, polluting the information ecosystem. And they’re often much worse at parsing languages other than English, leaving non-English speakers at risk when these models are built into critical infrastructure, like emergency-response systems. Bender has pushed researchers in the field to at least name the languages they’re using to develop their models, so as not to give the false and potentially harmful impression that these systems work equally well across the non-English-speaking world. Bender has beaten this particular drum so vigorously, in fact, that it’s now known as the “Bender rule.”

While Bender isn’t a policy wonk, in the past year her work has helped shape lawmakers’ views on AI bias and the need for regulation. She’s also become a crucial counterweight to a growing chorus of alarmists warning that AI will soon become so sophisticated it will replace us all. To Bender, the real risk lies in believing these systems are any more like us than an octopus is.
