
Timnit Gebru co-wrote one of the most influential AI ethics papers in recent memory, arguing that the biases so prevalent in large language models were no accident, but rather the result of an intentional choice to prioritize speed over safety. Gebru lost her job in 2020 as the co-lead of Google’s ethical-AI team after she refused a demand to remove her name from the paper, which was later published independently. (Gebru says she was fired; Google says she resigned.)

Since those events, Gebru has become a torchbearer in the world of responsible AI. As founder and executive director of the Distributed AI Research Institute (DAIR), she has built a space for a kind of interdisciplinary AI research rarely found at Big Tech companies. DAIR’s hybrid model of research and community building has so far focused on two parallel tracks. On one track, Gebru and her colleagues interrogate the tech industry’s dependence on poorly paid, precarious workers, many of them from the Global South. “A lot of people want to imagine the machine is sentient and that there are no humans involved,” Gebru says. “That’s part of a concerted effort to hide what’s going on.”

On the other track, Gebru has dedicated herself to researching the ideological roots of some of the technologists attempting to build artificial general intelligence. Those ideologies—Gebru and her colleague Émile P. Torres coined the acronym TESCREAL as shorthand for a long list of obscure “isms”—not only have unsavory links to debunked pseudosciences like eugenics, the pair argue, but also predispose their followers to tolerate almost any means (rampant inequality, poverty, and worse) so long as the ends (humanity safely creating AI, for example) are made fractionally more likely. “Young computer-science students might feel like they have to follow this trajectory,” Gebru says, referring to tech companies further concentrating wealth and power, extracting natural resources and human labor, and widening surveillance-based business models. “It’s important to look at: What is the ideological underpinning? If you know that, you can ask, ‘What if I had a different ideological underpinning? How would technology be developed in that case?’”


Write to Billy Perrigo at billy.perrigo@time.com.
