
Margaret Mitchell spent three years as the co-lead of Google’s AI-ethics team, along with Timnit Gebru. But after the pair co-wrote a paper in 2020 arguing that large language models entrench social inequalities, partly because of corporations’ decisions to prioritize size over safety, Mitchell and Gebru say they were forced out. (Google said Mitchell was fired for violating its code of conduct, and that Gebru resigned.) Mitchell has since emerged as a chief critic of what she says is a lack of diversity and inclusion at AI companies, one that often degrades the quality of the technologies they build. Now, Mitchell is chief AI-ethics scientist at Hugging Face, a developer-facing AI startup. (Investors in Hugging Face include Salesforce, where TIME co-chair and owner Marc Benioff is CEO.) There, she is focused on ensuring that open-source AI carries as many benefits, and as few harms, as possible. (This interview has been condensed and edited for clarity.)

TIME: What about AI keeps you up at night?

Margaret Mitchell: It’s less about the technology itself and more about the culture that is creating it. There’s an insularity of thinking, a mindset that disproportionately excludes women and people of color, which means that every direction forward is one not well informed by a diversity of human values. On the one hand, these companies are starting to claim they want to make systems that are aligned to human values (which is great; women and people of color have been talking about this for 20 years, by the way), but they’re doing it without understanding what value pluralism is or what it means to imbue values in a system, and they’re approaching it only in an oversimplified mathematical way. The apparent inability of some of the companies putting out this technology to reflect on how they’re creating the very problems they publicly say they want to avoid or mitigate is just infuriating. I don’t stay up at night worried, so much as infuriated.

When we talk about the risks and harms of AI, how much of the danger comes from the concentration of power in the hands of a few, and how much from proliferation: large language models, diffusion models, and other forms of AI giving small, bad actors more power than they previously had?

That’s a really common misunderstanding. Bad actors will bad act, whether it’s a closed or an open system. And some of the most effective attacks have been within closed systems, in general, because those are the things that are more widely deployed to the masses. People are constantly looking for security vulnerabilities [in open systems] and constantly fixing them. I agree with you, just to be clear, that malicious people can use open-source software to create bad things. That’s true. But I’m not sure that there’s a net harm from bad actors [having access to open models], for a lot of reasons. The vector of bad behavior is something like social media platforms, or news sites, or that sort of thing. So in terms of controlling for bad actors, it’s those avenues of distribution that I think should continue to work on safeguards. In general, with these open models [that could be capable of generating malicious material], it’s something you can have on your small personal computer; it’s not something massively deployed. To massively deploy it, you need to use some other service, and that other service is a limiting factor.

If you could change one thing about the way that people talk about AI right now, what would it be?

I wish that polysemy (the fact that the same word can be used to mean two different things) wasn’t being abused. “Learn” means something different in machine learning than it does in human intelligence, and that difference is being abused to pretend they’re the same thing. People are misusing terminology to misdirect, and the public gets confused. That’s infuriating to see as someone who actually understands the language.
