Clément Delangue is the CEO of Hugging Face, an open-source, for-profit machine-learning platform where researchers from around the world convene to share their AI models, datasets, and best practices. In an industry where the cutting edge is dominated by big tech companies, Delangue says efforts like New York-based Hugging Face are an essential counterweight, helping distribute AI to a far broader base of users and developers. At the same time, Hugging Face can enforce a set of community standards that prevent harmful AI models from proliferating, he says. (Investors in Hugging Face include Salesforce, where TIME co-chair and owner Marc Benioff is CEO.)
But the rise of open-source AI threatens to be a double-edged sword: it could put powerful AI capabilities into the hands not just of people with good intentions, but also of those who would abuse them. (This interview has been condensed and edited for clarity.)
TIME: A recent memo written by a Google AI researcher argued that the open-source community was fast overtaking both Google and OpenAI in terms of the most significant innovations in the field. What was your reaction to that memo?
Delangue: I don’t know if I really want to comment on that, because it was one opinion leaked from one company. But more broadly, I think it’s good that it’s being recognized that open science and open source are the basis of what AI has been built on in the past few years. Everyone has been building very collaboratively. I think it’s important for people to remember that, so that when we decide on next steps, we can decide if we want something that is controlled by a few companies, or a more collaborative approach. Because these are two different paths—two different futures in front of us.
There’s been a lot of discussion recently about the risks of building more complex and capable AIs. There are some arguments that distributing this kind of advanced AI more broadly is dangerous, and that the fewer big players you have, the easier it is to keep a short leash on these dangers. What do you think of that argument?
If you look at society, the biggest risk is actually to have power and understanding concentrated in the hands of a few. Especially for a technology like AI, and especially if these organizations are not designed to cater for the public good. Companies might have good intentions, but they are private profit-seeking organizations by nature. If you look at long-term healthy development of the technology, we believe that more democratization creates more counterpowers and fewer risks, because it empowers and enables regulation. Regulators can’t regulate something that they don’t understand, that they don’t have transparency into. In that way, we approach things a little bit differently to some other organizations.
What do you see as the role Hugging Face is playing in the AI community?
We’ve been lucky to become the most-used platform to share science, models, datasets, and apps. We are a big proponent of more transparency, more openness, more distribution of power in terms of AI capabilities. That’s what we’re trying to push, while at the same time respecting the different approaches that companies take. On the Hugging Face platform now, you can release models that you only share with a subset of people—for example if you want to share a model only for research purposes. We’re trying to maximize safe and ethical releases, while making sure we build a platform with value-informed processes within the organization.