Saffron Huang and Divya Siddarth

Co-founders, Collective Intelligence Project

Today, building AI is as much an infrastructural challenge as it is a scientific one: it requires marshaling vast quantities of computing hardware, data, and human talent. This means that only the world’s wealthiest tech companies, or the startups partnered with them, are battling it out to build artificial general intelligence, a still-theoretical technology intended to surpass human cognitive abilities, and one that may someday put most people out of work.

To Divya Siddarth and Saffron Huang, that doesn’t seem like the recipe for a very democratic future. The two founded the nonprofit Collective Intelligence Project (CIP) to grapple with this tricky reality. Core to their work is the question of how ordinary people can have a say in the way transformative technologies like AI are built.

Siddarth and Huang hold two ideas simultaneously. First: that AI can help us unlock new ways of harnessing our collective intelligence, for example by helping us find better ways of cooperating at scale toward shared goals. Second: that the incentive structures already built into the tech industry can be bad for society at large. We need new techniques for governance, they believe, but not if those techniques serve only to supercharge the companies amassing wealth and power at the expense of the rest of us.

In other words: build a better world while undoing the incentives of the one already in place. Siddarth and Huang rolled up their sleeves. In 2023, they convinced the AI company Anthropic to help them run what they called an “alignment assembly.” In a seminal study, they brought together 1,000 everyday people to write a list of values that an ideal AI assistant should possess. The participants discussed issues related to discrimination, whether AI should value the common good, and even how “toxic” AI should be allowed to act. Anthropic researchers then tested whether the rules those people agreed upon could effectively govern the company’s chatbot, Claude. They found that doing so made Claude slightly less biased without degrading its performance.

Anthropic even took a principle that originated in that public input process—that chatbots should ensure their responses are considerate of and accessible to people with disabilities—and added it to the list of rules the live version of Claude currently follows. In September, Huang left CIP to join the Anthropic team that carried out this work.

Before her departure, she and Siddarth spent a stint at the U.K.’s AI Safety Institute, a government body tasked with addressing the societal challenges of adopting artificial intelligence. At the Institute, they set up a program for soliciting public comment on what types of risks and dangers it should test for in tech companies’ new AI releases.

There’s still a big gap between those limited public consultations and a tech industry that genuinely reflects the will of the people. But Siddarth and Huang are optimistic about the future of AI governance. “People can be so exceptionally good, compassionate, generous, and brave,” Siddarth says. She’s aware, though, that this is not always true: the world can also be “transactional and cruel.” Designing better governance systems using collective intelligence, she believes, “is about bringing out the best and canceling out the worst.”

*Disclosure: Investors in Anthropic include Salesforce, where TIME co-chair and owner Marc Benioff is CEO.

Write to Billy Perrigo at billy.perrigo@time.com