Illustration by TIME; reference image: David J. Phillip—AP Photo

In early August, some 4,000 hackers gathered in Las Vegas to break chatbots from OpenAI, Google, and Anthropic. Hackers who were able to persuade the AIs to act in ways that broke their own rules—giving the recipe for anthrax, say, or spewing racist screeds—were rewarded with points. The winner found several vulnerabilities, including one where a chatbot revealed a credit card number that it had been told to keep secret.

The event was co-organized by Rumman Chowdhury, an AI ethicist and the founder of Humane Intelligence, a nonprofit that specializes in this so-called red teaming of AI systems. The practice takes pointers from hacker culture, where prizes for identifying security flaws by stress-testing computer programs are commonplace. The thinking is that by incentivizing many people to try as hard as possible to break chatbots and other forms of AI, developers can find and fix problems that could be much more dangerous if left to be discovered after their release to the public. The Biden Administration was a prominent backer of the Las Vegas event. “This independent exercise will provide critical information to researchers and the public about the impacts of these models, and will enable AI companies and developers to take steps to fix issues found in those models,” the White House said in a statement.

Chowdhury knows the dangers of AI better than most. Until Elon Musk fired her, she was the head of Twitter’s machine-learning ethics team, where she led research that found the company’s AI algorithms disproportionately boosted right-wing news sources during the 2020 U.S. elections. After ChatGPT was released, she asked the bot for information about her. It reported that she was a social media influencer with a large collection of shoes. “It’s not just that it was factually incorrect,” Chowdhury says. “It was very gendered.” (This interview has been condensed and edited for clarity.)

TIME: What is your biggest concern about the way that AI is being adopted right now?

Rumman Chowdhury: That it leads to an increased centralization of power and wealth into fewer and fewer people’s hands. For one, it is actively destroying certain industries and certain people’s livelihoods. And second, these people don’t reflect the opinions, perspectives, needs, and wants of the actual majority of the world. And yet they impose their technology on all of the world. Their wealth is actually built on using what we have all freely put out into the world. So for example, what we’ve posted on Reddit, the pictures we have of ourselves on the internet, the conversations we’ve had with our friends on social media, the book reviews we’ve posted online—they’ve literally scraped what we’ve all freely contributed to the internet to make it an interesting and novel place. And they’re monetizing it and charging us for the output that they gain wealth from, while also taking away the livelihoods of many different people.

What is the thing that you wish more people understood about AI?

That it’s not magic. It’s simply math, put into code. The first thing is, almost every issue that has come up with artificial intelligence has an analogue that’s been dealt with in some other way in the past. The second is that people think of programmers or AI developers as magical or uniquely capable entities that are smarter than everybody else. That is wildly untrue, and it’s a myth often perpetuated by the people in these fields. These people cultivate a mystique around the technology they build with the sole purpose of excluding others. The third thing is, people are too scared to critique AI, to question it, or to ask if it’s doing the right thing. And they think it is better at making decisions than even an expert in a particular field could be.

What do even some experts get wrong about artificial intelligence?

This idea of sentience. It is amazing to me how many people, even experts, mistake a user design that mimics chat functionality for sentience. To be so duped by a natural, easy-to-use chat interface that produces reasonably readable language, and to be so easily fooled by that! We have not even figured out where consciousness comes from in humans.

Write to Billy Perrigo at billy.perrigo@time.com.
