Illustration by TIME; reference image courtesy of Joy Buolamwini

As the Biden Administration grapples with how to set up guardrails around rapidly developing AI technology, Joy Buolamwini joined the President for a closed-door roundtable in June. The founder of the Algorithmic Justice League (AJL) voiced concerns about facial recognition and biometrics already being used in policing, education, and health care.

Buolamwini, a Ghanaian-American-Canadian computer scientist and digital activist, founded the AJL in 2016. Based in Cambridge, Mass., the organization uses research and art to highlight the social impact and potential harms of AI.

“The rising frontier for civil rights will require algorithmic justice,” Buolamwini writes in her upcoming book, Unmasking AI: My Mission to Protect What Is Human in a World of Machines, which aims to show how racism, sexism, colorism, and ableism can result in many people being underrepresented and vulnerable to bias in the creation of algorithms.

Answering questions by email, Buolamwini explains why she calls herself a “poet of code,” breaks down the work the AJL is doing, and shares her thoughts on regulating AI. (This interview has been condensed and edited for clarity.)

TIME: What sparked your interest in AI bias?

Joy Buolamwini: As a graduate student at MIT, I noticed that the AI-powered projects that I built for coursework did not perform as well on my darker-skinned face as they did on the faces of my lighter-skinned classmates. These personal experiences inspired me to test AI products from major tech companies to see if what I experienced was indicative of larger biases or one-off experiences.

How have you used art and storytelling to raise awareness?

Through filmmaking, art exhibitions around the world, evocative audits of AI systems, my widely viewed talks, and now my memoir and AI explainer, Unmasking AI, I use storytelling to humanize the cost of AI, and the possibilities. You don’t need a Ph.D. from MIT to be part of the conversation about how AI shapes our lives. Research was not enough for me to teach everyday people whose life opportunities and pathways were being shaped by often faceless and nameless technology. So I began to find my voice as a poet of code, a person who could penetrate technical jargon to highlight the harms and equip people with knowledge to resist harmful uses of AI.

Can you give us some examples of the work the Algorithmic Justice League does?

AJL exists to prevent AI harms and increase accountability in the use of AI systems.

Our work started with research. We are perhaps most known for our research papers showing harmful bias from companies like Microsoft, IBM, and Amazon. All of these companies stopped providing their facial-recognition systems to law enforcement after our research uncovered bias in some of their commercial products. We also recognize the “ex-coded,” individuals who have been harmed by AI systems, with the Gender Shades Justice Award, which comes with a $5,000 unrestricted financial award. We gave the inaugural prize to Robert Williams, who was wrongfully arrested in front of his wife, two young daughters, and neighbors.

We noticed that currently, if people are harmed by AI systems, there are few places to go to share their experience, which is why AJL is building an AI harms reporting platform. We have done campaigns on the TSA’s and IRS’s uses of AI-powered facial recognition. Hundreds of travelers have shared their experiences with the TSA’s use of facial recognition. Many say they were unaware they could opt out, or felt shamed if they attempted to assert their right to opt out.

To invite people who might otherwise not view themselves as part of the tech conversation, we created the Drag Vs AI workshops, where participants alter their faces to explore the limitations of AI systems that attempt to read human faces.

What can governments do to regulate AI?

In the United States we still do not have federal regulation for AI-powered facial-recognition and biometric technology, the types of AI tools that led to the wrongful arrest and imprisonment of Porcha Woodruff, who was 8 months pregnant when detained. When she was released on a personal bond, she had to rush to the hospital, and the stress sent her into contractions and dehydration. It is far past time to pass federal legislation like the Facial Recognition and Biometric Technology Moratorium Act of 2023 so no one has to be worried about being falsely arrested because of facial recognition or iris scans. The White House released a Blueprint for an AI Bill of Rights. Now is the time to operationalize the principles outlined in the blueprint, including protections from algorithmic discrimination. I am encouraged by the movement in the E.U. to pass the AI Act, which includes a ban on the live use of facial recognition in public spaces and uses a risk-based framework to put restrictions on AI systems used in high-stakes domains like education and employment.

What is the one thing you wish more people understood about AI?

The future is never written in stone. We have agency to decide what the AI future will be. It is not too late to change the trajectory of the development of AI systems so the benefits are more evenly distributed and the burdens do not fall disproportionately on the already marginalized.

How is the current moment in AI different from what you would have predicted five years ago?

Sadly some of what I warned about years ago is becoming reality: false arrests, AI discrimination in health care, education, and employment. I am encouraged to see more media organizations using their platforms to show the present dangers of AI and highlighting real people being impacted and pushing back.

I wouldn’t have predicted Rolling Stone would feature five women who risked their careers and reputations to warn about the AI harm that now grabs headlines. People are thirsty to see themselves represented in the conversation about the future of AI.
