Illustration by TIME; reference image courtesy of Deborah Raji

In 2017, while interning at the machine-learning company Clarifai, Inioluwa Deborah Raji had an alarming realization.

Raji, 27, was helping the startup train a content-moderation model intended to filter out explicit images when she noticed that the model was disproportionately flagging non-explicit content containing people of color. The data the model was being trained on lacked diversity, and the product reflected it. “As a default, a lot of the models that we developed had [data] where a pornographic image that was supposed to represent the [explicit] content was more diverse than the stock images that were supposed to represent safe content.” In other words, the program was filtering the world to be whiter than it actually is.

When she raised the issue, she learned that the lack of diversity wasn’t a fluke, but an industry norm. “If I would say we need more diverse data, the response I got was, it’s hard enough to get any data, why would we be thinking about making more complex choices?” (Clarifai did not respond to TIME’s request for comment.)

The experience prompted Raji to shift her focus away from the startup world and toward AI research, where she began to focus on how AI companies could ensure that their models do not cause undue harm—especially among populations that are likely to be overlooked during the development process. “It became clear to me that this is really not something that people in the field are even aware is a problem to the extent that it is,” she says.

Since then, Raji’s work has focused on developing methods to audit AI systems both within and outside of the companies creating them. She worked with Google’s Ethical AI team to introduce a more holistic internal assessment process for artificial intelligence systems. She also collaborated with the Algorithmic Justice League to create strategies for “external auditing” on its Gender Shades audit project, which evaluated the accuracy of AI-powered gender-classification tools created by IBM, Microsoft, and Face++ that were intended for use on anything from classifying images to identifying faces.

While many of the conversations about regulating AI imagine the technology being used in some far-off future, Raji says there needs to be a focus on the near term. When companies fail to properly assess AI systems before releasing them, the implications are real, she notes. Algorithms might misidentify suspects, leading to wrongful arrests, or perpetuate housing discrimination against people of color. “The reality of it is that a lot of the deployments are already starting—and are pretty haphazardly executed,” Raji says.

In the absence of governmental regulation, Raji notes that it’s currently up to developers to provide transparent evaluations of their products and the harms they might pose—though they might not be the most reliable sources. “A company’s product is evaluated on a benchmark that tells a very nice, clean story,” she says. “They’re not mandated to protect their users on anything from privacy to communicating honestly about how well the system will work for that user.”

Through a fellowship with the Mozilla Foundation, a global nonprofit focused on internet protection, she’s been working on identifying open-source audit tools that would allow any stakeholder—from government officials to those likely to be impacted by the systems—to better understand and challenge products before they are implemented at broader scales.

“When you start questioning these claims at a basic level, it just reveals that the whole question is much more complex than it seems on the surface,” she says. “It’s like opening up Pandora’s box.”

Write to Simmone Shah at simmone.shah@time.com.
