Illustration by TIME; reference image courtesy of Sarah Chander, photograph by Andreea Belu

When the E.U. announced in June that it was taking key steps toward passing the AI Act, one of the world’s first major laws on the technology, Sarah Chander was already thinking about how far it would go in protecting communities of color.

Before becoming a senior policy adviser in 2020 at Brussels-based European Digital Rights (EDRi), a network comprising over 50 NGOs and experts on digital rights and freedoms, Chander focused on international law and antiracism advocacy. In college, she campaigned against immigration detention facilities in the U.K. “These were issues impacting me and my family and people that look like me,” says Chander, who is British Indian.

Now, Chander, 32, advises the E.U. on improving policy and legislation relating to AI, privacy, and surveillance, issues that have become more urgent in recent years as more governments deploy AI tools and infrastructure to surveil populations and control borders.

In response, Chander is singularly focused on ensuring these harms are addressed by the new AI Act—a likely model for policymakers around the world as they grapple with how to put guardrails on the rapidly advancing technology.

In the early stages of drafting the law, the E.U.’s focus appeared to be on the technical elements of regulating AI systems as products and classifying different AI applications according to the risk they posed to users before they were released to the public. “This was exactly what [EDRi] didn’t want,” Chander says, because it didn’t take into consideration whether the technology should even be used in some instances or against certain communities.

“We basically argued for a big shift away from a purely technical perspective to an accountability perspective so that we could look at AI systems not just as infrastructure or services, but as a whole system that fits into the social fabric of our society,” she says.

Chander and EDRi mobilized a coalition of 150 lawyers, activists, academics, and civil society organizations to demand greater transparency through publicly available databases about how these systems are used, as well as a legal framework of redress for those directly affected. And when it comes to cross-border migration, the technology shouldn’t be used to police migrants at all, says Chander, because it is “so wholly harmful that it cannot be improved.”

In June, the European Parliament accepted many of EDRi’s demands. It agreed on the need for a ban on facial recognition in public spaces and on predictive policing, along with more transparency and legal frameworks for the public. These developments have made Chander “optimistic” ahead of E.U. discussions in the fall. “We’re talking about the first AI legislation in the world, and these are complicated debates with many competing interests,” she says.

Write to Astha Rajvanshi at astha.rajvanshi@time.com.