Prior to the invention of Nightshade by University of Chicago computer science professor Ben Zhao and his team, artists could only request that developers not use their work to train AI models, with no incentive or enforcement mechanism to ensure compliance. Zhao’s tool helps “poison” artwork by convincing an AI model that a picture of, say, a dog is actually a cat. These changes to a piece’s pixels are imperceptible to humans, but reliably fool machines. In just the first five days after it launched in January 2024, Nightshade was downloaded over 250,000 times. Since then, the application has amassed nearly a million downloads, Zhao says.
Zhao, 48, has become known for developing adversarial tools to fight technological overreach, having previously built bracelets that keep smart speakers from spying on people and tools that help fool facial recognition systems. He says he’s seen a lot of “resentment and frustration” from artists whose livelihoods are being disrupted by image generation models.
“You really have to wonder what this means for the future of human creativity,” Zhao says. Without this pipeline of original human art, he worries that “these AI systems are going to be stuck in a static state, and we're going to be treated to the same content regurgitated over and over.”
The popularity of Zhao’s tool might make developers think twice before using art without the artist’s consent.