The singer-songwriter Holly Herndon cannot sing in Spanish. She cannot nail melismatic Arabic vocal runs across multiple octaves. She certainly cannot sing any song you want, on demand, wherever you are in the world.

But her digital twin, Holly+, can do all of that. Working with technologists, Herndon created a vocal deepfake of herself in 2021 by extensively training a neural network on her voice. Now, any amateur musician can use Holly+ to transform their pedestrian voice into hers, perfectly tuned and ethereal.

The idea of handing over your voice for public manipulation might sound dystopian, a gesture of human surrender to our new machine overlords. But Herndon’s intent is the opposite. She created Holly+ to spur her fellow artists to reclaim agency over their careers and creative autonomy in the midst of a technological revolution she feels will dramatically shift how we make and process art. The world is barreling toward an era of “infinite media,” Herndon says, where anyone can rap as Drake or paint as van Gogh. This makes it all the more crucial to give artists the power to determine what happens with their likenesses and voices.

Herndon, 43, is on the front lines of these debates—creating new art with AI tools, while carving out protections for artists. This year, the Berlin-based Herndon co-created a template for allowing artists to opt out of AI training datasets—and two major AI companies, Stability AI and Hugging Face, agreed to heed those requests going forward. If further protections are secured, Herndon believes that AI could unlock unprecedented waves of creativity: “I think it’s a huge opportunity to rethink what the role of the artist is.”

Illustration by TIME; reference image courtesy of Holly Herndon, photograph by Boris Camaca

Herndon has long pushed boundaries around the intersection of art and technology. She holds a Ph.D. from Stanford’s Center for Computer Research in Music and Acoustics. In 2019, she created an album, Proto, with the help of her partner Mat Dryhurst and an AI trained on human voices and instrumental samples. Herndon and Dryhurst also host the popular podcast Interdependence, in which they interview bleeding-edge technologists and philosophers, and proselytize for web3 culture, a crypto-forward vision of the internet.

The couple’s forays into these arenas come in part from a revulsion for the current model of streaming. Herndon says platforms like Spotify are making it harder to earn a living as a musician, and are having a flattening, dulling effect on global music, especially non-pop music that isn’t calibrated for streaming. “It’s a very one-size-fits-all for every genre and every kind of music. It’s all valued at the same per-stream rate, and is supposed to live on the same store with the same interface. To me, this is insane,” she says.

As musicians scrap for fractions of pennies on streaming services, their livelihoods are further threatened by vocal deepfakes. In April, an anonymous artist used an AI filter to simulate the voices of Drake and the Weeknd on the song “Heart on My Sleeve.” It went viral, with some commenters writing that they preferred the song to the actual work of the two superstars. TikTok has since been flooded with vocal deepfakes created without consent or revenue sharing for the original performer: AI Frank Sinatra singing Lil Jon’s profane “Get Low,” for instance.

Herndon started working on Holly+ long before the current mainstream generative-AI craze. She describes the AI as a “provocation” and an attempt to “use technology to allow us to be more human together.” While relinquishing control over one’s own voice might sound terrifying, Herndon says it actually ended up being freeing. “It was so beautiful to see myself conveyed through someone else’s expression,” she says. “It’s actually a thousand times more interesting than me trying to hypercontrol these things all the time.”

Herndon is also using Holly+ to experiment with new revenue models. She set up a governance board, which now includes hundreds of people, to approve the release of “official” Holly+ songs while rejecting offensive or tasteless songs created in her voice. The approved songs are then sold as NFTs. So far, 71 Holly+ songs have been officially released on the NFT platform Zora, and most of them have sold. Herndon herself earns 10% of profits, half goes to the original creator, and the rest is split among the governing board.

Herndon doesn’t hope that every musician follows her lead in releasing their voice for public use. But she does hope everyone seriously considers their options at a watershed moment before copyright challenges snowball, deepfakes of dead artists start flooding the airwaves, and anyone possesses the power to create nearly any sound or song by simply prompting an AI.

“When anyone can be like, ‘I want synth pop but Aretha Franklin’ and get this perfect output, it will no longer be something that one values from a musician,” she says. “So maybe musicians have to find a new way to find a voice in that sea of media and noise.”

Herndon is also at the forefront of protecting visual artists in this new AI world. In September 2022, she and her organization Spawning, which also includes Dryhurst and the technologists Patrick Hoepner and Jordan Meyer, created the website haveibeentrained.com, which allows artists to see if their images were included in the vast training datasets that underpin AI art models. Artists can then signal their desire to opt out of such training sets, provided the AI companies themselves agree. As of August, artists have requested opt-outs for 1.4 billion images. More importantly, several companies have agreed to honor these requests for future AI models, most notably Stability AI.

Some critics say that building an opt-out AI standard is far worse than an opt-in one and that AI companies should ask for proactive consent for every single image they put in their training datasets. But “this only really makes sense in Twitter reality,” says Herndon. “From conversations we have had across industry and policy circles, that option is not on the table, anywhere.”

Herndon understands the risks but remains optimistic that if conscientious innovators can become central players, they can create an AI future with more protections and an ethical framework. “We are trying to build a consent layer for how data is used in AI training,” she says. “And so far, everyone we’ve talked to, including people building large models, are really interested and invested in having clean data.”
