'There's a Wide-Open Horizon of Possibility.' Musicians Are Using AI to Create Otherwise Impossible New Songs

Yona, the AI creation of the musician Ash Koosha, performs at the 2019 MUTEK festival in Montreal. Credit: Auxuman

In November, the musician Grimes made a bold prediction. “I feel like we’re in the end of art, human art,” she said on Sean Carroll’s Mindscape podcast. “Once there’s actually AGI (Artificial General Intelligence), they’re gonna be so much better at making art than us.”

Her comments sparked a meltdown on social media. The musician Zola Jesus called Grimes the “voice of silicon fascist privilege.” Majical Cloudz frontman Devon Welsh accused her of taking “the bird’s-eye view of billionaires.” Artificial intelligence has already upended many blue-collar jobs across various industries; the possibility that music, a deeply personal and subjective form, could also be optimized was enough to cause widespread alarm.

But many musicians believe the onset of AI won’t end human art but will instead spur a new golden era of creativity. In recent years, prominent artists like Arca, Holly Herndon and Toro y Moi have worked with AI to push their music in new and unexpected directions. Meanwhile, musicians and researchers around the world are developing tools to make AI more accessible to artists everywhere. While hurdles like copyright complications have yet to be worked out, musicians working with AI hope the technology will become a democratizing force and an essential part of everyday musical creation.

“It’s provided me a sense of relief and excitement that not everything has been done — that there’s a wide-open horizon of possibility,” Arca, a producer who’s worked with Kanye West and Björk on groundbreaking albums, told TIME in a phone interview.

Artificial intelligence and music have long been intertwined. Alan Turing, the godfather of computer science, built a machine in 1951 that generated three simple melodies. In the ’90s, David Bowie started playing around with a digital lyric randomizer for inspiration. Around the same time, a music-theory professor trained a computer program to write new compositions in the style of Bach; when an audience heard its work next to a genuine Bach piece, they couldn’t tell the two apart.
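
That Bach experiment hints at the basic recipe behind much of today's work: learn the statistical patterns of a style from a corpus, then sample new material from those patterns. As a toy illustration only (the professor's actual program was far more sophisticated), a first-order Markov chain over notes captures the idea in a few lines of Python:

```python
import random

# Toy corpus standing in for Bach-style melodies (hypothetical data).
corpus = [
    ["C4", "E4", "G4", "E4", "F4", "D4", "C4"],
    ["G4", "F4", "E4", "D4", "E4", "C4", "D4", "C4"],
]

# Count which note tends to follow which: a first-order Markov chain.
transitions = {}
for melody in corpus:
    for prev, nxt in zip(melody, melody[1:]):
        transitions.setdefault(prev, []).append(nxt)

def sample_melody(start="C4", length=8):
    """Sample a new melody that mimics the corpus's local patterns."""
    melody = [start]
    for _ in range(length - 1):
        followers = transitions.get(melody[-1]) or [start]
        melody.append(random.choice(followers))
    return melody

print(sample_melody())
```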

Progress in the AI music field has accelerated rapidly in the past few years, thanks in part to devoted research teams at universities, investments from major tech companies and machine-learning conferences like NeurIPS. In 2018, Francois Pachet, a longtime AI music pioneer, spearheaded the first pop album composed with artificial intelligence, Hello, World. Last year, the experimental singer-songwriter Holly Herndon received acclaim for Proto, an album on which she harmonized with an AI version of herself:

https://www.youtube.com/watch?v=r4sROgbaeOs

But while the technology has come a long way, many say that we’re still far from an AI creating hit songs on its own. “AI music is simply not good enough to create a song that you will listen to and be like, ‘I would rather listen to this than Drake,’” says Oleg Stavitsky, the CEO and co-founder of Endel, an app that generates sound environments. Case in point: “Daddy’s Car,” a 2016 AI-penned song meant to mimic the Beatles, is a frustrating jumble of psychedelic-rock tropes that fails to come together in a meaningful way.

Perhaps in part due to these limitations, few straight-ahead pop songs are being created by AI. Instead, much more intriguing progress is being made in two seemingly diametrically opposed streams of music: the functional and the experimental.

Meeting Demands

On one side of the spectrum, AI music has become an answer to a simple demand: more music is needed than ever, thanks to a ballooning number of content creators on streaming and social media platforms. In the early 2010s, the composers Drew Silverstein, Sam Estes, and Michael Hobe were working on music for Hollywood films like The Dark Knight when they found themselves deluged with requests for simple background music for film, TV or video games. “There would be so many of our colleagues who wanted music that they couldn’t afford or didn’t have time for — and they didn’t want to use stock music,” says Silverstein.

So the trio created Amper, which allows non-musicians to create music by indicating parameters like genre, mood and tempo. Amper’s music is now used in podcasts, commercials, and videos for companies like Reuters. “Previously, a video editor would search stock music and settle for something sufficient,” Silverstein says. “Now, with Amper, they can say, ‘I know what I want, and in a matter of minutes, I can make it.'” And when the company ran a recent Turing-like test, they found that, just like with the AI-generated Bach composition, consumers couldn’t tell the difference between music composed by humans and that composed by Amper’s AI.
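
Amper's engine is proprietary, so the sketch below is hypothetical, but it captures the interaction model Silverstein describes: the user supplies a few high-level parameters, and the system makes every note-level decision. The scale mappings and function names here are invented for illustration:

```python
import random

# Hypothetical parameter-driven composer in the spirit of Amper's
# genre/mood/tempo interface; not Amper's actual API or algorithm.
SCALES = {
    "upbeat": [60, 62, 64, 65, 67, 69, 71],  # C major, as MIDI note numbers
    "somber": [60, 62, 63, 65, 67, 68, 70],  # C natural minor
}

def compose(mood="upbeat", tempo_bpm=120, bars=4, beats_per_bar=4):
    """Return a list of (midi_note, start_seconds, duration_seconds)."""
    scale = SCALES[mood]
    beat = 60.0 / tempo_bpm
    return [
        (random.choice(scale), i * beat, beat)
        for i in range(bars * beats_per_bar)
    ]

for note, start, dur in compose(mood="somber", tempo_bpm=90):
    print(f"note {note} starts at {start:.2f}s, lasts {dur:.2f}s")
```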

Endel was likewise created to fill a modern need: personalized soundscapes. Stavitsky realized that while people are increasingly plugging into headphones to get them through the day, “there’s no playlist or song that can adapt to the context of whatever’s happening around you,” he says. His app takes several real-time factors into account — including the weather, the listener’s heart rate, physical activity rate, and circadian rhythms — in generating gentle music that’s designed to help people sleep, study or relax. Stavitsky says users have successfully used Endel to combat ADHD, insomnia and tinnitus; a company representative said the app reached one million downloads at the end of January.
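
Endel hasn't published how its engine weighs these signals, but the general shape of the idea, mapping real-time context onto a handful of generation parameters, can be sketched. The thresholds and mappings below are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Context:
    heart_rate_bpm: float  # from a wearable
    hour_of_day: int       # 0-23, a crude proxy for circadian phase
    is_raining: bool       # from a weather API

def soundscape_params(ctx: Context) -> dict:
    """Map real-time context to generation parameters (hypothetical)."""
    # Ease the tempo down as the listener's heart rate climbs.
    tempo = max(50.0, 90.0 - 0.5 * max(0.0, ctx.heart_rate_bpm - 60.0))
    # Darker timbres late at night, brighter ones during the day.
    brightness = 0.2 if ctx.hour_of_day >= 22 or ctx.hour_of_day < 6 else 0.7
    return {
        "tempo_bpm": tempo,
        "brightness": brightness,
        "rain_layer": ctx.is_raining,  # blend in weather-matched texture
    }

print(soundscape_params(Context(heart_rate_bpm=88, hour_of_day=23, is_raining=True)))
```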

Both Amper and Endel turn non-musicians into sonic curators, allowing them to become involved in a process they might have been shut out of due to lack of training or background. This year, Silverstein says, Amper will launch a consumer-friendly interface so that anyone, not just companies, can use it to create songs. “Billions of billions of individuals who might not have been part of the creative class now can be,” he says.

Pushing Music Forward

Of course, creating simple ditties or glorified white noise is far different from creating great music. This is one of the main concerns that many have about AI in music: that it could flatten music into functional and generic sounds until every song sounds more or less the same. What if major labels use AI and algorithms to cram simplistic earworms down our aural cavities from now until the end of time?

But musician Claire Evans of the Los Angeles-based electropop band YACHT says that sort of craven optimization already sits at the heart of the music industry: “That algorithm exists and it’s called Dr. Luke,” she says, referring to the once omnipresent producer who creates pop hits through specific formulas. It’s the job of forward-thinking musicians, then, to wield the technology for the exact opposite purpose: to push against standardization and explore uncharted territory they could not have conjured on their own.

For their most recent album, Chain Tripping, YACHT trained a machine-learning system on their entire catalog of music. After the machine spit out hours of melodies and lyrics based on what it had learned, the band combed through the output and spliced the most intriguing bits into coherent songs. The result was a jumpy and meandering interpretation of dance pop that was strange to listen to and even stranger to play.
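
The toy version below substitutes a simple word-level bigram model for whatever system the band actually used, but the workflow it shows is the one described here: train on the back catalog, oversample, then curate by hand. The catalog lines are invented placeholders:

```python
import random

# Invented placeholder lyrics standing in for a band's back catalog.
catalog = [
    "i want to dance with you tonight",
    "the lights go down and we come alive",
    "we come alive when the music plays",
]

# Train a word-level bigram model on the catalog.
model = {}
for line in catalog:
    words = line.split()
    for prev, nxt in zip(words, words[1:]):
        model.setdefault(prev, []).append(nxt)

def generate(seed, max_words=10):
    """Walk the bigram chain until it dead-ends or hits the length cap."""
    out = [seed]
    while len(out) < max_words and out[-1] in model:
        out.append(random.choice(model[out[-1]]))
    return " ".join(out)

# Oversample, then curate: a human keeps only the intriguing candidates.
candidates = {generate(seed) for seed in ("i", "we", "the") for _ in range(20)}
for line in sorted(candidates):
    print(line)
```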

“I think often musicians underestimate how much the way we play is based on our physical experiences and habits,” Evans says. She says it took the band many excruciating hours to learn the new music, because many riffs or chord changes would deviate just slightly from the ones they had relied on for decades. “AI forced us to come up against patterns that have no relationship to comfort. It gave us the skills to break out of our own habits,” she says. The project resulted in the first Grammy nomination of YACHT’s two-decade career, for best immersive audio album.

https://www.youtube.com/watch?v=8IVW6HCwThQ&feature=youtu.be

For the British-Iranian musician Ash Koosha, working with AI ironically led to an emotional breakthrough. He developed an AI pop star, named Yona, who writes songs through generative software. And while many of her lyrics are vague and nonsensical, some are shockingly vulnerable. Koosha was especially astounded by the line “The one who loves you sings lonely songs with sad notes.” “Being so blunt and so open — so emotionally naked — is not something most humans can do,” Koosha told TIME. “I wouldn’t be able to be that honest unless something triggers me.”

In Berlin, the hacker duo Dadabots is hard at work creating musical chaos and disorientation with AI’s help. Berlin has become one of the global centers of AI experimentation (Endel is also based there), and Dadabots is currently in the midst of a residency, workshopping new tools with avant-garde songwriters and running AI-generated death-metal livestreams. Co-founder CJ Carr says that AI acts both as a trainer for musicians — “like how a chess AI can help you improve your game,” he says — and as a formidable, radical creator in its own right. Dadabots’ neural networks have spit out ominous whispers, guttural howls and furiously choppy drum patterns. For Carr, the weirder, the better. “When music is messed up, that’s better for music,” he says. “I want to see expressions and emotions and sounds that have never existed before.”
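
Dadabots have described their method in research papers: SampleRNN-style networks that learn to predict raw audio one sample at a time from a band's records. The loop below shows only that autoregressive pattern, with an untrained stand-in function where the trained neural network would sit:

```python
import random

def predict_next(context):
    """Untrained stand-in for the neural net: drift around the last sample."""
    return max(0, min(255, context[-1] + random.randint(-8, 8)))

def generate_audio(seconds=1, sample_rate=16000):
    """Generate 8-bit audio one sample at a time, autoregressively."""
    samples = [128]  # start at the 8-bit midpoint (silence)
    for _ in range(seconds * sample_rate - 1):
        samples.append(predict_next(samples))
    return samples

audio = generate_audio()
print(len(audio), "samples; first ten:", audio[:10])
```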

For other creators, AI is not just a path forward but also a link to a forgotten past, before recorded music. Last summer, a new version of the 2012 cult classic “Jasmine” by Jai Paul appeared online. While the first few bars sound the same as the original, the track gradually begins to mutate, with slippery guitar licks and syncopated hand claps drifting in and out. The song continues for as long as you are listening — a taut band seemingly locked into an infinite, infectious jam session.

But the track is AI-generated, a project from London-based company Bronze. Its creators, musicians Lexx and Gwilym Gold and scientist Mick Grierson, hoped to create a piece of technology that would dislodge music from the static and fossilized nature of recordings. “We wanted a system for people to listen to music in the same state it existed in our hands — as a constant, evolving form,” Gold told TIME.
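
Bronze hasn't revealed how its engine works under the hood, but the core idea, a piece that never collapses into a fixed recording because its arrangement keeps mutating, can be sketched with a generator that runs for as long as anyone listens. The layer names and probabilities below are invented:

```python
import random

# Invented layer names; Bronze's actual engine is unpublished.
LAYERS = ["drums", "bass", "guitar licks", "hand claps", "vocals"]

def play_forever():
    """Yield one bar at a time; the arrangement drifts as bars pass."""
    active = {"drums", "bass", "vocals"}
    bar = 0
    while True:  # the song lasts as long as someone is listening
        if random.random() < 0.3:  # occasionally toggle a layer in or out
            active ^= {random.choice(LAYERS)}
        yield bar, sorted(active)
        bar += 1

player = play_forever()
for _ in range(8):  # preview the first eight bars
    bar, layers = next(player)
    print(f"bar {bar}: {', '.join(layers)}")
```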

Jai Paul shares a label, XL, with Arca, one of pop music’s foremost transgressors (she worked on Kanye West’s Yeezus, Björk’s Vulnicura and FKA Twigs’ LP1). When the Venezuelan-born Arca learned about Bronze’s work, she was intrigued by how it could connect live and recorded music in an unprecedented way. “When you publish an album, that’s the way people will hear it forever more. When you play a song live, it’s unpredictable and ephemeral,” she says. “Working with tech like Bronze allows for this third thing. Lexx and I got really excited talking about what it means for the way people can listen to music, and the course of the industry.”

Arca and the Bronze team soon began collaborating on an installation by the French artist Philippe Parreno that currently resides in the newly reopened lobby of New York’s Museum of Modern Art. The music creaks and burbles out of swiveling speakers that seem to move along with you. The output changes with the temperature and crowd density, meaning no two days in the space will be the same.

Arca says that listening to the music that she ostensibly composed is a strange and gripping experience. “There’s something freeing about not having to make every single microdecision, but rather, creating an ecosystem where things tend to happen, but never in the order you were imagining them,” she says. “It opens up a world of possibilities.” She says that she has a few new music projects coming this year using Bronze’s technology.

“Still in the Very Beginning”

While creators like Arca use AI to push creativity forward, many worry that the technology will displace musicians from their jobs. But Koosha says this type of fear has accompanied every technological development of the last century. “It reminds me of the fear people had in the ’70s — when guitar players started a movement to break any synthesizer they found,” he says. While some guitarists or drummers might have been displaced, a whole generation of home producers arose thanks to the lower barrier to entry, and hip-hop, house music and entirely new vocabularies and sonic aesthetics came to the fore.

And Francois Pachet, the director of Spotify’s Creator Technology Research Lab, says that we’re still very much in the early days of experimentation with AI music. “The amount of music produced by AI is very little compared to the activity on the research side,” he says. “We are still in the very beginning.”

When more AI creations are released, legal battles will inevitably follow. Existing copyright laws weren’t written with AI in mind and are vague about whether the rights to an AI-generated song belong to the programmer who created the AI system, the original musician whose works provided the training data, or perhaps even the AI itself. Some worry that a musician would have no legal recourse against a company that trained an AI program to create soundalikes of them without their permission.

But until those issues arise, musicians around the world are working as hard as they can to get their tools into the hands of as many curious music-makers as possible. “I want to see 14-year-old bedroom producers,” Carr says, “inventing music that I can’t even imagine.”
