On the evening of Feb. 6, as U.S. news networks reported the death of a doctor in Wuhan, China, who had warned of a deadly new virus, thousands of Americans were tuning in to a different kind of show.
“The good news is I heard actually that you can’t get this if you’re white,” Nick Fuentes, a far-right political commentator, told viewers on his “America First” channel on the streaming platform DLive. “You’re only really susceptible to this virus if you’re Asian,” Fuentes continued. “I think we’ll be O.K.”
Fuentes, 22, a prolific podcaster who on his shows has compared the Holocaust to a cookie-baking operation, argued that the segregation of Black Americans “was better for them,” and that the First Amendment was “not written for Muslims,” is doing better than O.K. during the COVID-19 pandemic. He’s part of a loose cohort of far-right provocateurs, white nationalists and right-wing extremists who have built large, engaged audiences on lesser-known platforms like DLive after being banned from mainstream sites for spreading hate speech and conspiracy theories.
The model can be lucrative. Viewers pay to watch the livestreams through subscriptions and donations, and the platform allows the content creators to keep most of the revenue. Fuentes appears to have earned more than $140,000 from his DLive streams, making his channel the most viewed account on the platform, according to calculations provided to TIME by a livestreaming analyst who was granted anonymity because of their work tracking these accounts. Fuentes is hardly alone. Eight of the 10 top earners on DLive this year, as ranked by Social Blade, a social-media analytics website, are far-right commentators, white-nationalist extremists or conspiracy theorists.
The social disruption and economic dislocation caused by the virus–as well as the nationwide protests and civil unrest that followed the death of George Floyd in late May–have helped fuel this growing, shadowy “alt tech” industry. As public spaces shut down in March, millions of Americans logged online; the livestreaming sector soared 45% from March to April, according to a study by the software sites StreamElements and Arsenal.gg. As people became more socially isolated, many increasingly turned to pundits peddling misinformation, conspiracy theories and hate speech. And even as mainstream platforms cracked down on far-right propagandists, online audiences grew. Over the past five months, more than 50 popular accounts reviewed by TIME on sites like DLive have multiplied their viewership and raked in tens of thousands of dollars in online currency by insisting COVID-19 is fake or exaggerated, encouraging followers to resist lockdown orders and broadcasting racist tropes during the nationwide protests over police brutality. Many of these users, including Fuentes, had been banned by major social-media platforms like YouTube for violating policies prohibiting hate speech. But this so-called deplatforming merely pushed them to migrate to less-regulated portals, where some of them have attracted bigger audiences and gamed algorithms to make even more money. In addition, clips of their broadcasts on less-trafficked sites still frequently make it onto YouTube, Twitter and other mainstream platforms, essentially serving as free advertising for their streams elsewhere, experts say.
As social-media giants like YouTube, Twitter and Facebook target hate speech and misinformation, sites like DLive seem to be turning a blind eye, former users and employees say, recognizing that much of their traffic and revenue comes from these accounts. “They care more about having good numbers than weeding these people out,” a former employee of DLive, who was granted anonymity because he still works in the livestreaming sector, tells TIME. (DLive did not respond to multiple requests for comment.)
Which means ordinary users on gaming and streaming platforms, many of them teenagers, are often one click away from white-nationalist content. Many of these far-right personalities allege they are being unfairly censored for conservative political commentary or provocative humor, not hate speech. Most of these viewers won’t respond to streamers’ often cartoonish calls to action, like the “film your hospital” movement in April, which urged followers to show that hospitals were empty, thus “proving” that COVID-19 was fake. But this murky ecosystem of casual viewers, right-wing trolls–and the occasional diehard acolyte–creates a real challenge for technology companies and law-enforcement agencies.
And it doesn’t take much to trigger a tragedy. Over the past two years, terrorists inspired by online right-wing propaganda have livestreamed their own deadly attacks in New Zealand and Germany. In March 2019, a Florida man who had been radicalized by far-right media and online conspiracy theorists pleaded guilty to sending more than a dozen pipe bombs to prominent critics of President Donald Trump. A month later, a gunman armed with an AR-15 shot four people, killing one, in a synagogue in Poway, Calif., after allegedly posting a racist and anti-Semitic screed on the site 8chan. About three months later, a man killed 23 people at a Walmart in El Paso, Texas, after posting a racist manifesto online, according to authorities.
With COVID-19 continuing to surge in parts of the country, ongoing protests over racial injustice and the upcoming 2020 U.S. presidential election, the next few months promise to offer fertile ground for bad actors in unmoderated virtual spaces. Far-right propagandists “are really capitalizing on this conspiratorial moment,” says Brian Friedberg, a senior researcher at the Harvard University Shorenstein Center’s Technology and Social Change Project. “Everyone’s locked inside while there is what they refer to as a ‘race war’ happening outside their windows that they are ‘reporting on,’ so this is prime content for white-nationalist spaces.”
The migration of far-right personalities to DLive illustrates how, despite mainstream platforms’ recent crackdowns, the incentives that govern this ecosystem are thriving. Anyone with an Internet connection can continue to leverage conspiracy theories, racism and misogyny for attention and money, experts say.
The outbreak of COVID-19 arrived during a period of reinvention for far-right propagandists in the aftermath of the white-nationalist “Unite the Right” rally in Charlottesville, Va., in 2017. Over the past three years, social-media giants, which had endured criticism for giving extremists safe harbor, have increasingly attempted to mitigate hate speech on their sites. Facebook, YouTube and Twitter, as well as payment processors like PayPal and GoFundMe, have all shut down accounts run by far-right agitators, neo-Nazis and white supremacists. In late June, YouTube removed the accounts of several well-known figures, including David Duke, a former leader of the Ku Klux Klan, and Richard Spencer, a prominent white nationalist. Reddit, Facebook and Amazon-owned streaming site Twitch also suspended dozens of users and forums for violating hate-speech guidelines.
But these purges hardly solved the problem. Many online extremists were on mainstream platforms like YouTube long enough to build a devoted audience willing to follow them to new corners of the Internet. Some had long prepared for a crackdown by setting up copycat accounts across different platforms, like Twitch, DLive or TikTok. “These people build their brand on YouTube, and when they get demonetized or feel under threat they’ll set up backup channels on DLive or BitChute,” says Megan Squire, a computer scientist at Elon University who tracks online extremism. “They know it’s going to happen and plan ahead.”
While the suspensions by social-media companies have been effective at limiting the reach of some well-known personalities like conspiracy theorist Alex Jones, who was banned from YouTube, Facebook and Apple in 2018, others have quickly adapted. “Content creators are incredibly adept at gaming the systems so that they can still find and cultivate audiences,” says Becca Lewis, a researcher at Stanford University who studies far-right subcultures online, describing these efforts as a “game of whack-a-mole.” Many white-nationalist accounts have tied their bans to the right-wing narrative that conservatives are being silenced by technology companies. For platforms like DLive, becoming what their users consider “free speech” and “uncensored” alternatives can be lucrative. “More speech also means more money for the platform, and less content moderation means less of an expense,” says Lewis.
The prospect of being pushed off mainstream social-media, video-streaming and payment platforms has also prompted extremists to become more sophisticated about the financial side of the business. While Twitch takes a 50% cut from livestreamers’ earnings and YouTube takes 45%, platforms like DLive allow content creators to keep 90% of what they make. And as many found themselves cut off from mainstream payment services like PayPal, GoFundMe and Patreon, they began to embrace digital currencies.
DLive was founded in December 2017 by Chinese-born and U.S.-educated entrepreneurs Charles Wayn and Cole Chen, who made no secret of their ambition to build a platform that rivaled Twitch. They described the site as a general-interest streaming platform, focused on everything from “e-sports to lifestyle, crypto and news.” But two things set it apart from its competitors: it did not take a cut of the revenue generated by its streamers, and it issued an implicit promise of a less moderated, more permissive space.
DLive’s first big coup came in April 2019, when it announced an exclusive streaming deal with Felix Kjellberg, known as PewDiePie. In just two months, DLive’s total number of users grew by 67%. At the time, Kjellberg was the most popular individual creator on YouTube, with more than 93 million subscribers and his own controversial history. In 2017, he came under fire for making anti-Semitic jokes and racist remarks, and more than 94,000 people signed a Change.org petition to ban his channel from YouTube for being a “platform for white-supremacist content.” The petition noted that “the New Zealand mosque shooter mentioned PewDiePie by name and asked people to subscribe.”
DLive’s community guidelines theoretically prohibit “hate speech that directly attacks a person or group on the basis of race, ethnicity, national origin, religion, disability, disease, age, sexual orientation, gender or gender identity.” But it soon became apparent to both employees and users that executives were willing to ignore venomous content. By early 2019, “political” shows were gaining traction on the site. Those programs devolved into “streams dedicated to white pride and a lot of anti-Semitism, entire streams talking about how Jewish people are evil,” says the former DLive employee who spoke to TIME, adding that moderators acted much more quickly when it came to copyright concerns. “Your stream would be taken down faster for streaming sports than saying you hate Jews.”
The employee recalls raising the matter with Wayn, noting how off-putting it was for new users coming to watch or broadcast streams of popular video games. According to the employee, Wayn explained that the company “didn’t want to get rid of these problematic streamers because they brought in numbers.” The founders knew they had to keep viewers because, as Wayn noted in a 2019 interview, if they wanted to “compete with Twitch on the same level and even take them down one day, DLive needs to match its scale.” Wayn did not respond to multiple requests for comment.
By June 2020, DLive seemed to be openly cultivating a right-of-center audience. On Twitter, it briefly changed its bio to read “All Lives Matter,” a right-wing rallying cry in response to Black Lives Matter. The site has increasingly become a haven for fanaticism, says Joan Donovan, the research director of Harvard’s Shorenstein Center. “Before, on YouTube, some of these people would do a dance with the terms of service,” she tells TIME. “But on DLive, the gloves are off, and it’s just full white-supremacist content with very few caveats.”
On the night of June 29, Fuentes had 56% of the site’s total viewership at 10 p.m., according to the review of the site’s analytics provided to TIME. An additional 39% were watching 22 other extremist personalities streaming their commentary. At one point on the night of Aug. 10, just 176 of the more than 15,000 viewers on the top 20 channels on the site were not watching accounts linked to far-right figures. Popular programming in recent months has included alarmist footage of racial-justice protests, antivaccine propaganda, conspiracies linking 5G networks to the spread of COVID-19 and calls to “make more white babies while quarantined.”
The company may be even more reliant on those accounts now. Some users have left the site, complaining publicly about the virulent racism and anti-Semitism spilling over into regular channels and game streams. “DLive is a safe-haven for racists and alt-right streamers,” one user wrote on Twitter on June 22. “Seems to me DLive is the new platform for white supremacists,” wrote another, echoing complaints that it’s a “literal Nazi breeding ground” and “the place where racists don’t get deplatformed.”
The migration of hate speech to far-flung corners of the Internet could make it harder to track, increasing the risk that it spills into the offline world. Experts say law-enforcement and national-security agencies are still unprepared to tackle right-wing extremism. They lack expertise not only in the rapidly evolving technology but also in the ideological ecosystem that has spawned a battery of far-right movements. The recently repackaged white-nationalist youth movement, with new names like “America First” or the “Groypers,” looks more like “gussied-up campus conservatives,” as Friedberg of Harvard’s Shorenstein Center puts it, “so they are not triggering the same warning bells.”
Recent incidents show how this online environment that blends political commentary and hate speech can be dangerous. An 18-year-old accused of firebombing a Delaware Planned Parenthood clinic in January was identified through his Instagram profile, which contained far-right memes reflecting popular beliefs in the young white-nationalist movement, according to BuzzFeed News. In June, Facebook deactivated nearly 200 social-media accounts with ties to white-nationalist groups rallying members to attend Black Lives Matter protests, in some cases armed with weapons.
Analysts who track extremist recruitment online also warn that the pandemic may have long-term effects on young people who are now spending far more time on the Internet. Without the structure of school and social activities, many children and teenagers are spending hours a day in spaces where extremist content lurks alongside games and other benign entertainment, says Dana Coester, an associate professor at West Virginia University who researches the impact of online white extremism on youth in Appalachia. It’s common, she notes, to see teenagers sharing Black Lives Matter messages alongside racist cartoons from popular Instagram accounts targeting middle schoolers. “So many parents I’ve spoken with say their kids are on devices until 3 in the morning,” she says. “I can’t begin to imagine how much damage can be done with kids that many hours a day marinating in really toxic content.”
Analysts warn that both U.S. law enforcement and big technology companies need to move quickly to hire experts who understand this new extremist ecosystem. Experts say the mainstream platforms’ recent purges are reactive: they patch yesterday’s problems instead of preventing future abuses, and focus on high-profile provocateurs instead of the underlying networks.
One solution may be to follow the money, as content creators migrate to new platforms in search of new financial opportunities. “[White supremacists] have become particularly assiduous at exploiting new methods of fundraising, often seeking out platforms that have not yet realized how extremists can exploit them,” said George Selim, senior vice president of programs at the Anti-Defamation League, in testimony before a House subcommittee in January. “When a new fundraising method or platform emerges, white supremacists can find a window of opportunity. These windows can, however, be shut if platforms promptly take countermeasures.”
On the evening of Aug. 11, Joe Biden’s pick of Senator Kamala Harris as his running mate dominated the news. “She hates white people,” Fuentes told viewers on DLive. “She is going to use the full weight of the federal government … to destroy conservatives, to destroy America First, anybody that speaks up for white people.” NBC and ABC News–which have a combined 13 million subscribers on YouTube–had an average of 6,100 concurrent viewers watching their coverage. Fuentes’ show had 9,000.
–With reporting by ALEJANDRO DE LA GARZA/NEW YORK
Write to Vera Bergengruen at vera.bergengruen@time.com