Why the Influence Campaign Facebook Just Uncovered Is a ‘Canary in the Coal Mine’


On Tuesday, Facebook acknowledged what many already feared: there appears to be evidence of a political influence campaign on the platform ahead of the 2018 midterm elections. The company did not specify who is behind the “coordinated inauthentic behavior” that led Facebook to remove 32 pages and accounts.

But company officials did mention the Russia-based Internet Research Agency in a post about the takedown, saying that the culprits are using more sophisticated tactics than the IRA has in the past.

The Atlantic Council’s Digital Forensic Research Lab, an organization that has partnered with Facebook to analyze abuse on the platform, was given a selection of the pages 24 hours before the takedown. Its disinformation experts were more pointed in their initial analysis, saying an “initial scan shows behavioral patterns and use of language reminiscent of the troll operations run from Russia in 2014 through 2017.” Chief among the tactics they identified: sowing discord among the American electorate.

After unequivocally blaming Russia for meddling in the 2016 presidential election, Homeland Security Secretary Kirstjen Nielsen said on Tuesday that American democracy is “in the crosshairs” of foreign influence campaigns. Though this particular campaign appeared to be limited in scope, reaching hundreds of thousands of users rather than millions, DFRLab director Graham Brookie calls it a “canary in the coal mine.”

It suggests that disinformation agents, who covered their tracks in this case by using tools like VPNs, are adapting to the steps companies like Facebook have taken to root them out. One page Facebook removed on Tuesday proved capable of drawing people into the streets, a dangerous tactic Russia-linked pages have used before. And the fact that 32 pages and accounts can be so problematic is a reminder of just how hard it is for social media companies to identify, much less eradicate, bad behavior across their sprawling platforms.

Under withering criticism for allowing disinformation to spread, and contending with other headwinds, Facebook recently posted the biggest one-day loss of market value by any company in U.S. history, shedding more than $120 billion. While the company has been funneling resources toward addressing problems related to content — hiring security experts and thousands of reviewers, as well as partnering with outside organizations like the DFRLab — CEO Mark Zuckerberg and other officials have repeatedly described this as an “arms race” against constantly evolving adversaries.

The sheer volume of activity on Facebook is part of the problem. Small networks of well-coordinated users can get lost amidst all the problematic content the company is trying to review each day. Among billions of new posts are conspiracy theories and hoaxes, clickbait and hate speech, some of which bad actors spread on purpose and some of which average users spread unwittingly. And it’s all happening in various languages and countries that each have their own cultural context.

It’s also about money. In the post released Tuesday announcing that eight pages, 17 Facebook profiles and seven Instagram accounts had been identified as part of the campaign, the company described its adversaries as “well-funded.” But it is much cheaper to spread “fake news” than it is for the company to stop it.

Facebook has invested heavily in technology to help sift through the mess, knowing that machines have to be part of the solution. Algorithms are fairly good at tasks like spotting fake accounts, which platforms are taking down in droves. Twitter now “challenges” about 8 million accounts per week, meaning they fail basic tests of authenticity, such as providing a phone number or checking a box to affirm the user is “not a robot.” And earlier this year, Facebook announced that it had disabled 583 million fake accounts in the first quarter of 2018 alone.

But machines are not very good at making sophisticated judgment calls, something this latest campaign required. “One thing that makes disinformation such a hard challenge to tackle is that it’s designed to insert itself into genuine engagement,” Brookie says. Bad actors often cook up their campaigns with a mix of truth, sensation and false context that requires human analysis to spot.

One page Facebook shut down on Tuesday was titled “Resisters.” The page had created an event for an August rally, supposedly aimed at protesting a white supremacist gathering in Washington, D.C., on the same day, and its administrators had enlisted legitimate pages to help popularize it. Though the event has since been disabled, by Tuesday 2,600 users had expressed interest in attending and 600 had said they planned to go. As the DFRLab noted, the event was designed to tap into anger from liberals who view themselves as part of the “resistance” to Trump and his policies; on its face, there’s nothing to clearly distinguish it from countless other events channeling the same momentum.

“Promoting such events is not, in itself, indicative of malicious activity,” the DFRLab wrote. “The identification of this account as inauthentic rests primarily on Facebook’s own assessment, based on the full range of data, which only Facebook had access to.”

Facebook has said it will have hired more than 20,000 people to work in security and content review by the end of the year. Yet the platform now has more than 2 billion monthly active users. In a June interview with TIME, Ben Nimmo, an expert in Russian disinformation who works for the DFRLab, noted that this works out to some 100,000 users per employee. “You can’t humanly moderate that,” he said.

The social media company, along with the likes of Twitter and Google, has acknowledged that it can’t solve the crisis on its own. In the post released Tuesday, the company said it was working with U.S. law enforcement agencies as well as other tech companies to “better understand the threats we face.”

Government officials also vowed on Tuesday to take serious action to protect America’s elections. “Any attempt to interfere in our elections is an affront to our democracy and it will not be allowed,” Vice President Mike Pence said at a cybersecurity summit in New York. “The United States will not tolerate any foreign interference in our elections from any nation-state – not from Russia, China, Iran, North Korea, or anyone else.”

“We can expose malign and fraudulent voices when they seek to undermine confidence in our democracy — and this we will do,” he added.

But as the information ecosystem becomes more unwieldy, individual users will also have to be part of the solution, alongside tech platforms, the government and the media. Users must be skeptical of the content they encounter on social media, Brookie says, without veering into cynicism about every rally they see advertised. “There’s a ton of organic, very real political activism in the United States, and if a single case of disinformation is able to poison that well,” he says, “then even a very small operation can have an outsized impact.”

With reporting by W.J. Hennigan/Washington

