We are in the midst of a “tech-lash.” For months, the leading Internet companies have faced a wave of criticism sparked by revelations that they unwittingly enabled the spread of Russian disinformation that distorted the 2016 election. They are now beginning to listen. Facebook recently responded: chief executive Mark Zuckerberg announced that the company is revamping its flagship News Feed service so that the algorithm powering it will prioritize content shared by your friends and family over news stories and viral videos. The company followed up by announcing that it will survey users about which news outlets they trust and potentially demote those deemed untrustworthy.
The overhaul marks the first major action by any Silicon Valley giant that may curb the plague of political disinformation. It almost certainly will not be enough.
While Facebook’s intentions are laudable, its reach may exceed its grasp. The purveyors of disinformation may indeed need to change how they spread deceitful content over social media, but that is nothing new. News outlets, commercial advertisers and the like have long monitored subtle tweaks to news feed and search engine algorithms in order to maximize their page views. Disinformation propagators will respond the same way. They have months to master these changes so that they can channel targeted propaganda and misinformation at individual voters during the height of the political season later this year.
The latent power of disinformation operations conducted over the leading Internet platforms lies in the implicit alignment of interests between platform (that is, companies like Facebook and Google) and advertiser (the source of much of those companies’ revenue). The platform collects data about its users, organizes them into like-minded audiences with shared preferences and sells those groups’ aggregated attention to advertisers. If users engage with the commercial message, both the advertiser and the platform benefit, and that remains true when the advertiser is a propagandist.
This market paradigm encourages a subtle and unwitting alignment: these sites sustain themselves by finding like-minded groups and selling information about their behavior, while disinformation propagators sustain themselves by manipulating the behavior of like-minded groups. Until this system is restructured, it is unlikely that political disinformation operations can be stopped or even slowed. That rebuilding would be enormously difficult, since digital advertising is central to Internet commerce. But it is essential.
Furthermore, tomorrow’s disinformation campaigns will not be limited to Facebook, Google and Twitter. There is a large commercial web ecosystem dedicated to using behavioral data to deploy persuasive messages. As we discuss in a recently published analysis, disinformation campaigns will use the tools of successful digital advertisers across every available distribution channel. A serious effort to undercut these operations must therefore address the entire market.
The market begins with data analytics. Disinformation campaigns rely heavily on behavioral data tracking: the widespread practice of logging your personal web browsing habits, location data, purchasing patterns and more. (For instance, in the time since you loaded this article on your desktop browser or mobile phone, your information has likely already been shared with dozens of online firms.) Wherever possible, this data is associated with personal identifiers, say an email address and phone number, that then connect you to other bits of information collected elsewhere online. Insights and inferences drawn from your behavioral data are then sold or shared with all kinds of advertisers by large Internet platform companies, digital advertising firms, data brokers and online services, among many others. This includes disinformation operators, who often appear as legitimate entities to the firms in this ecosystem. Because of their shifting online identities and vast numbers, their activity is very difficult to detect, despite the advanced algorithmic technologies meant to find them.
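The mechanics of that linkage are simple enough to sketch. Here is a minimal, hypothetical illustration in Python (the field names and records are invented for the example, not drawn from any real tracker) of how two unrelated data sets can be merged once each is keyed on the same hashed email address:

```python
import hashlib

def hash_id(email: str) -> str:
    # Trackers commonly normalize and hash identifiers before sharing them.
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

# Hypothetical records held by two unrelated services.
browsing_log = {hash_id("jane@example.com"): ["news-site-a", "political-forum-b"]}
purchase_log = {hash_id(" Jane@Example.com "): ["outdoor-gear", "books"]}

# Because both services keyed their records on the same hashed email,
# a broker can merge them into one behavioral profile without ever
# handling the address itself.
profiles = {
    uid: {
        "browsing": browsing_log.get(uid, []),
        "purchases": purchase_log.get(uid, []),
    }
    for uid in browsing_log.keys() | purchase_log.keys()
}
print(profiles)
```

The same join works across any number of data sets, which is why a single shared identifier is so valuable to this market.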
This data helps create the communities of like-minded people that then grow over time through the messaging and distribution of thousands upon thousands of targeted social media posts, advertisements, promotions and click-throughs. Timely search engine optimization tactics can push a fake news story to the top of Google results for an hour, a day or an entire news cycle, misinforming a great many Internet users in the process. And potent social media management software combines all of these services into an integrated system that coordinates data collection, audience formation and message-testing across multiple channels in real time, enabling operators to determine, with tremendous speed and efficiency, which specific messages to target you with.
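To make that message-testing loop concrete, here is a toy sketch in Python; the ad variants, click probabilities and budget logic are all hypothetical, but the pattern of testing small, measuring engagement and scaling the winner is the one described above:

```python
import random

# Hypothetical message variants and their simulated per-impression
# click probabilities (in practice these are unknown and estimated).
variants = {"outrage-framing": 0.12, "fear-framing": 0.08, "neutral-framing": 0.03}

def test_buy(true_ctr: float, impressions: int = 1000) -> float:
    # Simulate a small test ad purchase; return the observed click-through rate.
    clicks = sum(random.random() < true_ctr for _ in range(impressions))
    return clicks / impressions

# Step 1: run each variant against a small audience slice.
observed = {name: test_buy(ctr) for name, ctr in variants.items()}

# Step 2: concentrate the remaining budget on whatever engaged most.
winner = max(observed, key=observed.get)
print(f"Scaling up '{winner}' at {observed[winner]:.1%} observed CTR")
```

Nothing in the loop cares whether the winning message is true; it optimizes engagement alone, which is precisely what makes the same tooling useful to advertisers and propagandists alike.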
Underlying all of these tools is a technology with the potential to super-charge them: artificial intelligence. As AI is increasingly woven into the consumer-facing web, more and more content will be curated and presented by machines. Should disinformation propagators harness AI, they could adulterate our political culture and discourse with superhuman power. Early versions of web-based AI technology already help create digital filter bubbles, push false content to the top of search results and power viral online hoaxes and noxious hate speech. To counter these trends, the companies tune their algorithms to spot and limit harmful content. Services like YouTube, for example, continue to hire more and more human reviewers to help identify, label and curate policy-violating content, including extremist videos. But the scale of content is simply too large for comprehensive human review, and the vast majority of disinformation would not be taken down anyway, because it is perfectly legal despite poisoning our politics. We will not delete our way out of this problem.
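A deliberately simplified sketch in Python (with invented posts and weights, not any platform’s actual model) shows why engagement-optimized curation tends to favor sensational material: when the ranking function rewards clicks and shares but contains no term for accuracy, whatever provokes the most reaction wins.

```python
# Invented example posts; the engagement numbers are illustrative only.
posts = [
    {"title": "City council releases budget report", "clicks": 40, "shares": 2},
    {"title": "SHOCKING 'secret memo' goes viral", "clicks": 900, "shares": 400},
]

def engagement_score(post: dict) -> float:
    # A crude engagement objective: reward clicks and (more heavily) shares.
    # Note that nothing in the objective measures truthfulness.
    return post["clicks"] + 5.0 * post["shares"]

# The sensational item outranks the sober report by construction of the metric.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):7.1f}  {post['title']}")
```

Real curation systems are vastly more sophisticated, but the underlying objective is similar, and that is the opening disinformation exploits.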
Facebook’s intervention to fix the News Feed is important, regardless of whether it works, because it signals a recognition that the relationship between media and democracy is in crisis. We must build upon it. Democracies function poorly when citizens are ill-informed and cannot participate ably in self-government. We rely at our peril on a news marketplace designed to serve the advertiser rather than the citizen. Nothing less than the nation’s political integrity is at stake, and America’s corporate and public sectors must come together to rebuild the Internet as we know it in order to end the scourge of disinformation.