Depending on whom you believe, the problem of fake news on Facebook is either one of the most important issues facing mankind, or an overblown controversy pumped up by the mainstream media. And in a way, that dichotomy itself points to the problem with defining—let alone actually getting rid of—“fake news.”
When someone uses that term, they could be referring to one of a number of different things: It might be a story about how Bill and Hillary Clinton murdered several top-level Washington insiders, or it might be one about how Donald Trump’s chief adviser is a neo-Nazi, or it might be one about how the most important election issue was Clinton’s emails.
The first of these is relatively easy to disprove just by using facts. The second is somewhat more difficult to rebut, since a lot of it is based on innuendo or implication. And the third is almost impossible to respond to because it is pure opinion.
As John Herrman argued in a recent New York Times article, part of the difficulty in solving the “fake news” problem stems from the fact that many people appear to have lost faith in the existing media. As a result, much of the fact-checking and analysis that newspapers and others did on Donald Trump wound up being largely irrelevant to those readers.
This is part of the reason why the problem can’t be solved by appointing an “executive editor” for Facebook, as Washington Post media writer (and former New York Times public editor) Margaret Sullivan recommended in a recent column.
A large part of the reason why people get their news from Facebook in the first place, or from alternative sources like Breitbart News or dozens of other sites, is that they don’t really trust mainstream outlets. Many seem to see them as gatekeepers who pretend to know what the “real” news is. So replacing one gatekeeper with another isn’t likely to work.
Is there anything out there that could provide a model for how Facebook and the rest of the media could approach this problem? In a way, Mark Zuckerberg came close to a potential solution in his recent blog post, in which he talked about working with third-party verification services.
If Facebook itself starts to label or hide “fake news,” based on some kind of algorithmic filtering or quality measure, there are going to be inevitable accusations that the social network is deciding what people should read or believe. And the same problem would occur if the company hired a journalist, or even a number of journalists, to make those decisions. It also wouldn’t scale.
Facebook could—and in fact, says it will—make it easier for users themselves to identify fake news, but then you are back to the problem of defining what constitutes “fake.”
The best approach to this conundrum, I think, is to take advantage of the principles that make the Internet so powerful as a form of networked media (powerful in both a positive and a negative sense, it must be admitted). And that means having not just one public editor or verification node, but thousands, or even tens of thousands, of them.
There is an existing entity that takes this approach, and it’s called Wikipedia. It has a number of flaws, and it is rightly criticized for them, but it is also the best model we have for managing a flow of user-contributed information.
The success of the crowdsourced encyclopedia, which turned 15 this year, once seemed so unlikely that it’s hard to believe it has become as dominant as it has. It was widely attacked for being wrong, and it still makes mistakes. But the amazing thing is just how often it is right, and how often it manages to overcome the kind of chaos that such efforts often produce.
The Wikipedia model has drawn plenty of criticism for being dominated by a small cabal of largely white, male editors, and that is a real weakness that needs work. But you could say the same about much of the existing media.
It’s worth noting that fake news isn’t solely Facebook’s fault. It gets the lion’s share of the blame because it is by far the largest network for distributing that kind of content. Not only that, but it has designed the service so that the emotional reward of sharing something—even if it is fake—outweighs any penalty for it not being true. It has weaponized human nature.
But the reality is that the fragmented, atomized, and networked media environment we exist in now is the biggest culprit, and that’s not something we can fix—and maybe not even something we should.
Take a look at the recent analysis by the New York Times of a single fake news story: A man in Texas posts a photo of a bus on Twitter and says he thinks it is carrying paid protesters to an anti-Trump rally. Although he has just 40 followers, his tweet is picked up and posted on Reddit, then on alt-right websites, and eventually it is everywhere.
This is much harder to stamp out than the deliberate, orchestrated creation of fake news stories in online hoax factories, like the ones described by BuzzFeed and the Washington Post. The “paid protester” story from the Texas businessman wasn’t deliberate, and yet it still swept through the news ecosystem.
Should people who retweeted the paid-protester story have fact-checked it? Probably. But that’s never going to happen. Should Reddit users, or even the site itself, have done so? Perhaps. But that isn’t likely either.
If Facebook could somehow either tap into or recreate the kind of networked fact-checking that Wikipedia does on a daily basis, using existing resources like Politifact and other verification sites, it might actually go some distance toward a solution.
This approach is probably never going to stop people from posting or re-posting news and opinion stories that reinforce their existing biases and make them feel as though they are part of a specific team. But a networked-media problem calls for a networked-media solution; it is about the only response that has a hope of working.
This article originally appeared on Fortune.com