Social Media Platforms Claim Moderation Will Reduce Harassment, Disinformation and Conspiracies. It Won’t

McNamee began his Silicon Valley investing career in 1982. He was an early investor in Facebook and an advisor to Mark Zuckerberg. He is the author of the New York Times bestseller Zucked: Waking Up to the Facebook Catastrophe.

If the United States wants to protect democracy and public health, it must acknowledge that internet platforms are causing great harm and accept that executives like Mark Zuckerberg are not sincere in their promises to do better. The “solutions” Facebook and others have proposed will not work. They are meant to distract us.

The news of recent weeks has highlighted both the good and the bad of platforms like Facebook and Twitter. The good: Graphic videos of police brutality from multiple cities transformed public sentiment about race, creating a potential movement to address an issue that has plagued the country since its founding. Peaceful protesters leveraged social platforms to get their message across, outcompeting the minority that advocated violent tactics. The bad: Waves of disinformation from politicians, police departments, Fox News, and others denied the reality of police brutality, overstated the role of looters in the protests, and warned of busloads of antifa radicals. Only a month ago, critics exposed the role of internet platforms in undermining the country’s response to the COVID-19 pandemic by amplifying health disinformation. That disinformation convinced millions that face masks and social distancing were culture-war issues, rather than public health guidance that would enable the economy to reopen safely.

The internet platforms have worked hard to minimize the perception of harm from their business. When faced with a challenge they cannot deny or deflect, their response is always an apology and a promise to do better. In the case of Facebook, University of North Carolina scholar Zeynep Tufekci coined the term “Zuckerberg’s 14-year apology tour.” When challenged to offer a roadmap, tech CEOs leverage the opaque nature of their platforms to create the illusion of progress, while minimizing the impact of any proposed solution on their business practices. Despite many disclosures of harm, beginning with their role in undermining the integrity of the 2016 election, these platforms have remained successful at framing the issues in a favorable light.

When pressured to reduce targeted harassment, disinformation, and conspiracy theories, the platforms frame the solution in terms of content moderation, implying there are no other options. Despite several waves of loudly promoted investments in artificial intelligence and human moderators, no platform has been successful at limiting the harm from third-party content. When faced with public pressure to remove harmful content, internet platforms refuse to address root causes, which means old problems never go away even as new ones develop. For example, banning Alex Jones removed his conspiracy theories from the major sites, but did nothing to stop the flood of similar content from others.

The platforms respond to each new public relations challenge with an apology, another promise, and sometimes an increased investment in moderation. They have done it so many times I have lost track. And yet, policy makers and journalists continue to largely let them get away with it.

We need to recognize that internet platforms are experts in human attention. They know how to distract us. They know we will eventually get bored and move on.

Despite copious evidence to the contrary, too many policy makers and journalists behave as if internet platforms will eventually reduce the harm from targeted harassment, disinformation, and conspiracies through content moderation. There are three reasons why content moderation cannot do so: scale, latency, and intent. These platforms are huge. In the most recent quarter, Facebook reported that 1.7 billion people use its main platform every day and roughly 2.3 billion use its four large platforms. The company does not disclose the number of messages posted each day, but it is likely in the hundreds of millions, if not a billion or more, on Facebook alone. Substantial investments in artificial intelligence and human moderators cannot prevent millions of harmful messages from getting through.

The second hurdle is latency, the time it takes for moderation to identify and remove a harmful message. AI works rapidly, but human moderators can take minutes or days. This means a large number of messages will circulate for some time before they are removed, and harm occurs in that interval. It is tempting to imagine that AI can solve everything, but that day is a long way off. AI systems are trained on data sets generated by older systems, and they are not yet capable of interpreting nuanced content like hate speech.

The final – and most important – obstacle for content moderation is intent. The sad truth is that the content we have asked internet platforms to remove is exceptionally valuable and they do not want to remove it. As a result, the rules for AI and human moderators are designed to approve as much content as possible. Alone among the three issues with moderation, intent can only be addressed with regulation.

A permissive approach to content has two huge benefits for platforms: profits and power. The business model of internet platforms like Facebook, Instagram, YouTube, and Twitter is based on advertising, the value of which depends on consumer attention. Where traditional media properties create content for mass audiences, internet platforms optimize content for each user individually, using surveillance to enable exceptionally precise targeting. Advertisers are addicted to the precision and convenience offered by internet platforms, and every year they shift an ever larger share of their spending to them, generating massive profits and wealth for the platforms. Limiting the amplification of targeted harassment, disinformation, and conspiracy theories would lower engagement and revenues.

Power, in the form of political influence, is an essential component of success for the largest internet platforms. They are ubiquitous, which makes them vulnerable to politics. Tight alignment with the powerful ensures success in every country, which leads platforms to support authoritarians, including ones who violate human rights. For example, Facebook has enabled regime-aligned genocide in Myanmar and state-sponsored repression in Cambodia and the Philippines. In the United States, Facebook and other platforms have ignored or altered their terms of service to enable Trump and his allies to use their platforms in ways that would normally be prohibited. When journalists exposed Trump campaign ads that violated Facebook’s terms of service with falsehoods, for instance, Facebook changed its terms of service rather than pulling the ads. And Facebook declined to follow Twitter’s lead in placing a public safety warning on a Trump post that promised violence in the event of looting.

Thanks to their exceptional targeting, platforms play an essential role in campaign fundraising and communications for candidates of both parties. While the dollars are not meaningful to the platforms, they derive power and influence from playing an essential role in electoral politics. This is particularly true for Facebook.

At present, platforms bear no liability for the harms caused by their business model. Their algorithms will continue to amplify harmful content until there is an economic incentive to do otherwise. The solution is for Congress to change those incentives: carve out an exception to the safe harbor of Section 230 of the Communications Decency Act for algorithmic amplification of harmful content, and guarantee a right to litigate against platforms for that harm. This solution does not impinge on First Amendment rights; platforms would remain free to continue their existing business practices, but with liability for the harms they cause.

Thanks to COVID-19 and the protest marches, consumers and policy makers are far more aware of the role that internet platforms play in amplifying disinformation. For the first time in a generation, there is support in both parties in Congress for revisions to Section 230. There is increasing public support for regulation.

We do not need to accept disinformation as the cost of access to internet platforms. Harmful amplification is the result of business choices that can be changed. It is up to us and to our elected representatives to make that happen. The pandemic and the social justice protests underscore the urgency of doing so.
