I am really sad about Facebook.
I got involved with the company more than a decade ago and have taken great pride and joy in the company’s success … until the past few months. Now I am disappointed. I am embarrassed. I am ashamed. With more than 1.7 billion members, Facebook is among the most influential businesses in the world. Whether they like it or not–whether Facebook is a technology company or a media company–the company has a huge impact on politics and social welfare. Every decision that management makes can matter to the lives of real people. Management is responsible for every action. Just as they get credit for every success, they need to be held accountable for failures. Recently, Facebook has done some things that are truly horrible, and I can no longer excuse its behavior.
Nine days before the November 2016 election, I sent the email above to Facebook founder Mark Zuckerberg and chief operating officer Sheryl Sandberg. It was the text for an op-ed I was planning to publish about problems I was seeing on Facebook. Earlier in the year, I noticed a surge of disturbing images, shared by friends, that originated on Facebook Groups ostensibly associated with the Bernie Sanders campaign, but it was impossible to imagine they came from his campaign. I wanted to share with Sandberg and Zuckerberg my fear that bad actors were exploiting Facebook’s architecture and business model to inflict harm on innocent people.
I am a longtime tech investor and evangelist. Tech has been my career and my passion. I had been an early adviser to Zuckerberg–Zuck, to many colleagues and friends–and an early investor in Facebook. I had been a true believer for a decade. My early meetings with Zuck almost always occurred in his office, generally just the two of us, so I had an incomplete picture of the man, but he was always straight with me. I liked Zuck. I liked his team. I was a fan of Facebook. I was one of the people he would call on when confronted with new or challenging issues. Mentoring is fun for me, and Zuck could not have been a better mentee. We talked about stuff that was important to Zuck, where I had useful experience. More often than not, he acted on my counsel.
When I sent that email to Zuck and Sheryl, I assumed that Facebook was a victim. What I learned in the months that followed–about the 2016 election, about the spread of Brexit lies, about data on users being sold to other groups–shocked and disappointed me. It took me a very long time to accept that success had blinded Zuck and Sheryl to the consequences of their actions. I have never had a reason to bite Facebook’s hand. Even at this writing, I still own shares in Facebook. My criticism of the company is a matter of principle, and owning shares is a good way to make that point. I became an activist because I was among the first to see a catastrophe unfolding, and my history with the company made me a credible voice.
This is a story of my journey. It is a story about power. About privilege. About trust, and how it can be abused.
The massive success of Facebook eventually led to catastrophe. The business model depends on advertising, which in turn depends on manipulating the attention of users so they see more ads. One of the best ways to manipulate attention is to appeal to outrage and fear, emotions that increase engagement. Facebook’s algorithms give users what they want, so each person’s News Feed becomes a unique reality, a filter bubble that creates the illusion that most people the user knows believe the same things. Showing users only posts they agree with was good for Facebook’s bottom line, but some research showed it also increased polarization and, as we learned, harmed democracy.
To feed its AI and algorithms, Facebook gathered data anywhere it could. Before long, Facebook was spying on everyone, including people who do not use Facebook. Unfortunately for users, Facebook failed to safeguard that data. Facebook sometimes traded the data to get better business deals. These things increased user count and time on-site, but it took another innovation to make Facebook’s advertising business a giant success.
From late 2012 to 2017, Facebook perfected a new idea–growth hacking–where it experimented constantly with algorithms, new data types and small changes in design, measuring everything. Growth hacking enabled Facebook to monetize its oceans of data so effectively that growth-hacking metrics blocked out all other considerations. In the world of growth hacking, users are a metric, not people. Every action a user took gave Facebook a better understanding of that user–and of that user’s friends–enabling the company to make tiny “improvements” in the user experience every day, which is to say it got better at manipulating the attention of users. Any advertiser could buy access to that attention. The Russians took full advantage. If civic responsibility ever came up in Facebook’s internal conversations, I can see no evidence of it.
The people at Facebook live in their own bubble. Zuck has always believed that connecting everyone on earth was a mission so important that it justified any action necessary to accomplish it. Convinced of the nobility of their mission, Zuck and his employees seem to listen to criticism without changing their behavior. They respond to nearly every problem with the same approach that created the problem in the first place: more AI, more code, more short-term fixes. They do not do this because they are bad people. They do this because success has warped their perception of reality. They cannot imagine that the recent problems could be in any way linked to their designs or business decisions. It would never occur to them to listen to critics–How many billion people have the critics connected?–much less to reconsider the way they do business. As a result, when confronted with evidence that disinformation and fake news had spread over Facebook and may have influenced a British referendum or an election in the U.S., Facebook followed a playbook it had run since its founding: deny, delay, deflect, dissemble. Facebook only came clean when forced to, and revealed as little information as possible. Then it went to Plan B: apologize, and promise to do better.
Thanks to Facebook’s extraordinary success, Zuck’s brand in the tech world combines elements of rock star and cult leader. He is deeply committed to products and not as interested in the rest of the business, which he leaves to Sandberg. According to multiple reports, Zuck is known for micromanaging products and for being decisive. He is the undisputed boss. Zuck’s subordinates study him and have evolved techniques for influencing him. Sheryl Sandberg is brilliant, ambitious and supremely well organized. Given Zuck’s status as the founder, the team at Facebook rarely, if ever, challenged him on the way up and did not do so when bad times arrived. (A Facebook spokesperson replies: “People disagree with Mark all the time.”)
You would think that Facebook’s users would be outraged by the way the platform has been used to undermine democracy, human rights, privacy, public health and innovation. Some are, but nearly 1.5 billion people use Facebook every day. They use it to stay in touch with distant relatives and friends. They like to share their photos and their thoughts. They do not want to believe that the same platform that has become a powerful habit is also responsible for so much harm. Facebook has leveraged our trust of family and friends to build one of the most valuable businesses in the world, but in the process, it has been careless with user data and aggravated the flaws in our democracy while leaving citizens ever less capable of thinking for themselves, knowing whom to trust or acting in their own interest. Bad actors have had a field day exploiting Facebook and Google, leveraging user trust to spread disinformation and hate speech, to suppress voting and to polarize citizens in many countries. They will continue to do so until we, in our role as citizens, reclaim our right to self-determination.
We need to begin to reform Facebook and Big Tech in these key areas:

Protect democracy
Democracy depends on shared facts and values. It depends on deliberation and the rule of law. It depends on having a free press and other countervailing forces to hold the powerful accountable. Facebook (along with Google and Twitter) has undercut the free press from two directions: it has eroded the economics of journalism and then overwhelmed it with disinformation. On Facebook, information and disinformation look the same; the only difference is that disinformation generates more revenue, so it gets better treatment. To Facebook, facts are not an absolute; they are a choice to be left initially to users and their friends but then magnified by algorithms to promote engagement. In the same vein, Facebook’s algorithms promote extreme messages over neutral ones, which can elevate disinformation over information, conspiracy theories over facts. Like-minded people can share their views, but they can also block out any fact or perspective with which they disagree.
At Facebook’s scale–or Google’s–there is no way to avoid influencing the lives of users and the future of nations. Recent history suggests that the threat to democracy is real. The efforts to date by Facebook, Google and Twitter to protect future elections may be sincere, but there is no reason to think they will do anything more than start a game of whack-a-mole with those who choose to interfere. Only fundamental changes to business models can reduce the risk to democracy. Facebook remains a threat to the powerless around the world. The company’s Free Basics service has brought Internet service to poor people in roughly 60 countries, but at the cost of massive social disruption. Lack of language skills and cultural insensitivity have blinded Facebook to the ways in which its platform can be used to harm defenseless minorities. This has already played out with deadly outcomes in Sri Lanka and Myanmar.
Facebook remains a threat to privacy. The company’s commitment to surveillance would make an intelligence agency proud, but not so its handling of data. There need to be versions of Facebook News Feed and all search results that are free of manipulation. Users need to own their data and have absolute control over how it gets used. Users have a right to know the name of every organization and person who has their data. This would apply not just to the platforms but also to cellular carriers and the third parties that gain access to user data. Another important regulatory opportunity is data portability, such that users can move everything of value from one platform to another. This would help enable startups to overcome an otherwise insurmountable barrier to adoption. Platforms should also be transparent to users, advertisers and regulators.
Control your data
Users should always own all their own data and metadata–and they should be compensated much better for it. No one should be able to use a user’s data in any way without explicit, prior consent. Third-party audits of algorithms, comparable to what exists now for financial statements, would create the transparency necessary to limit undesirable consequences. There should be limits on what kind of data can be collected, such that users can limit data collection or choose privacy. This needs to be done immediately, before new products like Alexa and Google Home reach mass adoption. Smart home devices are currently an untamed frontier of data, with the potential for massive abuse. Lastly, I would like to prevent deployment of advanced technologies like artificial intelligence and automated bots without proof that they serve humans rather than exploit them. This could be accomplished through an equivalent to the Food and Drug Administration (FDA) for technology.
If I consider Google, Amazon and Facebook purely in investment terms, I cannot help but be impressed by the brilliant way they have executed their business plans. The problem is unintended consequences, which are more numerous and severe than we can afford. Google and Facebook are artificially profitable because they do not pay for the damage they cause.
The U.S. economy has historically depended on startups far more than other economies, especially in technology. If my hypothesis is correct, the country has begun a risky experiment in depending on monopolists for innovation, economic growth and job creation.
Google, Amazon and Facebook have followed the monopolist’s playbook and built “no go” zones around their core operations. Their success has raised the bar for startups, narrowing the opportunities for outsize success and forcing entrepreneurs to sell out early or pursue opportunities with less potential. They have built additional walls through the acquisition of startups that might have posed a competitive threat. These companies do not need to choke off startup activities to be successful, but they cannot help themselves. That is what monopolists do.
In terms of economic policy, I want to set limits on the markets in which monopoly-class players like Facebook, Google and Amazon can operate. The economy would benefit from breaking them up. A first step would be to prevent acquisitions, as well as cross subsidies and data sharing among products within each platform. I favor regulation as a way to reduce harmful behavior. The most effective regulations will force changes in business models.
Relative to today’s standards, these recommendations sound extreme, but there may be no other way to protect children, adults, democracy and the economy. Our parents and grandparents had a similar day of reckoning with tobacco. Now it’s our turn, this time with Internet platforms.
Make it human
From a technology perspective, the most promising path forward is through innovation, something over which the platforms have too much influence today. Antitrust enforcement can create space for innovation, but we need more. I propose that Silicon Valley embrace human-driven technology as the Next Big Thing. In America, if you want to solve a problem, it helps to incorporate the profit motive, which we can do by shifting the focus of technology from exploiting the weakest links in human psychology to a commitment to empowering users.
What would human-driven technology look like? It would empower users rather than exploit them. Human-driven social networks would enable sharing with friends, but without massive surveillance, filter bubbles and data insecurity. In exchange for adopting a benign business model, perhaps based on subscriptions, startups would receive protection from the giants. Given that social media is practically a public utility, I think it is worth considering more aggressive strategies, including government subsidies. The government already subsidizes energy exploration, agriculture and other economic activities that the country considers to be a priority, and it is not crazy to imagine that civically responsible social media may be essential to the future of the country. The subsidies might come in the form of research funding, capital for startups, tax breaks and the like.
The Next Big Thing offers opportunities to rethink the architecture of the Internet. For example, I would like to address privacy with a new model of authentication for website access that permits websites to gather only the minimum amount of data required for each transaction. It would work like a password manager, but with a couple of important distinctions: it would go beyond storing passwords to performing log-ins, and it would store private data on the device, not in the cloud. Apple has embraced this model, offering its customers valuable privacy and security advantages over Android.
All the public-health threats of Internet platforms derive from design choices. Technology has the power to persuade, and the financial incentives of advertising business models guarantee that persuasion will always be the default goal of every design. Every pixel on every screen of every Internet app has been tuned to influence users’ behavior. Not every user can be influenced all the time, but nearly all users can be influenced some of the time. In the most extreme cases, users develop behavioral addictions that can lower their quality of life and that of family members, co-workers and close friends. We don’t know the prevalence of behavioral addictions to the Internet, but anecdotally they seem widespread. Millions of people check their phone first thing in the morning. For most, the big question is whether they do so before they pee or while they are peeing. Far too many people report difficulty sleeping because they cannot stop using their phone or tablet. It is possible that most of Facebook’s daily users have some level of behavioral addiction. The problem with addiction is that it deprives the victim of agency. Even when an addict understands the potential for harm, he or she cannot help but continue the activity. To change that will require more than regulation. It will require an investment in research and public-health services to counter Internet addiction.
A growing percentage of children prefer the hyperstimulation of virtual experiences to the real world. Products like Instagram empower bullies. Texting has replaced conversation for many kids. It’s hard to know how this will turn out, but some medical researchers have raised alarms, noting that we have allowed unsupervised psychological experiments on millions of people. Medical research bolsters the case for regulation. In addition to limits on the ages at which children may use screens like smartphones and tablets, there is evidence that phones and computers can cause distraction in classrooms.
The harm to public health, democracy, privacy and competition caused by Facebook and other platforms results from their business models, which must be changed. As users, we have more power to force change than we realize. We can alter our behavior. We can create a political movement. We can insist on government intervention to promote human-driven technology as an alternative to extractive technology. At the same time, the government must take steps to repair the damage from Internet platforms. We need to rebuild institutions, find common ground with those with whom we disagree, and start acting like one country again. The political and social power of Facebook and the other Internet platforms is unhealthy and inappropriate in a democracy like ours. We must hold them accountable and insist on real-world solutions, not more code.
Editor’s note: Invited to respond, Facebook pointed to a 2018 post by Sheryl Sandberg detailing the company’s efforts on user privacy, democracy, security and other issues. It can be found at time.com/sandberg-response
McNamee has been a Silicon Valley investor for 35 years. His most recent fund, Elevation, included U2’s Bono as a co-founder. This piece is adapted from his new book Zucked: Waking Up to the Facebook Catastrophe by Roger McNamee. Published by arrangement with Penguin Press, a member of Penguin Random House LLC.
This appears in the January 28, 2019 issue of TIME.