By Arun Sundararajan
April 5, 2018
IDEAS
Sundararajan is a professor of business at New York University and the author of The Sharing Economy: The End of Employment and the Rise of Crowd-Based Capitalism.

We may never know exactly what caused Nasim Aghdam to take the drastic steps of arming herself with a handgun, seeking out YouTube’s San Bruno, Calif., headquarters, and shooting three innocent employees before taking her own life. Snippets of evidence have emerged that point to varied reasons for Aghdam’s anger toward the Internet giant: her website (deactivated following the shooting) complaining about her YouTube content being demonetized; a video she posted on YouTube (since removed) expressing fury about inconsistencies in how YouTube censored her workout videos; a picture of her holding a banner protesting YouTube’s “dictatorship.”

It may be tempting to chalk the incident up as yet another random U.S. workplace shooting, or to conclude that a growing national sense of unease about the censorship and privacy policies of digital giants like Facebook and Google has reached a boiling point. But this tragedy to some degree reflects a new and different kind of anguish, one born of a world in which faceless and opaque algorithms replace more familiar physical-world institutions and bosses, not just as our censors but as our paths to opportunity. As the economic and societal power of these algorithms expands, it is essential that the companies in control prove worthy of the public trust we place in them.

YouTube represents one of the earliest examples of crowd-based capitalism: a new way of organizing economic activity wherein a digital platform, a hybrid of sorts between the traditional corporation and the invisible hand of the marketplace, gathers customers and connects them to a distributed and heterogeneous crowd of on-demand talent, individual entrepreneurs and small businesses. We’ve seen examples of these platforms emerge in numerous industries over the past decade, ranging from sharing-economy giants like Lyft and Uber for transportation and Airbnb for short-term accommodation to talent marketplaces like UpCounsel for legal services and Catalant for management consulting.

No doubt, many of these platforms have tremendously expanded and equalized access to opportunity. The ambitions of aspiring sitcom writers or movie directors no longer need be stymied by limited programming slots or the whims of studio executives. Today, access to a market of billions is just a few clicks away, allowing millions of content creators to find and monetize an audience while spawning a new subculture of YouTube celebrity. In fact, digital video advertising revenues in China are projected to exceed those generated by television ads by 2021, a tipping point the U.S. will undoubtedly reach within a few years. Airbnb empowers over four million hosts to run a new kind of bed-and-breakfast business out of their homes. Uber, Lyft and Chinese ride-hailing giant Didi Chuxing have created or expanded the passenger base for tens of millions of drivers around the world.

But a perceived loss of individual agency when things go wrong can make pursuing this new opportunity seem like a Faustian bargain for some. For millions of Amazon marketplace sellers, many of them small retailers whose customer reach has been dramatically expanded by the platform, success rests on the unpredictable whims of how the retailing giant’s algorithm ranks results for consumer search queries. While a majority of ride-hail drivers are satisfied with their experience with Uber and feel empowered by the work flexibility such platforms afford them, many report a sense of helplessness when they are deactivated by the platform with no access to a human. This frustration may not result in drastic action, but do not misinterpret its lack of publicity as a sign that it isn’t growing.

Failure also becomes less predictable and explainable. In the past, if you were fired from your job, or your TV show was canceled, there would likely be warning signals from your boss or an advance drop in your viewership numbers; if slowing foot traffic to your restaurant or retail store threatened the viability of your small business, the signs would build up over many months. Now, the fluctuations can be sudden and severe. A Google or Yelp machine-learning algorithm — which perhaps even the programmers at the company do not fully understand — changes, and your livelihood vanishes.

While technological progress makes a certain dehumanization of our experiences inevitable, the ensuing blowback can be better managed through good design choices. Part of the solution is to invest not just in greater accuracy but also in greater transparency. Yes, it is both necessary and commendable that YouTube and Facebook are working hard to improve their performance as society’s de facto censors, striving to do a better job of separating the fake news from the real and the objectionable content from the benign (in part by hiring thousands of humans to supplement and improve their automated systems). But an equally important accompanying change must be to give users greater visibility into how these algorithms actually make their decisions.

Such transparency doesn’t just enhance predictability or give users a better understanding of the negative outcomes they encounter. That understanding also makes it easier for users to identify when they have actually been treated unfairly, and it would enable platforms to provide some form of due process: a system to address the fallout from erroneous assessments by algorithms.

The active pursuit of greater transparency has yet another happy by-product: it will favor algorithms that are easily explainable rather than “black boxes.” Being able to provide a simple, clear and logical explanation for how automated systems make their choices will be critical both to social acceptance and to understanding biases, as algorithms play a central role in deciding not just what content is censored or what transportation we get, but what college a student is admitted to, what medical treatment a patient receives, what prison sentences are meted out or perhaps even whether an autonomous weapon takes a life. The fact that so many producers do not know precisely how and why they’re being treated in a way they consider unfair signals greater trouble ahead if we do not correct our course.

We are wading deeper into an uncertain digital future, where platforms and their artificial intelligence–powered algorithms take on greater importance in the world, maybe even evolving into the most significant institutions of society. The emergence of entities that rival nation-state governments in their influence and power is not a new development: a glance through history reveals that organizations ranging from the Catholic Church and the British East India Company to Florence’s Medici family played similar roles at different times and in different places. What is new, however, is that this time around, we have the opportunity to explicitly design these new institutions, to code them in a way that favors positive societal outcomes. Transparency should be a central guiding principle. It may not eliminate the angst, but it will go a long way toward alleviating it.
