What broke the Internet? Was it the business model, the tech bros’ myopia, Russian infiltrators, millennials? Lately commentators have heaved a world-weary sigh and pointed at a more existential culprit: human nature. That sounds like a compelling, intuitive answer; after all, people online do lots of awful and ugly stuff. As a diagnosis, though, it’s dangerously superficial and lets tech companies off easy. The worst qualities of online space aren’t inevitable: they’re a natural result of a lack of human dignity.
Dignity, from the Latin for “worthy,” matters because it’s an idea we’ve returned to again and again over the centuries as a way of understanding how humans can live together decently and respectfully. But it’s been absent from our recent conversations about technological ills. This history puts what’s gone wrong online in a new light, and suggests how we might set it right.
Consider the relationship between dignity and conflict described by Donna Hicks, a Harvard conflict-resolution expert who has worked on the conflict between Israelis and Palestinians and on conflicts in Northern Ireland and Colombia. Over decades in the field, Hicks saw a repeating pattern: conflicts arose when people felt they were being disrespected and treated as worthless. “We long to look good in the eyes of others, to feel good about ourselves, to be worthy of others’ care and attention,” Hicks writes. When we are treated as if we don’t matter or aren’t due respect, we become vindictive, tribalistic and vengeful. “Research suggests that we are just as programmed to sense a threat to our dignity as we are to a physical threat,” Hicks writes. “Neuroscientists have found that a psychological injury such as being excluded stimulates the same part of the brain as a physical wound.”
In Hicks’ model, to be treated with dignity is to be seen and accepted for who we are, to be treated fairly and given the benefit of the doubt, and to be offered independence and accountability. Online spaces fail us on all of these fronts. We’re encouraged to be something other than what we are. We struggle to be heard and seen. If “love is attention,” as Hicks wrote in her book Dignity, then platforms intrude on our relationships, determining who and what gets attended to, on a mass scale, for the sake of ad clicks. To be online, in other words, is to be in a state of chronic, near-constant dignity violation. It’s no surprise that anger, divisiveness and bad behavior follow.
The authors of the Universal Declaration of Human Rights, the founding document of modern human rights law, understood this relationship between dignity and conflict. The Declaration’s first sentence begins with a recognition of “the inherent dignity and of the equal and inalienable rights of all members of the human family [that] is the foundation of freedom, justice and peace in the world.” Without dignity, they understood, peace is impossible.
Dignity in this sense emerged in the 1700s. In pre-Renaissance Europe, according to philosopher Michael Rosen’s book on the topic, the word meant something quite different — “high social status and the honors and respectful treatment that are due to someone who occupied that position.” Dignity was for nobles and kings, not for ordinary men, much less women.
The philosopher Immanuel Kant’s inversion of this idea was one of the great breakthroughs of the Enlightenment. All human beings are worthy of regal respect, Kant argued, not because of the “position that an individual occupies within a particular society… but the lawgiving function of morality, something that human beings carry inalienably within themselves.” Human beings ought to be treated as ends, not as means to an end, because we are capable of making moral decisions and restraining our impulses to enact them.
Online platforms like Twitter, YouTube and Facebook threaten our dignity in this sense. Growth hacking and gamification, pursuits at the core of most consumer-facing startups, are about nothing if not treating people instrumentally, as means to the end of growing active usership and revenue per user. “Metrics are like parasites, or undead spirits. They take over human beings,” writes the tech philosopher Joe Edelman. Advertising business models treat our attention and conversation as means to the end of revenue generation.
Our ability to make choices that truly reflect our values is subsumed by nudges to do more of what platforms want. As Edelman points out, if YouTube really cared about our intentions and values, then when we logged on to learn ukulele it would try to serve that need (and then send us off to practice!) rather than tapping into our lizard brains with unrelated videos to get us to spend more time watching.
We have always relied on tools to extend human capacity. Cooking with fire allowed us to create “external stomachs,” in the words of Kevin Kelly, a co-founder of Wired, that gave us better nutrition and allowed us to survive in new environments. Increasingly, though, our tools have turned on us. And social technologies don’t just treat us instrumentally; they encourage us to look at one another that way as well, as a means to higher status, a better job or a measure of self-worth.
Some of this may be inevitable in a capitalist system, but not all technologies are undignified. Consider video-chat tools like FaceTime, expressive ones like Illustrator, or screen readers that allow blind people to participate in online conversation. None of these technologies is perfect, but they share some common threads: a focus on empowering users and a genuine respect for their desires rather than manipulation, and a prioritization of open-ended exploration over tight feedback loops and optimization.
There are emerging examples of technologies that help us overcome our impulses and make moral choices. The moral psychologist Jonathan Haidt and his collaborators Caroline Mehl and Raffi Grinberg have developed an online education platform, OpenMind, which walks people through the cognitive biases that tend to distort our view of other people’s positions. According to their data, months after completing the curriculum, people are less likely to dismiss ideas simply because they come from political opponents. A more dignified approach to tech is even catching on in Silicon Valley. The philosopher Joe Edelman runs a class in values-driven design that has been embraced by high-ranking designers at Facebook and Apple, among other companies.
Over our history, we’ve found ways to create tools and spaces that call out and amplify the best parts of human nature. That’s the great story of civilization: the development of technologies, like written language, that have moderated our animal impulses. What we need now is a new technological enlightenment: a turn from our behaviorally optimized dark age to an era of online spaces that embrace what makes us truly human. We need online spaces that treat us as the unique, moral beings we are; spaces that treat us, and encourage us to treat one another, with care, respect and dignity.
Pariser, who coined the phrase filter bubble in his book of the same name, is a fellow at the New America Foundation. @elipariser