The Next Tech Backlash Will Be About Hygiene

Penn is Associate Professor of AI Ethics and Society at the University of Cambridge, Faculty Affiliate at Harvard University’s Berkman Klein Center for Internet & Society, Associate Fellow at the Leverhulme Centre for the Future of Intelligence, and Research Fellow at St. Edmund’s College.

For centuries it was biology that made humans sick. Today, it is often stress.

So argues Dr. Gabor Maté about the unrecognized toll that “normal” modern life takes on your mental and physical health. Dr. Maté's research, which struck a chord in 2023, invites reflection on the rollout of generative AI into daily life in 2024.

As half of British teens report feeling addicted to social media, and as the U.S. surgeon general offers a rare caution against its health risks, the infusion of generative AI into social media appears to threaten our basic hygiene, meaning “the conditions or practices conducive to maintaining health and preventing disease.”

Dr. Maté and AI may seem like unlikely bedfellows. AI attends (purportedly) to the statistical dynamics of the mind. Dr. Maté, in contrast, diagnoses the interplay between human minds and bodies. Comparing the two lends perspective on our global polycrisis and the stress load that AI could help make into a new “normal.”

AI companies raised $27 billion in 2023; a tidy sum they will now need to recoup. In 2024, 49% of the global population will go to the polls; a lump-in-the-throat test for generative AI, persuasive technology, and democracy. There is plenty of room here for stress, but also for relief. As technophiles trial human-computer interactions for profit and power, I am called by Dr. Maté and other observers (like Tricia Hersey and Ivan Illich) to consider a relevant lesson from the history of medicine.

The medical profession has long struggled with a phenomenon called iatrogenesis. It puts into words a recurring dilemma: what to do when attempts to solve a problem only make it worse? Medical treatments, for instance, can make us sicker. “The medical establishment has become a major threat to health,” argued Illich in a famous polemic from 1974.

A wealth of research shows that AI is, in many ways, iatrogenic: it deepens misogyny, racism, financial fraud and the climate crisis. One additional—yet overlooked—risk, I argue, is poor hygiene.

How much digital is… enough?

Americans now spend eight hours a day consuming digital media, which is longer than they sleep. As with the presence of sugar in foods, we face something of a collective addiction to technological gratification. 

Digital tools have become iatrogenic: they harm us as we "use" them to try to improve our lives. Withdrawal takes many forms, both individual (e.g., a digital Sabbath, deleting a dating app) and collective (e.g., employees refusing work emails after 6pm, bubbling tech protest, a “no laptops” café).

Generative AI amplifies the siren song of digital media, luring us to converse with a “famous” avatar or digital “girlfriend”, snap a photo of every meal to calculate our calorie count, or decide, against all odds, where to park. The list of indulgences goes on. Big Tech salivates over the prospect of a personalized AI assistant that knows just how to reach you… and reach you… and reach you… This is Silicon Valley’s goal for 2024.

More than Luddism

To my eye, each new AI product and service amounts to a decision about daily hygiene. What does "healthy" AI mean to you? What does it mean for your children or extended family? I cannot help but wonder what the aggregate of these individual decisions will crystallize into as new social norms in the 2020s and beyond. If becoming a vegetarian signals a desire to limit one’s meat consumption (a norm acknowledged around the globe), what is the word for limiting one’s exposure to AI?

It is not, I wager, Luddism. That noble effort speaks with force to the political economy of our digital dilemma, but it does not, I’d argue, speak as readily to the kitchen-table pains of our weary bodies, as Dr. Maté does. Whatever name this emerging movement takes, I don’t think it will be Luddism. The Luddites responded to their historical moment as we must respond to ours.

AI Ethics, sadly, is not it either. Binding “AI” to “Ethics” rhetorically makes it difficult to imagine them separately. This limits us—arbitrarily—to futures that necessitate AI. What then becomes of the countless scenarios in which AI is unnecessary or vastly inferior (due to its iatrogenic qualities) to another toolset?

A new hope

That leads us back to you and your daily life with AI. In a previous industrial revolution, it was regular people in search of time together who gave us beloved cultural institutions like the weekend. Social norms bubbled up from below, not above. What does this mean for AI?

In eighteenth-century Paris, the historian Lorraine Daston argues, government-led attempts to dictate urban hygiene norms tended to fizzle out. Police pushed citizens to sweep their steps by 7am each day. Citizens refused.

At stake in these seemingly trivial stand-offs was an entire vision of the future, Daston writes. Competition over hygiene among major European cities created a "first version of modernity, a modernity that had as yet very little to do with science and technology… and everything to do with orderliness, predictability, and, yes, rules."

Our gift to future generations, then, may be a diffuse (at first) yet discernible (eventually) set of rules, or etiquette, for where AI is wanted and, more importantly, where it is not. This “regulatory” trend will not need to be led by bureaucrats in Washington, Brussels or Beijing. It will be led by regular people.

It may be children who gain protections first. In labor history, Britain’s Ten Hours Movement countered early industrial capitalism by winning rights for children (1833) then women (1844) then men (1847). Today, debate over the Kids Online Safety Act eyes a similar path: why not protect adults too?

As with the shifting perception of oil and cigarettes over the past fifty years, the future of AI might surprise inhabitants of our present day. Burnout, digital fatigue, algorithmic racism, sexism, ableism, and authoritarianism, and a global mental health crisis together raise the prospect of an unprecedented coalition against digital maximalists and their presumptions of “AI-first.”

This counter movement takes a million forms: knobs and dials—not screens—in cars, keeping smart devices off WiFi, dating offline (rather than through Tinder), prototype AI blockers, doomscroll limits. In the UK, mobile phones have just been banned from schools. In the not so distant future, one can imagine a "Home" mode on your phone, like "Airplane" mode, that only allows messages from friends and family, not work. 

I call this design ethos “decomputerization.” The word does not yet appear in the Oxford English Dictionary. When input into Gmail or Microsoft Word, it is underlined in red.

Even if it is taboo to some, decomputerization is in the air. "I needed to take a radical approach and remove myself from my relationship with machines in this world," wrote (of all people!) a co-founder of Daft Punk. In October, Kendrick Lamar debuted, and sold out, "a smartphone built for minimal usage."

Ecology-first vs digital-first

The final outcome of this contest will be decided by nine planetary boundaries. "Digitization is a climate disaster," warns Ben Tarnoff, "If corporations and governments succeed in making vastly more of our world into data, there will be less of a world left for us to live in."

As the polycrisis deepens, Silicon Valley may find itself outmanned (and/or outvoted) on deciding what the future has in store. New research suggests that by 2030, data centers for large-scale AI could “draw up to 21% of the world's electricity supply.” Already today, with data centers consuming around 1-3% of global electricity, contests over water usage fester in Taipei and Monterrey, with families and farmers resisting AI’s thirsty supply chain.

The fight between “digital-first” and “ecology-first” will cast new light on the power of bottom-up versus top-down regulation of AI. For you and me, daily hygiene decisions about surveillance at work, phone use at home, dating norms, or media and election standards will shape where automation is durably welcome and where it is not.

In other words, the shape of AI’s integration into our world may trace the limits of our finite mental and physical health. This connection lends Dr. Maté’s insights about our emotions, bodies, and daily stresses greater priority than one might assume. 

Perhaps the future will not be quite so digital (and unhygienic) after all.


TIME Ideas hosts the world's leading voices, providing commentary on events in news, society, and culture. We welcome outside contributions. Opinions expressed do not necessarily reflect the views of TIME editors.