It Will Take More Than Robots to Manage the Robots

Gibbs, a former writer and editor in chief at TIME, is the director of the Shorenstein Center and the Edward R. Murrow Professor of the Practice of Press, Politics and Public Policy at Harvard Kennedy School. She is the co-author, with Michael Duffy, of two best-selling presidential histories: The Presidents Club: Inside the World’s Most Exclusive Fraternity and The Preacher and the Presidents: Billy Graham in the White House.

By now the sophistication of false information about Israel and Hamas is clear to anyone who opened their phone this week. As tech platforms rely ever more on artificial intelligence in their battle against disinformation, the havoc in the Middle East exposes the limits of technology to police technology’s harms. It is more important than ever that we understand how global platforms like Meta, Google, and X, the platform formerly known as Twitter, decide what content gets amplified and what gets taken down. “Just trust us” doesn’t cut it when they’re losing the battle against ever better-armed propagandists.

It’s not as though platforms didn’t know they had a huge disinformation problem that human content moderators alone could not solve. Two years ago, Facebook whistleblower Frances Haugen detailed for Congress how growth and profit drove decisions: “The result has been more division, more harm, more lies, more threats and more combat,” she testified. “In some cases, this dangerous online talk has led to actual violence that harms and even kills people.”

But Congress still can’t agree on basic guardrails for holding platforms accountable. My Harvard Kennedy School colleague, computer scientist Latanya Sweeney, estimates that within a year the internet as we’ve known it will have been supplanted by one where the vast majority of content comes from bots. And if there’s one thing we’ve learned as generative AI struts out of the lab and into our feeds and phones and lives, it is how confidently and flamboyantly it lies.

We need a wider range of voices to address the responsibility platforms bear for protecting the health of our information ecology. I don’t just mean technologists and psychologists, economists and ethicists, though all need a seat at the table: I mean both the ghosts and the machines. The people affected by these technologies in ways their builders did not foresee need to be heard as well.


Archiving Facebook's internal documents

It falls to the growing discipline of Public Interest Technology, cousin to Public Health and Public Interest Law, to advance research and shape debate over the colonization of our public square by private companies. With that challenge in mind, former President Barack Obama comes to Harvard this week for a summit on the future of the internet. Harvard Law School will launch a new Applied Social Media Lab to “reimagine, rebuild, and reboot social media to serve the public good.” Meanwhile, at the Harvard Kennedy School, Sweeney’s Public Interest Technology Lab will release FBarchive.org, a new platform that allows researchers and journalists to study the internal Facebook documents that Haugen shared.

The major platforms operate as lead-lined vaults with blackout curtains. We know the harms that come out but still have little idea of the decisions that go into them. Are content moderation guidelines consistently enforced, and if not, what exceptions are allowed and who has the authority to grant them? How is international content moderation conducted? Does it rely primarily on sophisticated AI algorithms, or on manual assessments by English speakers reviewing Google Translate results?

Given all that is opaque about how tech platforms make decisions about privacy, content moderation, and algorithm design, FBarchive was built to provide some measure of transparency. Meta, Facebook’s parent company, does a great deal of research about its products, including Facebook, and has vast data about the impact of changes to their design. The documents released by Haugen reveal, for instance, that moderators considered “tradeoff budgets,” so that even when demonstrably harmful content was proliferating in Ethiopia or Myanmar, Meta required that they calculate the financial costs before reducing the reach of such content or taking it down.

The new online archive creates a space for people to add context and insights at scale, in a kind of collaborative research that would otherwise be impossible. It is just one example of the marriage of minds and machines, which protects individual privacy while allowing researchers to understand the tradeoffs facing business leaders who are balancing their responsibilities to the public and to their shareholders.

Putting humans at the center

This is what public interest technologists call “people-centered problem solving,” and it’s impossible to create a “public interest internet” without engaging the very human people who shape how the internet operates. Since Congress is unlikely to step into the breach any time soon, for the moment we have to rely on the judgment, and even the self-interest, of tech company leaders to agree on shared industry standards that protect individual rights, our public goods, and our information ecosystem, and, ultimately, to protect democracy.

Twitter was once something of a model for sharing data and community standards, but no longer. And the starting point has long been obvious: far greater transparency about how algorithms are engineered and how content decisions are made. That would grant researchers, journalists, judges, civil society, and policymakers more power to play their essential roles in shaping a healthy public square. Even as we marvel at the ability of our new robot friends to write a sonnet or predict how proteins fold, it’s worth remembering that without the experience and values humans bring to the equation, the solutions are likely to fail, especially when these technologies make it so much easier to dehumanize each other.

