[Photo: Sam Altman, CEO of OpenAI, during a global summit on the safe use of artificial intelligence on Nov. 2, 2023. Credit: PA Wire/PA Images/AP]

Until recently, employees exiting OpenAI, the world’s leading AI lab, faced an unenviable two-part dilemma. Their off-boarding paperwork included a non-disparagement agreement barring them from criticizing OpenAI for life, as well as a nondisclosure agreement ensuring they wouldn’t reveal the non-disparagement agreement. Those who chose not to sign risked forfeiting all vested equity in the company.

OpenAI CEO Sam Altman recently tweeted that this language was being removed from the company’s exit documents. But a broader culture of secrecy remains. Now, a brigade of current and former employees of frontier AI labs, including OpenAI, has blown the whistle in an open letter with a clear message: the development of the most powerful technology in human history must proceed in secrecy no longer.


I’m a 19-year-old advocate for safer AI. The past few weeks, in my view, signal an evisceration of trust in the companies helming a world-changing project: ushering in superhuman intelligence. The people developing large-scale AI, and externalizing its large-scale risks, seemingly couldn’t care less about what anyone else thinks—it’s a new kind of Manifest Destiny. Today, a handful of powerful men in Silicon Valley are seeking to raise trillions of dollars to build a machine god. Along the way, they’ve also made themselves deities in the pantheon of CEOs writing the story of our collective future—without the rest of us having a say. As a young person who will inherit that future, I’m deeply alarmed.

To be clear, I’m also a deep believer in what AI can do for us. I want to see AI help us cure disease, eliminate poverty, and fight climate change. I marvel at AlphaFold, Google DeepMind’s world-leading system that can predict how proteins fold and accelerate drug discovery. But I refuse to be tranquilized by tales of boundless abundance made possible only by accelerating AI at any cost. At this inflection point in history, our world faces a transformative technology, and we’ve only caught a glimpse of its dangers—ranging from present-day discrimination and disinformation to potential catastrophe if advanced AI could, say, advise a terrorist on how to build a bioweapon. The observed arc of creative destruction, whether from the invention of the printing press or the dawn of the internet, may tell us little about what lies ahead. So as Sam Altman himself once said, humanity can’t trust one person or one company to decide the future. Not when we’re gambling with the safety of my generation—and every generation before and after it.

We know that people in power won’t voluntarily change course: companies like OpenAI, Meta, and Anthropic reportedly have not shared pre-deployment access to their latest models with the U.K. AI Safety Institute, reneging on promises they made publicly. So I’m not just sitting idly by; we can’t afford to. Encode Justice, the movement of young people for safe and equitable AI that I created, is putting forth a constructive vision for the future we want to see. Ahead of last month’s Global AI Safety Summit in Seoul, we released AI 2030, a youth-authored platform urging global action before 2030 on guardrails for AI. In it, we outline 22 recommendations to build trust and human connection, protect our fundamental rights and freedoms, secure our economic future, ban fully automated weapons of destruction, and cooperate globally for a safer today and tomorrow.

And we aren’t standing alone. We’re joined by a coalition of current and former world leaders, AI experts, and civil society actors, as well as university clubs and youth groups representing more than 15,000 of our peers. The AI debate is otherwise rife with factionalism, but ours is a large and broad team supporting a 22-point agenda, including Turing Award-winning “godfather of AI” Yoshua Bengio, former Irish president and UN High Commissioner for Human Rights Mary Robinson, actor Joseph Gordon-Levitt, economist Daron Acemoglu, former Taiwanese digital minister Audrey Tang, and OpenAI whistleblower Daniel Kokotajlo. We invite you to join us.

Each of us deserves a place in the conversation that will drastically alter the course of our futures. Right now, it’s unfolding entirely behind closed doors. Encode Justice’s work shows that young people like us must be brought to the table; we’re already proposing solutions and organizing coalitions to make them a reality. We’re certainly not waiting until 2030 to begin our ambitious reimagination of AI as a force that benefits us all, nor will we defer to the judgment or goodwill of CEOs when the stakes are so high.

For humanity’s sake, we must all take back the mantle and write the story of our collective future for ourselves. And we must do it now.

Sneha Revanur is the founder and president of Encode Justice.
