
A lot of the best sci-fi movies have nothing to do with science. In Back to the Future, for instance, Doc Brown builds a time machine by retrofitting a DeLorean with a Flux Capacitor. Or in Star Wars, an accomplished Jedi can float boulders by using The Force. It makes for great cinema, but it’s not based in reality.

Movies about AI have traditionally fallen into this same category—fascinating but fantastical. However, that’s starting to change—and fast. Obviously, bionic arch-villains like the Terminator and Ultron are still the stuff of fantasy. But AI is now a real-life technology, and love it or hate it, most everyone agrees it will make a massive impact on all of our real lives.

A lot of that impact will be incredibly positive. AI has the potential to supercharge medicine, equalize education, and solve problems we haven’t even thought of yet. But—and this is a big “but”—there could also be some serious downsides.

As actors, we’ve gotten a crash course in those downsides recently because, well, AI is currently threatening our livelihoods. And while the entertainment industry is one of the first to feel the AI tremors, it won’t be the last. But if you start digging into AI research today, you’ll quickly realize that widespread job loss is just one of the big problems on the horizon. There are other risks that are even more alarming.

Now, there are some in Silicon Valley who dismiss the potential downsides of AI as science fiction. It’s probably not a coincidence that the loudest of these voices are positioned to make ungodly amounts of money in the AI business. But, if you listen to the preponderance of the field’s leading engineers, academics, and policymakers, you’ll hear a very different story. You’ll hear warnings that get pretty dire. The real potential downsides of AI are more down-to-earth than anything you’ve seen in a movie. And in many ways, that makes them more dangerous.

Don’t take our word for it. In a recent open letter, over a hundred current and former employees of OpenAI, Google DeepMind, Anthropic, Meta, and xAI said: “We believe that the most powerful AI models may soon pose severe risks, such as expanded access to biological weapons and cyberattacks on critical infrastructure.” A statement signed by a wide range of top AI scientists, business leaders, and luminaries warned: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Frighteningly enough, these are not fringe opinions. It’s common to hear grave concerns like these raised amongst the people building and studying this technology at the highest level today.

And so, we were happy and grateful when lawmakers in California, led by Senator Scott Wiener, stepped up to create Senate Bill 1047 (SB 1047). This groundbreaking legislation would have required big tech companies to conduct safety testing on their most advanced and expensive AI models. It would have held those companies liable for major harms, protected whistleblowers, and mandated proper cybersecurity measures to prevent this powerful tech from falling into the hands of geopolitical adversaries and terrorists.

Unfortunately, Gov. Gavin Newsom vetoed the bill on September 29.

Let that sink in. This bill was supported by both startup founders and big-time CEOs, Democrats and Republicans, the California Federation of Labor Unions, and even Elon Musk, who usually opposes regulation. It passed with overwhelming majorities in both the State Senate and Assembly, and was approved by 77% of Californians in polls. Why would the Governor refuse to sign it into law?

Follow the money. Some of the biggest tech companies and venture capital firms hired teams of expensive lobbyists to fight this bill. Their main argument was that SB 1047 would stifle “innovation.” But this is a misleading half-truth. New safety regulations would stimulate innovation in safety, forcing companies to innovate new ways to protect the public from harm. But the investor class doesn’t like this kind of innovation, because they want their portfolio companies to stay focused on innovating new ways to turn a profit. In his veto statement, Newsom echoed these lobbyists’ concerns over “innovation.”

It’s the same anti-regulation rhetoric we’ve heard over and over again in any number of industries. The chemical industry, for example, killed legislation to regulate PFAS—otherwise known as “forever chemicals”—by claiming they were an economic driver. PFAS seemed like a miracle to some at first, with non-stick pans, efficient packaging, and water-resistant clothing. But now they’re in our water, our soil, and our bodies, causing devastating and irreversible health problems. No amount of economic upside can put that genie back in the bottle. We can’t let history repeat itself with AI.

Of course, Newsom couldn’t be seen as kowtowing to Big Tech, and so he publicly claimed that the harms SB 1047 would aim to prevent are not based in “science and fact.” But his argument ignores a large and growing contingent of computer scientists in this space. In effect, he’s saying we shouldn’t make laws to prevent catastrophes until they’ve already occurred (again, the same was said about forever chemicals). Newsom insists that he doesn’t believe in a “wait and see” approach. But this is doublespeak: he’s saying one thing and doing exactly the opposite.

Will Gov. Newsom’s veto lead directly to catastrophic harm from AI in the immediate future? Maybe not. But are we all substantially less safe? Probably so. When you drive a car, you don’t usually get in an accident, but you wear a seatbelt just in case. SB 1047 was a good first step towards building proverbial seatbelts for AI. Right now, we are getting onto a high-speed freeway, and we are not at all buckled up.

The good news about AI safety is that more and more people are starting to pay attention. But we have to make sure that knowledge leads to action. Let this veto serve as a call for activists to assemble. We’re not going away. In fact, we’re just getting started. Next time legislation like this comes up for a vote, we will fight in greater numbers to make our government work for everyone, not just for big business. Let’s all stay informed, speak out, and demand better from our leaders. Gov. Newsom may have failed us this time, but if we all stand together, we won’t let it happen again.

Mark Ruffalo is an actor, filmmaker, and climate and social justice activist. He was a 2023 TIME Earth Awards Honoree alongside Gloria Walton. Joseph Gordon-Levitt is an actor, filmmaker, and entrepreneur. He has always been passionate about the intersection of media and technology, co-founding the two-time Emmy-winning online community for creative collaboration, HITRECORD.
