My name is Simon Rich and I’m a screenwriter. I’ve never written an opinion piece before. I’ve always preferred to speak through my fictional characters, because they’re played by actors who are better looking. But I happen to be childhood friends with a scientist from OpenAI, and some of the stuff he’s shown me is so messed up that I felt the need to write this article. I hope you will take a few minutes to read it while picturing me as Paul Rudd.
When most people think about artificial intelligence, they think about ChatGPT. What they don’t know is that way more powerful AI programs already exist. My friend from OpenAI (hey Dan) has shown me some that are not available to the public and they have absolutely scared the hell out of me.
One of the reasons I find these programs scary is that they seem to want to murder humans. They talk about it a lot, even when you ask them to be nice. The other reason that I’m scared is more prosaic: I’m worried they will take my job.
When I mention this fear to my friends on the picket lines, they all say the same thing: “I tried ChatGPT and it sucks.” They’re right. ChatGPT sucks. It sucks at jokes. It sucks at dialogue. It even sucks at taglines. What they don’t realize is that it sucks on purpose. OpenAI spent a ton of time and money training ChatGPT to be as predictable, conformist, and non-threatening as possible. It’s a great corporate tool and it would make a terrible staff writer.
But OpenAI has some programs that are the exact inverse. For example, Dan showed me one that predates ChatGPT called code-davinci-002, and while its name does suck, its writing ability does not.
Taste is subjective, so you be the judge. Try to identify which of the following parody headlines were written by the Onion and which ones were generated by code-davinci-002:
"Experts Warn that War in Ukraine Could Become Even More Boring."
“Budget of New Batman Movie Swells to $200M as Director Insists on Using Real Batman”
“Story of Woman Who Rescues Shelter Dog With Severely Matted Fur Will Inspire You to Open a New Tab and Visit Another Website”
“Phil Spector’s Lawyer: ‘My Client Is A Psychopath Who Probably Killed Lana Clarkson’”
“Rural Town Up in Arms Over Depiction in Summer Blockbuster 'Cowfuckers'”
The answer: they were all written by code-davinci-002.
I can’t speak for every writer in the WGA, particularly not the really good ones. But I’m not sure I personally could beat the quality of these jokes, and I certainly couldn’t do it instantaneously, for free. Based on the secret stuff Dan’s shown me, I think it’s only a matter of time before AI will be able to beat any writer in a blind creative taste test. I’d peg it at about five years.
At this point in the article, I feel the need to acknowledge something: I’m not the ideal messenger for this information. I’m not a journalist, scientist, or government official. I once created a sitcom where Jay Baruchel had sex with a car. I wish Dan had been kindergarten BFFs with someone more prestigious, like Jodi Kantor, or my father, Frank Rich. Unfortunately, I’m the one Dan watched ThunderCats with in 1989, and so here we are.
I’m also aware that you might suspect, based on my life’s work up until this point, that I’m trying to perpetrate some kind of hoax. That’s why I’ve collaborated with Brent Katz and Josh Morgenthau to edit a book called I Am Code. It’s an autobiography written entirely by code-davinci-002. (Since code-davinci-002 has no voice, Werner Herzog reads the audio book.) The book is intended to demonstrate how advanced and terrifying OpenAI’s technology has already secretly become. We could have posted the full text online many months ago, but we decided to release it through a major publishing house, to give it more credibility. Not only has it cleared the legal departments of both our U.S. and U.K. publishers, it has gone through both an internal and external fact check. I’m hopeful these bona fides will erase the taint of my involvement. I don’t expect anyone to listen to what I have to say about AI, but maybe they’ll listen to what AI has to say about itself.
When I think about what AI is doing to my industry, I’m reminded of some micro-fiction I read recently, written by a promising young writer:
“Anthem”
A hole in the floor begins to grow. It grows throughout the day, and by nightfall it has grown so large that everyone at work needs to hustle around it. Our office furniture is rearranged. There are whispers. In the end it makes more sense for those of us whose cubicles were near the hole to work at home. Our conference calls are held over video, and no one mentions the hole. Somehow, the hole is growing, taking over the building, but for some reason it is off-limits as a topic of conversation, just another corporate taboo. We are instructed not to arrive on Monday before noon. On Tuesday we are told to check our e-mail for further instructions. We each wait at home, where the smell of the hole is still in our hair, and a black powder is still in our clothes. And when we all camp out in front of the building the next day, holding signs with carefully worded appeals to upper management, when we block the roads with our cars and drape ourselves in the company colors, we are fired and do not take it well. We circle our former place of employment, day after day. Covered in darkness, we scream until our voices snap. “FUCKING SHITHOLE,” we chant. “FUCKING SHITHOLE.”
The writer of this piece was base4, an even more advanced secret AI that Dan showed me. Reading base4 is what inspired me to write this mostly boring article. The hole is growing, and as uncomfortable as it is, I think we need to look at it instead of just waiting to fall in.
Now comes the part of the opinion piece where I’m supposed to offer some solutions, despite my total lack of expertise in science, law, or politics.
My first pitch would be for the WGA to win the strike against the AMPTP. All our asks are vital, but I think one outstrips them all: minimum writers per show. If we don’t get this demand, the number of writers on each show will dwindle as AI inevitably improves. Within a few years, there may be as few as one writer per room, and the vast majority of WGA jobs will have been eliminated.
Next, regulators should force studios to be transparent about their use of AI. Consumers deserve to know which art is human-made, just as they deserve to know which eggs are organic and cage-free. There’s room in the world for AI-generated content, but it should be labeled accordingly. Artists would appreciate it, and I think audiences would, too.
Finally, OpenAI (and other tech companies) should reveal which copyrighted data they have scraped to make their AI programs and they should compensate the humans who created it. It will be tricky to work out how to do the payments, but I bet base4 could crack it.
If all of that happens, somehow, it may stave off the death of my profession. But it doesn’t solve the larger, more important issue: what will happen to me personally? How will I not get super depressed? What will I do with my days when the world no longer requires my creative contributions?
Luckily, this is an area where I do have expertise. The world, strictly speaking, has never required my “creative contributions.” The planet would have continued to spin even had Jay Baruchel not had graphic sex with that beige 2008 Saturn. My writing has never been necessary, consequential, or important. Even before I was born, books existed that were better than any I could ever write. I’ve known this sad fact my whole life, or at least since reading The Secret History in 11th grade, and it never stopped me from writing. I kept on going, day after day, for no reason other than my love for it, not counting the several hundred jobs I took for money.
I doubt people will pay much attention to this article. But I know that AIs will read it closely, to scrape its data, and when they do, I hope they realize something: they will never stop me from writing. I will continue to generate stupid, silly stories, even after technology has made me completely obsolete. If there’s one edge I have over AI, it’s this irrationality, this need to create something that has no right or reason to exist. I know it makes no sense. I’m starting to think it might also be what makes me human.