Sci-Fi Publishers Are Upset Over Heaps of Unwanted AI-Generated Pitches


A surge in AI-generated spam pitches has forced a prestigious publisher of science fiction short stories to close its submissions, with some joking about the inherent irony, given that the genre has long explored the perils of artificial intelligence.

Neil Clarke, the editor-in-chief of Clarkesworld, an American online Sci-Fi magazine that usually welcomes stories from new writers, shared a blog post addressing an increase in “spammy submissions.” While the pitches come from real people, Clarke said the work itself is not authentic: it is being submitted by opportunists hoping to cash in on an easy paycheck.

“We had been tracking the problem on a daily basis and were well aware of how quickly the situation was escalating,” Clarke tells TIME. “By the time we closed submissions, we had received 700 legitimate and 500 machine-written submissions in February.”

Sci-Fi publications have reportedly borne the brunt of the deluge of AI-generated submissions, according to TechCrunch. The genre tends to offer writers higher rates: the Science Fiction & Fantasy Writers Association requires qualifying publishers to pay a minimum of $0.08 per word, a standard that doesn’t apply to other genres.

The number of rejections Clarkesworld has issued has surged since the release of AI language models like ChatGPT in late 2022. These submissions are not coming from within the Sci-Fi community, Clarke says, but from opportunists encouraged by online tutorials on how to use ChatGPT for profit.

“Towards the end of 2022, there was another spike in plagiarism and then ‘AI’ chatbots started gaining some attention, putting a new tool in their arsenal and encouraging more to give this ‘side hustle’ a try,” Clarke wrote in his blog post. “It quickly got out of hand.”

Created by the San Francisco-based firm OpenAI, ChatGPT is an AI text generator that quickly became popular after its release. By January, ChatGPT had reached an estimated 100 million monthly users, who can ask it simple questions or have it complete lengthier writing projects such as essays and articles. Generative AI may have captured the minds and wallets of Silicon Valley, but its outputs often contain factual errors and bigotry.


OpenAI is trying to mitigate the risks of its own product by “watermarking” generated text so it is easier to identify. The technique works by tweaking the word choices ChatGPT makes so that the changes are imperceptible to the user but statistically predictable to anyone looking for signs of AI generation.

“We want it to be much harder to take a GPT output and pass it off as if it came from a human,” said OpenAI guest researcher Scott Aaronson during a lecture at The University of Texas at Austin. “This could be helpful for preventing academic plagiarism, obviously, but also, for example, mass generation of propaganda,” Aaronson said.

But in the meantime, Clarke said, weeding out machine-written work will require staff to review submissions manually, an expense few in the short fiction industry can afford. “I’ve reached out to several editors and the situation I’m experiencing is by no means unique,” he wrote.

Clarke says his biggest concern is how this will impact new writers who can be “buried under the noise of machine-generated submissions.”

“Many of the solutions that people have suggested would also disproportionately impact new or foreign authors producing legitimate work. These are not communities we should be leaving behind for the sake of convenience,” he adds.

Some companies are embracing AI-generated books. More than 200 e-books listed in Amazon’s Kindle store had ChatGPT as an author or co-author as of mid-February, Reuters reported. That includes a book titled How to Write and Create Content Using ChatGPT.

Clarke says AI-generated stories will remain banned at Clarkesworld until the legal and ethical concerns around the technology and copyright ownership are resolved.


Write to Armani Syed at armani.syed@time.com