If you’ve spent any time on Twitter lately, you may have seen a viral black-and-white image depicting Jar Jar Binks at the Nuremberg Trials, or a courtroom sketch of Snoop Dogg being sued by Snoopy.
These surreal creations are the products of Dall-E Mini, a popular web app that creates images on demand. Type in a prompt, and it will rapidly produce a handful of cartoon images depicting whatever you’ve asked for.
More than 200,000 people are now using Dall-E Mini every day, its creator says—a number that is only growing. A Twitter account called “Weird Dall-E Generations,” created in February, has more than 890,000 followers at the time of publication. One of its most popular tweets so far is a response to the prompt “CCTV footage of Jesus Christ stealing [a] bike.”
If Dall-E Mini seems revolutionary, it’s only a crude imitation of what’s possible with more powerful tools. As the “Mini” in its name suggests, the tool is effectively a copycat version of Dall-E—a much more powerful text-to-image tool created by one of the most advanced artificial intelligence labs in the world.
That lab, OpenAI, boasts online of (the real) Dall-E’s ability to generate photorealistic images. But OpenAI has not released Dall-E for public use, due to what it says are concerns that it “could be used to generate a wide range of deceptive and otherwise harmful content.” It’s not the only image-generation tool that’s been locked behind closed doors by its creator. Google is keeping its own similarly powerful image-generation tool, called Imagen, restricted while it studies the tool’s risks and limitations.
The risks of text-to-image tools, Google and OpenAI both say, include the potential to turbocharge bullying and harassment; to generate images that reproduce racism or gender stereotypes; and to spread misinformation. They could even reduce public trust in genuine photographs that depict reality.
Text could be even more challenging than images. OpenAI and Google have each also developed synthetic text generators that chatbots can be built on, which they have likewise chosen not to release widely to the public, amid fears that they could be used to manufacture misinformation or facilitate bullying.
Read more: How AI Will Completely Change the Way We Live in the Next 20 Years
Google and OpenAI have long described themselves as committed to the safe development of AI, pointing to, among other things, their decisions to keep these potentially dangerous tools restricted to a select group of users, at least for now. But that hasn’t stopped them from publicly hyping the tools, announcing their capabilities, and describing how they made them. That has inspired a wave of copycats with fewer ethical hang-ups. Increasingly, tools pioneered inside Google and OpenAI have been imitated by knockoff apps that are circulating ever more widely online, and contributing to a growing sense that the public internet is on the brink of a revolution.
“Platforms are making it easier for people to create and share different types of technology without needing to have any strong background in computer science,” says Margaret Mitchell, a computer scientist and a former co-lead of Google’s Ethical Artificial Intelligence team. “By the end of 2022, the general public’s understanding of this technology and everything that can be done with it will fundamentally shift.”
The copycat effect
The rise of Dall-E Mini is just one example of the “copycat effect”—a term used by defense analysts to understand the way adversaries take inspiration from one another in military research and development. “The copycat effect is when you see a capability demonstrated, and it lets you know, oh, that’s possible,” says Trey Herr, the director of the Atlantic Council’s cyber statecraft initiative. “What we’re seeing with Dall-E Mini right now is that it’s possible to recreate a system that can output these things based on what we know Dall-E is capable of. It significantly reduces the uncertainty. And so if I have resources and the technical chops to try and train a system in that direction, I know I could get there.”
That’s exactly what happened with Boris Dayma, a machine learning researcher based in Houston, Texas. When he saw OpenAI’s descriptions online of what Dall-E could do, he was inspired to create Dall-E Mini. “I was like, oh, that’s super cool,” Dayma told TIME. “I wanted to do the same.”
“The big groups like Google and OpenAI have to show that they are on the forefront of AI, so they will talk about what they can do as fast as they can,” Dayma says. “[OpenAI] published a paper that had a lot of very interesting details on how they made [Dall-E]. They didn’t give the code, but they gave a lot of critical elements. I wouldn’t have been able to develop my program without the paper they published.”
In June, Dall-E Mini’s creators said the tool would be changing its name to Craiyon, in response to what they said was a request from OpenAI “to avoid confusion.”
Advocates of restraint, like Mitchell, say it’s inevitable that accessible image- and text-generation tools will open up a world of creative opportunity, but also a Pandora’s box of awful applications—like depicting people in compromising situations, or creating armies of hate-speech bots to relentlessly bully vulnerable people online.
Read more: An Artificial Intelligence Helped Write This Play. It May Contain Racism
But Dayma says he is confident that the dangers of Dall-E Mini are negligible, since the images it generates are nowhere near photorealistic. “In a way it’s a big advantage,” he says. “I can let people discover that technology while still not posing a risk.”
Some other copycat projects come with even more risks. In June, a program named GPT-4chan emerged. It was a text-generator, or chatbot, that had been trained on text from 4chan, a forum notorious for being a hotbed of racism, sexism and homophobia. Every new sentence it generated sounded similarly toxic.
Just like Dall-E Mini, the tool was created by an independent programmer but was inspired by research at OpenAI. Its name, GPT-4chan, was a nod to GPT-3, OpenAI’s flagship text-generator. Unlike the copycat, GPT-3 was trained on text scraped from large swathes of the internet, and OpenAI has granted access to it only to select users.
A new frontier for online safety
In June, after GPT-4chan’s racist and vitriolic text outputs attracted widespread criticism online, the app was removed from Hugging Face, the website that hosted it, for violating its terms and conditions.
Hugging Face makes machine learning-based apps accessible through a web browser. The platform has become the go-to location for open source AI apps, including Dall-E Mini.
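To give a sense of how such apps reach the browser: below is a minimal, hypothetical sketch using Gradio, the open-source framework Hugging Face uses for many of its hosted demos. The generate function here is a placeholder standing in for a real text-to-image model; this is not Dall-E Mini’s actual code.

```python
# Hypothetical sketch of a browser-based machine learning demo built with Gradio,
# the open-source framework behind many Hugging Face-hosted apps.
import gradio as gr
from PIL import Image


def generate(prompt: str) -> Image.Image:
    # Placeholder: a real app would run a text-to-image model on the prompt here.
    return Image.new("RGB", (256, 256), color="gray")


demo = gr.Interface(
    fn=generate,                                # function called on each submission
    inputs=gr.Textbox(label="Prompt"),          # text box for the user's prompt
    outputs=gr.Image(label="Generated image"),  # displays the returned image
    title="Hypothetical text-to-image demo",
)

if __name__ == "__main__":
    demo.launch()  # serves the interface locally in a web browser
```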
Clement Delangue, the CEO of Hugging Face, told TIME that his business is booming, and heralded what he said was a new era of computing with more and more tech companies realizing the possibilities that could be unlocked by pivoting to machine learning.
But the controversy over GPT-4chan was also a signal of a new, emerging challenge in the world of online safety. Social media, the last online revolution, made billionaires out of platforms’ CEOs, and also put them in the position of deciding what content is (and is not) acceptable online. Questionable decisions have tarnished those CEOs’ once-glossy reputations. Now, smaller machine learning platforms like Hugging Face, with far fewer resources, are becoming a new kind of gatekeeper. As open-source machine learning tools like Dall-E Mini and GPT-4chan proliferate online, it will be up to their hosts, platforms like Hugging Face, to set the limits of what is acceptable.
Delangue says this gatekeeping role is a challenge that Hugging Face is ready for. “We’re super excited because we think there is a lot of potential to have a positive impact on the world,” he says. “But that means not making the mistakes that a lot of the older players made, like the social networks – meaning thinking that technology is value neutral, and removing yourself from the ethical discussions.”
Still, like the early approach of social media CEOs, Delangue hints at a preference for light-touch content moderation. He says the site’s current policy is to politely ask creators to fix their models, and to remove them entirely only as an “extreme” last resort.
But Hugging Face is also encouraging its creators to be transparent about their tools’ limitations and biases, informed by the latest research into AI harms. Mitchell, the former Google AI ethicist, now works at Hugging Face focusing on these issues. She’s helping the platform envision what a new content moderation paradigm for machine learning might look like.
“There’s an art there, obviously, as you try to balance open source and all these ideas around public sharing of really powerful technology, with what malicious actors can do and what misuse looks like,” says Mitchell, speaking in her capacity as an independent machine learning researcher rather than as a Hugging Face employee. She adds that part of her role is to “shape AI in a way that the worst actors, and the easily-foreseeable terrible scenarios, don’t end up happening.”
Mitchell imagines a worst-case scenario where a group of schoolchildren train a text-generator like GPT-4chan to bully a classmate over text messages, direct messages, Twitter, Facebook, and WhatsApp, to the point where the victim decides to end their own life. “There’s going to be a reckoning,” Mitchell says. “We know something like this is going to happen. It’s foreseeable. But there’s such a breathless fandom around AI and modern technologies that really sidesteps the serious issues that are going to emerge and are already emerging.”
The dangers of AI hype
That “breathless fandom” was encapsulated in yet another AI project that caused controversy this month. In early June, Google engineer Blake Lemoine claimed that one of the company’s chatbots, called LaMDA, based on the company’s synthetic-text generation software, had become sentient. Google rejected his claims and placed him on administrative leave. Around the same time, Ilya Sutskever, a senior executive at OpenAI, suggested on Twitter that computer brains were beginning to mimic human ones. “Psychology should become more and more applicable to AI as it gets smarter,” he said.
In a statement, Google spokesperson Brian Gabriel said the company was “taking a restrained, careful approach with LaMDA to better consider valid concerns on fairness and factuality.” OpenAI declined to comment.
For some experts, the discussion over LaMDA’s supposed sentience was a distraction—at the worst possible time. Instead of arguing over whether the chatbot had feelings, they argued, AI’s most influential players should be rushing to educate people about the potential for such technology to do harm.
“This could be a moment to better educate the public as to what this technology is actually doing,” says Emily Bender, a linguistics professor at the University of Washington who studies machine learning technologies. “Or it could be a moment where more and more people get taken in, and go with the hype.” Bender adds that even the term “artificial intelligence” is a misnomer, because it is being used to describe technologies that are nowhere near “intelligent”—or indeed conscious.
Still, Bender says that image-generators like Dall-E Mini may have the capacity to teach the public about the limits of AI. It’s easier to fool people with a chatbot, because humans tend to look for meaning in language, no matter where it comes from, she says. Our eyes are harder to trick. The images Dall-E Mini churns out look weird and glitchy, and are certainly nowhere near photorealistic. “I don’t think anybody who is playing with Dall-E Mini believes that these images are actually a thing in the world that exists,” Bender says.
Despite the AI hype that big companies are stirring up, crude tools like Dall-E Mini show how far the technology has to go. When you type in “CEO,” Dall-E Mini spits out nine images of a white man in a suit. When you type in “woman,” the images all depict white women. The results reflect the biases in the data that both Dall-E Mini and OpenAI’s Dall-E were trained on: images scraped from the internet. That inevitably includes racist, sexist and other problematic stereotypes, as well as large quantities of porn and violence. Even when researchers painstakingly filter out the worst content (as both Dayma and OpenAI say they have done), more subtle biases inevitably remain.
Read more: Why Timnit Gebru Isn’t Waiting for Big Tech to Fix AI’s Problems
While the AI technology is impressive, these kinds of basic shortcomings still plague many areas of machine learning. And they are a central reason that Google and OpenAI are declining to release their image and text-generation tools publicly. “The big AI labs have a responsibility to cut it out with the hype and be very clear about what they’ve actually built,” Bender says. “And I’m seeing the opposite.”
Write to Billy Perrigo at billy.perrigo@time.com