A version of this article was published in TIME’s newsletter Into the Metaverse. Subscribe for a weekly guide to the future of the Internet. You can find past issues of the newsletter here.
Think of your dream house. Maybe it has high, arching ceilings, a roaring fireplace and expansive windows that look out onto a placid lake. Or maybe it’s a breathing metallic dome that sits on a fiery planet and is filled with alien butlers. What if you could write a paragraph about those houses, and then immediately enter virtual versions of them and bring all your friends?
Thanks to recent developments in AI like ChatGPT and DALL-E, a future in which users will be able to create their own strange, immersive worlds is not far off. In the fall, three new generative models were announced: the text-to-3D generators GET3D from Nvidia and DreamFusion from Google, and the text-to-video model Make-A-Video from Meta.
And metaverse builders are already using text generators like ChatGPT—which responds to text prompts with startling poise and intelligence—and visual generators like DALL-E—which creates images out of text prompts—to ideate new worlds and designs. Metaverse industry insiders say that these AI technologies will be crucial to building virtual worlds that are detail-rich and customizable—that they hold the key to creating metaverses that regular people will actually want to spend time in.
“We’re able to fill the internet with interesting stuff because everybody is capable of taking a picture, recording a video, or writing words,” says Rev Lebaredian, VP for Omniverse and simulation technology at the chipmaker Nvidia. “If we are going to create a 3-D internet, then you absolutely have to have the people who are participating in it creating content as well—and the only hope we have of making that happen is if AI can help us.”
Changing workflows
While AI tools can't yet build metaverse worlds outright, they are already playing a crucial, if slightly mundane, role in their creation. ChatGPT, for example, is being used by metaverse builders to brainstorm ideas, write code, and draft the text of decks and emails.
While researching this story, I emailed Sean Ellul, the co-founder of the 3D development studio Metaverse Architects, to ask him if he's been impacted by ChatGPT. He responded with a well-written five-paragraph email about how he's been using the technology. But there was a catch: in the fourth paragraph, the email revealed that it had actually been written by ChatGPT itself. Ellul had punched the following prompt into the service and then sent over the result (with minimal editing):
“Write an email to Andrew, from TIME, about how at the company Metaverse Architects we are using chat GPT to brainstorm code, prepare articles and ideate new projects. We even use it to write emails, such as this one!”
The ensuing email was staid, yet completely believable and informative. It was proof of the baseline powers of ChatGPT and the ways in which Ellul has implemented it into his daily processes. Ellul says he uses ChatGPT to tweak design ideas, solicit marketing techniques, create architectural blueprints, and handle many tasks in between.
“I can’t stress how helpful this has been to us,” Ellul says.
When I ask Ellul, "How many times a day do you use ChatGPT?" he responds, "How many times a day do you search something on Google?" The AI has become completely integrated into his workflow and the way he receives information, he says.
Speaking characters and worlds into existence
AI services are having an outsize impact on metaverse creation in particular. Virtual worlds must be filled with rich detail in their landscapes, objects and architecture. AI can use prompts to build those surroundings much faster than a human can by hand. Ellul compares the process to an enhanced version of building a Pinterest mood board: you search for an aesthetic or design, home in on an iteration you particularly like, and slowly build out an overarching vision.
So without needing a background in design or architecture, AI users will eventually be able to create virtual 3D environments: first rooms, then buildings, then entire worlds. “AI will play a major part in being able to have these generative experiences that can be created on the fly according to the needs, interests and desires of the user,” Ellul says.
Metaverses will also be filled with virtual people, just like video games are filled with non-player characters (NPCs). But whereas NPCs follow pre-set scripts, AI characters will be able to respond to you organically, just like ChatGPT does to your queries.
These sorts of AI characters already exist in text form: the website character.ai, for example, allows you to have conversations with AI versions of Elon Musk, Socrates and Billie Eilish. It's easy to imagine characters like these placed into a virtual world, where they could serve as tour guides (like an evolution of Microsoft's Clippy), quest companions or nemeses.
Ellul says that his company is now generating new characters for games much faster than before. For example, he shows me the image of a woman whose features appear almost photographic. He created her by blending nine different human faces on the text-to-image AI generator Midjourney, tweaking the results bit by bit. Once he has a face he's satisfied with, he goes over to ChatGPT and types in: "generate a backstory for a 24-year-old woman from a tiny village in a fantasy setting who works at an inn." Within seconds, the AI has created an entire history of Elspeth, a woman who "worked as a farmhand, a blacksmith's apprentice, and even as a barmaid in a bustling city town. But no matter where she went, she always felt like something was missing." (All in all, fairly generic, but not a bad starting point for a fairy tale.)
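For the programmatically inclined, the backstory step of this workflow can be scripted rather than typed into the chat window. Below is a minimal Python sketch: the `backstory_prompt` helper is hypothetical (not something Ellul described using), and the commented-out API call assumes the OpenAI Python SDK, whose exact call shape varies by version.

```python
# Hypothetical helper that assembles a character-backstory prompt
# in the style of the one Ellul typed into ChatGPT.
def backstory_prompt(age: int, home: str, job: str, setting: str = "fantasy") -> str:
    """Build a backstory prompt from a few character attributes."""
    return (
        f"Generate a backstory for a {age}-year-old woman from {home} "
        f"in a {setting} setting who works at {job}."
    )

prompt = backstory_prompt(24, "a tiny village", "an inn")
print(prompt)

# Sending the prompt to a chat model requires an API key; the call below
# is illustrative only and may differ across SDK versions:
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(
#     model="gpt-3.5-turbo",
#     messages=[{"role": "user", "content": prompt}],
# )
# print(reply.choices[0].message.content)
```

Keeping the prompt builder separate from the API call makes it easy to generate backstories for whole casts of characters in a loop, each with different attributes.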
To create Elspeth as a fully fleshed-out 3D figure that moves fluidly and conforms to physics, however, requires tools that are still being developed. Companies like Nvidia and Google have announced text-to-3D models that are still in their early stages and not quite ready for widespread public consumption. Researchers from Nvidia, for example, announced Magic3D in November, an AI that can create 3D models from prompts like "A blue poison-dart frog sitting on a water lily." But generating such a model still takes 40 minutes, and the result is limited in its level of detail.
Lebaredian says that at the moment, Nvidia is training AIs in its 3D simulation engine Omniverse, so that they learn rapidly how to label and build things, move through spaces, and respond to user queries. When asked how soon a regular user would be able to build the lake house I dreamed up above, he says that progress in AI doesn’t move linearly: that there are lulls and explosions that make it difficult to predict. “We’re pushing the limits on this, and it’s happening very fast,” he says. “I would say ten years from now, I’m fairly certain that most people are going to be able to create high quality 3D content just by talking to the computer. And I’m bullish that it’s going to happen much earlier.”
Danger signs
There are many frightening aspects to this rapid development, including the idea of inhabiting immersive spaces in which you don’t know if you’re talking to a real person or a machine. It’s easy to imagine people getting catfished much more easily, taking bad advice from AIs or developing emotional dependencies on them. Ellul acknowledges that there are serious risks in this area that need to be considered carefully. “Deepfakes are probably going to hit a whole other level now,” he says. “This is where potential misuse starts: We’re starting to realize that you can just create identities on the fly.”
When my colleague Billy Perrigo asked ChatGPT directly about the dangers of its human-like abilities, the AI responded: “It’s important to remember that we are not human, and we should not be treated as such. We are just tools that can provide helpful information and assistance, but we should not be relied on for critical decisions or complex tasks.”
AIs harbor all sorts of biases based on the data they were trained on, and have produced a slew of toxic content, including misinformation and hate speech. And a recent TIME investigation found that OpenAI, the company that created ChatGPT and DALL-E, used outsourced Kenyan workers who were paid less than $2 an hour to review toxic content in order to train its algorithms.
Still, AI companies are barreling full steam ahead, buoyed by billions of dollars in investments. OpenAI, which is receiving a multi-billion dollar investment from Microsoft, is developing its next-generation AI called GPT-4, which could be released as soon as this quarter and could include both image and text generation, the New York Times reports. As OpenAI's tools gain more and more usage, the ways in which the company itself deals with ethical issues will become more and more important, in the same way that Facebook's speech policies played a massive role in global politics over the last decade.
“I don’t disagree that more effective technology would lead to bad players being more effective,” Ellul says. “The more connected our world becomes, the easier it is for bad and good actors to have an influence.”
It remains to be seen whether AI technologies will be a tool of ultimate empowerment—turning idea-generators into virtual creators and writers into engineers—or a vehicle for identity theft and abuse. “It’s important to really educate ourselves,” Ellul says, “So we don’t either stifle these technologies when it comes to the benefits they can give us, or enable them too far when it comes to potential risks.”