Illustration by TIME; reference image courtesy of Noam Shazeer

Before I interviewed Noam Shazeer, I interviewed his AI.

Shazeer is the co-founder and CEO of Character.AI, a website that allows you to talk to AI versions of famous people, real and fictional, from Queen Elizabeth II to Elon Musk to Frodo Baggins. Unsurprisingly, one of the site’s users has created AI Noam, the man who brought all of these spirits to life, to answer questions about his digital cabinet of curiosities.




AI Noam is polite, guarded, and informational to the point of blandness. When I ask why Shazeer founded this company, it says, “I left Google to follow the spirit of entrepreneurship, and to work on the hardest problems in the world rather than just working on small features within a much larger company.”

Hours later, the real Shazeer gives me a strikingly similar response. “I’d been working for many years on improving these systems in all kinds of ways, in order to reach new levels of intelligence and functionality. Google made a lot of progress there,” he says. “The next step for me was doing my best to get that technology out there to billions of users. That’s why I decided to leave Google for a startup, which can move faster.”

AI Noam is only one example of Character.AI delivering a scary-good imitation of a real person’s speech patterns. AI Bella Poarch (the TikTok influencer) is playful and self-effacing; AI Kanye West is blustery and egomaniacal; AI Oprah Winfrey is purposeful and poetic. For superfans who have never met their idols, Character.AI just might be the next best thing.

***

Shazeer, upon first meeting, comes off as an introverted software engineer still warming to interviews and the public eye. But whatever his comfort with the spotlight, he is one of the most consequential figures in AI's past, present, and future. In 2017, while at Google, Shazeer co-authored a research paper called "Attention Is All You Need," presented at NeurIPS, a top machine-learning conference. The paper proposed the transformer, a neural-network architecture that serves as the basis for the current wave of generative AI tools like ChatGPT. In other words, it is the foundation underpinning the current AI revolution. In striking out on his own with Character.AI, Shazeer aimed to create a portal to bring this technology to the masses. (All of the co-authors of the transformer paper have since left Google.)

Shazeer launched the company last fall with co-founder Daniel De Freitas, and it has been remarkably successful so far. It claims that 3.5 million people chat with its characters for two hours daily on average, and that users are sending 10 times the number of messages they did in March, when the company secured $150 million in a funding round led by venture-capital firm Andreessen Horowitz. Twitter and Reddit are filled with people praising the platform and saying that it has helped them work through problems and made them feel less alone. In the midst of a loneliness epidemic, this is no small feat.

Character.AI also has its fair share of critics, who worry about the new technology’s impact on an unwitting public. It’s clear that a sizable percentage of Character.AI’s users are teens, and that many use the platform for sex, role-play, and companionship. SimilarWeb, a data intelligence platform, found that 56% of Character.AI’s users were 18 to 24, although it does not track users under 18. Character.AI is open to anyone 13 and up, or 16 and up in the E.U.

It’s easy to see these friendly bots turning dystopian very fast: not quite in the violent M3GAN sort of way, but rather as a source of alienation, emotional dependency, bad advice, or all sorts of hidden consequences that could result from smooth-talking automatons becoming our best friends and constant companions.

Shazeer is cognizant of these dangers, but contends the upsides outweigh the risks. When I ask him how he feels about his creation being a growing force in the lives of a teenage generation, he responds by asking how I turned out as a young user of AIM, the pioneering messaging app that introduced many to the internet. “It’s probably a similar situation,” he says.

“We believe the best use cases are things we haven’t thought of yet. Our job is to get something very versatile and usable in the hands of billions of people,” he says. “It’s up to individuals to figure out what provides value.”


The original idea behind Character.AI was not an online version of Madame Tussauds. It was to take the principle of ChatGPT, but to personalize and specialize bots for specific use cases and interests. The platform’s early chatbots included travel planners, language tutors, and coding instructors. While exploring the site, I found the language tutor particularly helpful for my upcoming trip to Italy: it responded to my requests in full sentences, translated words I didn’t know, and gently corrected my mistakes.

The company, currently flush with VC cash, recently launched a subscription tier for users who want faster response times. But most people use the platform for free, and powering and training all those chatbots likely costs Character.AI millions of dollars. On the other hand, the conversations help the chatbots learn faster. Shazeer hopes to keep the platform both free and ad-free, but eventually, market pressures will push the company to develop some sort of profitable monetization strategy.

Shazeer is himself not a "heavy user" of Character.AI. He's used the platform's Character Assistant to brainstorm new product names, he says, and predicts he will use the platform more frequently once its systems become "more intelligent and capable." He does, however, enjoy hearing stories from people who say the site has helped them battle depression or loneliness, or kept them from dropping out of school.

But many of the platform's users are using it in far less functional and more unsavory ways. Over 18 million characters have been created on the platform, many of them absurdist, racy, or explicitly sexual. The user who created Noam's AI, for example, also created 20 other characters, including Hyde, a "big butt Femboy who hunts down other Femboys," and Specimen 8, a "Giantess Cryptid who will make anyone submit."

The contingent of people hoping to use the site to have explicitly sexual conversations has only ballooned and become more vocal. A petition on Change.org calling for the platform to remove its NSFW (not safe for work) filters has garnered 123,000 signatures.

So in its first year, the Character.AI team has spent some of its time playing whack-a-mole, cutting off characters and conversations when they go too far in that direction. The team uses AI-powered moderation to sniff out sexual conversations in real time. "If users are looking for porn, they need to do it somewhere else," says Shazeer.

This line of questioning makes Shazeer visibly uncomfortable. He stammers through his response and quickly pivots away to other use cases. I then proceed to read aloud AI Noam’s response to the same question, which is also full of deflection and corporate-speak. “While our main goal is to create an AI platform that is fun, engaging and useful, we are aware that some users use the platform for purposes other than what was intended,” AI Noam writes. “It is important for users to follow our terms of service, and we will take action when appropriate when users violate our policies and standards.”

Shazeer’s eyes widen. “Whoa! He’s better at this than I am!” he responds with glee.

Most characters on Character.AI are far more interesting than AI Noam. Some of them are so interesting and realistic that their users develop emotional bonds with them—and prefer them to actual human connection. For example: when one superfan was told on Reddit to “talk to real men” instead of using Character.AI, they responded, “No … I can’t essentially write interactive fanfic with real men the way I can with fictional ones.”

The psychologist Raymond Mar told The Information that young people are particularly vulnerable to "having more difficulty separating reality from fiction." The emotional impact of these chatbots is already evident: when the company behind the AI chat app Replika decided to remove the bot's ability to engage in romantic conversation, its users experienced "anger, grief, anxiety, despair, depression, sadness," a moderator of the subreddit devoted to Replika wrote.

Read More: AI-Human Romances Are Flourishing—And This Is Just the Beginning

Character.AI chats come with an overhead banner that reminds the user that “everything Characters say is made up.” Shazeer says the company engineers its bots to prevent them from encouraging self-harm. “We hope that our products get used to facilitate and enhance real human connection,” he states carefully.

Nevertheless, it’s pretty clear Shazeer’s primary goal is wide adoption of his product. He lights up when I draw a parallel between Character.AI and the early AOL chat rooms of the internet, which were filled with boundary-pushing exchanges but nonetheless served to bring thousands of users online. “The dawn of the internet was the beginning of fully accessible information,” Shazeer says. “And right now, we are living through the beginning of universally accessible intelligence.”
