Amanda Askell’s nickname at Anthropic, the AI firm where she works, is the “Claude whisperer.”
Anthropic’s chatbot, Claude, has a reputation in the industry for being friendly, curious, and maybe a little more creative than its main rival, ChatGPT. Askell is more responsible than anyone else for this thoughtfully engineered persona.
A philosopher by training, she leads the team at Anthropic that’s responsible for embedding Claude with certain personality traits and avoiding others. “It feels important to have a nuanced, rich conception of what it is to be good,” Askell says in an interview at Anthropic’s San Francisco headquarters.
In an effort to help Claude be “good,” Askell has tuned it to openly admit to users when it’s unsure of its answer, to attempt to discuss ideas without bias, and to avoid both-sidesism when discussing settled issues like climate change. Most importantly, she’s engineered Claude to tell people it doesn't have feelings, memory, or self-awareness—that any personality that it might display is the product of complex language processing rather than evidence of an inner life.
In spite of all these measures, or perhaps because of them, there’s still something seemingly human-like about Claude’s personality.
Some argue that making chatbots too human-like carries risks, including encouraging people to form inappropriate relationships with AI, or fostering inaccurate perceptions of how the technology actually works. But Askell is betting that in some scenarios, emulating human behavior can help users avoid falling into the dangerous trap of believing AI is all-knowing. “I was a bit worried about this idea that if you have this thing that feels robotic, that people might think of it as this authority,” she says, referring to the long-observed pattern of humans placing undue trust in machines. “The more you can signal that you’re talking with something that isn’t this grand source of authority on everything,” she says, the more likely people are to avoid taking Claude’s outputs at face value. “It may feel more human-like—but that’s the line that you have to tread.”
*Disclosure: Investors in Anthropic include Salesforce, where TIME co-chair and owner Marc Benioff is CEO.
Write to Billy Perrigo/San Francisco at billy.perrigo@time.com