This morning, my wellness coach reminded me to savor each exciting moment in my day, but to always take time to rest and recharge. It’s good advice—good enough that I would believe it came from a human, instead of an artificial intelligence algorithm anthropomorphized as a cartoon panda.
My panda-shaped algorithm lives in the Earkick app. Each day, I can use Earkick to describe my mood through writing, voice note, or video. Roughly 20 seconds later, its algorithm has analyzed my statement for signs of anxiety and spit out a personalized, conversational recommendation for how I can feel my best.
Earkick is one of a small army of chatbots, Woebot perhaps best-known among them, that promise to use the power of AI to support mental wellness. Describe a problem to one of these chatbots and it can answer with what feels like empathy, offering suggestions or asking follow-up questions just as a human clinician would—and with a pretty good rate of success, according to research on the topic. Early evidence suggests chatbots can deliver elements of cognitive behavioral therapy and other mental-health tools well enough to reduce symptoms of depression and stress at least a little, and Earkick’s data find that people who use the app for about five months report a 34% improvement in mood and 32% reduction in anxiety. In one poll, 80% of people who’d used ChatGPT for mental-health advice found it a good alternative to regular therapy.
But is it really? Therapy, after all, is a practice traditionally built on human interaction, on trust and intimacy and emotional intelligence. Studies repeatedly show that the relationship between therapist and client is one of the best predictors for success in treatment, which means it’s “critical that the patient feels a sense of trust with the therapist, that they experience the therapist as being warm and understanding and empathic, and that they feel the therapist is someone they can talk to,” says David Tolin, an adjunct professor of psychiatry at Yale University School of Medicine and a past president of the Association for Behavioral and Cognitive Therapies.
There is research to suggest that people can develop connections with “conversational agents” like chatbots. And with access to traditional providers hugely inadequate, there are clear potential benefits to relying on them as substitutes. But can AI truly duplicate the experience of talking and growing close to a human therapist—and should it?
“I say this in part as a practicing therapist,” Tolin says. “There is something I would find a little sad if we eventually replaced the human connection with a computer connection.”
To a much greater extent than medical specialties built on biomarkers and test results, mental-health care relies on the subjective: how a patient describes their symptoms, how their clinician perceives them, internal shifts and breakthroughs that can’t be easily measured with numbers. In some ways, this means the field is crying out for AI, with its ability to find patterns and meaning in huge swaths of information that humans can’t easily parse. (Indeed, preliminary research suggests AI could help doctors pick the right antidepressant for a particular patient, or study their speech or writing for signs of mental distress.) But the ineffability of therapy also makes it difficult to duplicate.
Traditional therapy is not perfect by any means, but by some estimates, about three-quarters of people who try it see some improvement. It’s not always clear why it works, though. The “Dodo Bird Verdict,” a long-standing but controversial theory, proposes that different styles of therapy are more or less equal in efficacy, which suggests psychological methods alone aren’t what helps patients. Instead, the benefits of therapy may come, in part, from a difficult-to-quantify mixture of factors including the strength of the therapeutic relationship, the act of consciously carving out time and space for mental well-being, or simply knowing a person is listening to you when you talk, says J.P. Grodniewicz, a philosopher who has researched and written about the limitations of AI in therapy.
“Maybe psychotherapy is not really about a particular technique,” he says. “Maybe it’s about co-creating a context in which someone might be growing as a person, exploring themselves, maybe facing existential fears, having someone with whom they can speak about [difficult topics].”
Without being able to clearly define the ingredients in that cocktail and how they come together to improve mental health, it’s difficult—if not impossible—to train an algorithm to duplicate the experience, Grodniewicz says.
Peter Foltz, a machine-learning researcher at the University of Colorado, Boulder, agrees the lack of hard data in mental-health care presents challenges. An algorithm, after all, is only as good as the data it’s trained on.
“What you really want to be able to do is tie the characterizations made by AI to some particular kinds of evidence,” Foltz says. “And in mental health, really what we’re looking at is some kind of neuropsychological change in the brain or the mind...and there needs to be a lot more research to be very clear about what we’re measuring.”
And yet, when looking at things that do lend themselves to measurement—like how people self-report their symptoms—preliminary studies show that chatbots can ease patients’ depression, anxiety, and other symptoms. Some studies also suggest that processing trauma and emotions through writing is an effective coping strategy, which implies a self-guided mental-health app could be beneficial even if it doesn’t perfectly duplicate the experience of lying on a therapist’s couch.
“The ultimate question is whether a treatment works,” Tolin says. “If it does, then we’re happy.” More research is required, he adds, to confirm that AI-assisted therapy actually works, and especially to determine whether it can be used on its own or only in conjunction with a traditional provider. But if studies consistently show that it’s effective, it may be more important to know that it works than to know exactly how it works.
In the meantime, however, there’s another big question to answer: “If we did develop a perfect artificial-intelligence therapist,” Tolin says, “would anybody want to see it?”
So far, it seems most people would not. Recent polls have found that only 20% to 25% of U.S. adults are comfortable with the idea of AI-assisted mental-health care, and less than 40% think AI will “help more than it hurts” in the medical field.
Tolin isn’t terribly surprised by that resistance. Humans crave connection, and they’re already not getting enough of it. Loneliness is considered an epidemic in the U.S., and less than 40% of U.S. adults say they feel “very connected” to other people, according to Gallup data. It may be possible to develop a connection to an app or chatbot, but Tolin doubts it would be a fulfilling alternative.
“If I told you that I was going to replace your best friend with a computer, you probably would be unhappy,” Tolin says. “There would be something deeply unsatisfying about that, because it’s not a person. I think the same principles may apply to a therapist as well.”
That points to a potentially larger hurdle for the field to overcome. An algorithm will never be a human—no matter how convincingly it mimics one.
If you or someone you know may be experiencing a mental-health crisis or contemplating suicide, call or text 988. In emergencies, call 911, or seek care from a local hospital or mental health provider.
Write to Jamie Ducharme at jamie.ducharme@time.com