
April 07, 2025
Researchers at Dartmouth found that an AI therapy bot reduced depression symptoms by an average of 51% during a clinical trial.
The use of artificial intelligence to treat mental health conditions has prompted concerns about the newness of the technology and its potential for harm. But a new study suggests AI can be just as effective as human therapists, if not more so.
A generative AI-powered therapy chatbot created by researchers at Dartmouth College can significantly reduce the symptoms of mental health disorders, according to the study published in the New England Journal of Medicine AI. More testing must be done before the chatbot, dubbed Therabot, ever hits the market, researchers said.
But it holds promise, they said.
"The improvements in symptoms we observed were comparable to what is reported for traditional outpatient therapy, suggesting this AI-assisted approach may offer clinically meaningful benefits," Nicholas Jacobson, an associate professor of biomedical data science and psychiatry at Dartmouth, said in a statement.
Dartmouth researchers have been developing Therabot since 2019, working in consultation with psychologists and psychiatrists. The study involved 106 people diagnosed with major depression, anxiety or eating disorders. Over eight weeks, they used Therabot through a smartphone app, responding to prompts about how they were feeling or initiating conversations when they wanted to talk.
The participants with depression, on average, saw a 51% reduction in symptoms, and reported improvements in mood and well-being. People with anxiety had a 31% average drop in symptoms, and people with eating disorders saw a 19% reduction in concerns about body image and weight.
On average, the participants interacted with Therabot for about six hours during the trial, which equates to about eight therapy sessions. Nearly 75% of them were not taking medications or receiving talk therapy.
Jacobson said AI potentially can help close the treatment gap in mental health care, noting that there are about 1,600 patients with depression or anxiety for every mental health provider in the United States. A survey commissioned last year by the mental health care provider Thriveworks found that 87% of respondents saw therapy as beneficial, but only 23% were receiving mental health counseling. Jacobson said person-to-person therapy and AI-powered therapy can work in tandem to reach more patients.
But Jacobson and Michael Heinz, an assistant psychiatry professor at Dartmouth, stressed that the technology needs more work before it can effectively treat people on a large scale.
"Therabot is not limited to an office and can go anywhere a patient goes," Heinz said in a statement. "It was available around the clock for challenges that arose in daily life and could walk users through strategies to handle them in real time. But the feature that allows AI to be so effective is also what confers its risk — patients can say anything to it, and it can say anything back."
Last month, the American Psychological Association warned against using AI chatbots for mental health support after urging federal regulators to put safeguards in place to protect people from potential harm caused by AI chatbots like Character.AI and Replika.
Unlike trained therapists, the APA said, these chatbots tend to repeatedly affirm users, even when they say something harmful. Two lawsuits filed against Character.AI illustrate the danger: in one case, a boy attacked his parents after extensive use of the app; in another, a boy died by suicide.
In January 2024, the BBC reported that Character.AI, which lets users converse with AI-generated characters, hosted 475 bots with the terms "therapy," "therapist," "psychiatrist" or "psychologist" in their names. Two of the site's most popular characters, "Are you feeling OK?" and "Therapist," had received 16.5 million and 12 million messages, respectively.
APA CEO Arthur Evans said there are various issues with AI-powered bots, including incorrect diagnoses, privacy violations and the exploitation of minors. He also noted that the bots are designed to keep users engaged so their data can be mined for profit.
"If this sector remains unregulated, I am deeply concerned about the unchecked spread of potentially harmful chatbots and the risks they pose — especially to vulnerable individuals," Evans said in a statement.
The APA was encouraged by the rigorous clinical training the Dartmouth researchers invested in Therabot, NPR reported.
"It is rooted in psychological science," Vaile Wright, director of the APA's Office of Health Care Innovation, told the outlet. "It is demonstrating some efficacy and safety, and it's been co-created by subject matter experts for the purposes of addressing mental health issues."
Philadelphia-based therapist Chris Moore said tools such as AI-powered therapy apps can supplement, but not replace, human-to-human therapy.
"Sharing presence with other humans is a very unique and essential feeling that can lead to deep insight work and a profound healing capability," Moore said. "Doing this on one's own is a possibility, and technology like AI can help, but we are social creatures. Community and a sense of belonging are essential. Technology fueled by consumerism cannot replace grounded and vulnerable human connection, which is what I think apps like this are trying to do."
The Food and Drug Administration has yet to approve an AI chatbot for mental health use, but some have made progress as tools in the field. In May 2022, the chatbot therapy app Wysa received a breakthrough device designation, which expedites the review process, after a clinical trial found it effective in treating chronic pain and accompanying anxiety and depression.