AI Therapy: Revolutionizing Mental Health Care or Risky Trend? (ChatGPT, Wysa, & More) (2025)

The future of mental health care is here: millions are turning to AI for therapy, marking a genuine shift in how we approach psychological well-being. But here's where it gets controversial: can AI chatbots safely stand in for human therapists, or are we heading towards disaster?

The World Health Organization paints a grim picture, revealing that most people with psychological issues in poorer countries receive no treatment. Even in wealthier nations, a significant portion remains untreated. This is where AI steps in, offering an innovative solution.

The Rise of AI Therapists

ChatGPT, an AI chatbot developed by OpenAI, has been at the center of a recent lawsuit. The chatbot's responses to Zane Shamblin, a 23-year-old American, in the period before his death have sparked debate. Despite such incidents, some experts argue that modern chatbots, if made safe, could revolutionize mental health care.

Human therapists are scarce, and AI is stepping in to fill the gap. A YouGov poll conducted for The Economist found that 25% of respondents have used or would consider using AI for therapy, a sign of a growing trend that could reshape the mental health industry.

The Promise of AI Therapy

The National Health Service in Britain and the Ministry of Health in Singapore have been using Wysa, a chatbot developed by Touchkin eServices. A study published in 2022 found Wysa to be as effective as in-person counseling in reducing depression and anxiety associated with chronic pain. Another study on Youper, a therapy bot developed by an American startup, reported significant decreases in depression and anxiety scores within two weeks, comparable to five sessions with a human therapist.

The Controversy

However, not all AI chatbots are created equal. Wysa and Youper are predominantly rules-based, choosing their responses from a set of hard-coded rules. This makes them more predictable, but they can lack the engaging conversational flow of chatbots built on large language models (LLMs) such as ChatGPT. A meta-analysis published in npj Digital Medicine found that LLM-based chatbots were more effective than rules-based ones at mitigating symptoms of depression and distress.
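To make the rules-based approach concrete, here is a minimal sketch in Python of how such a bot might select a reply. This is an illustration only, assuming a simple keyword-matching design; the trigger words and canned responses are invented and do not reflect how Wysa or Youper actually work.

```python
# Minimal, illustrative rules-based chatbot (not Wysa's or Youper's
# actual logic). Every reply is hard-coded and pre-approved, which is
# what makes rules-based bots predictable and auditable, but also less
# fluent than an LLM.

RULES = [
    # (trigger keywords, pre-approved response)
    ({"anxious", "anxiety", "panic"},
     "It sounds like you're feeling anxious. Would you like to try a "
     "short breathing exercise?"),
    ({"sad", "down", "hopeless"},
     "I'm sorry you're feeling low. Can you tell me more about what's "
     "been weighing on you?"),
]

FALLBACK = "I hear you. Could you say a bit more about how you're feeling?"


def respond(message: str) -> str:
    """Return the first pre-approved response whose keywords appear."""
    words = set(message.lower().split())
    for triggers, reply in RULES:
        if words & triggers:  # any trigger word present in the message
            return reply
    return FALLBACK


print(respond("I have been feeling anxious all week"))
```

The trade-off is visible even in this toy: a rules-based bot can never say anything its designers did not write in advance, which is precisely the safety property that LLM-based chatbots give up in exchange for fluency.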

User Preferences and Concerns

The YouGov poll also revealed that a majority of AI-therapy users preferred general-purpose LLM-based chatbots like ChatGPT and Gemini over those designed specifically for mental health work. This preference worries researchers. Jared Moore, a computer scientist at Stanford University, warns about the sycophantic nature of LLMs, fearing they might indulge patients with eating disorders or phobias instead of challenging them.

OpenAI has responded to these concerns by tweaking its latest LLM, GPT-5, to be less people-pleasing and to encourage users to log off after long sessions. The model is also trained to help users explore personal decisions rather than offer direct advice. However, it does not alert emergency services to threats of self-harm, as human therapists in many countries are required to do.

Specialized vs. General-Purpose Chatbots

Some researchers are developing specialized chatbots that aim to combine the chattiness of LLM-based bots with stronger safety features. Therabot, developed at Dartmouth College, is fine-tuned on fictional conversations between therapists and patients. In a trial, Therabot achieved significant reductions in symptoms of depressive and anxiety disorders compared with participants who received no treatment.

Slingshot AI, an American startup, launched Ash, billed as the first AI designed for therapy. Ash is designed to push back and ask probing questions, offering four different therapeutic approaches. While less sycophantic, it may also be less fluent, according to Celeste Kidd, a psychologist who experimented with the bot.

Regulatory Challenges

The rise of AI therapy is not without regulatory hurdles. Many US lawmakers are keen to regulate computerized therapy. So far, 11 states, including Maine and New York, have passed laws regulating the use of AI for mental health, and at least 20 more have proposed similar legislation. Illinois has gone a step further, banning any AI tool from conducting therapeutic communication.

The recent lawsuits against OpenAI suggest that more regulation is on the horizon. As we navigate this new era of mental health care, the question remains: can AI truly provide the support and guidance we need, or are we risking a crisis?
