AI Chatbots & Therapy: The Risks and How to Use AI Safely

Take-Away Trio

  • AI chatbots have been designed to sound like humans, which can make it easy to forget that they don’t have genuine morals or empathy.
  • Clinicians operate within supervision and formal risk management frameworks, while available AI tools are not subject to the same safety standards.
  • A machine without consciousness or emotions cannot truly understand your experience or support your growth the way a human can.

Is It Safe to Use AI Chatbots for Therapy?

Recent research suggests that while AI chatbots can offer beneficial advice and signposting to people with milder conditions, they have given dangerous and unhelpful advice to people experiencing more severe difficulties (Hall, 2025).

There are also concerns about privacy, dependence, and accountability when using AI chatbots for therapy (Richards, 2025).

In this post, we’ll explore the psychological, relational, and ethical risks of using AI chatbots for therapy and discuss how to set healthy boundaries with these systems.

Before you continue, we thought you might like to download our five positive psychology tools for free. These engaging, science-based exercises will help you effectively deal with difficult circumstances and give you the tools to improve the resilience of your clients, students, or employees.

AI Chatbots and Therapy: The Relational Illusion

If you’ve ever used an AI chatbot such as ChatGPT, for any purpose, you’ll know that it mimics human interaction rather well.

If you’re struggling with something or need some advice, you can ask an AI assistant; it will often give a compassionate, nonjudgmental, and validating response.

AI chatbots give the illusion of a relationship due to the following design features (Sedlaková & Trachsel, 2023):

  • Empathetic-sounding language (“That must be really difficult for you.”)
  • Emotionally validating phrases (“That’s very understandable.”)
  • Therapeutic-style encouragement (“You’re doing your best.”)
  • Responsive, nonjudgmental availability (“I’m here to listen.”)

Although you may feel connected, understood, and cared for, the system is only simulating these qualities — it has no consciousness, intentionality, emotions, empathy, or moral responsibility.

Interacting with AI chatbots in therapy carries certain risks, especially for lonely people, those who are socially anxious, or those who have experienced trauma and other mental health difficulties (Sedlaková & Trachsel, 2023).

The Risks of Using AI Chatbots for Therapy

Although AI chatbots can offer immediate support, they are not without risk. As their use expands into emotionally sensitive territory, researchers and clinicians have begun to raise important concerns.

Ethical responsibility and duty of care

Clinicians are bound by ethical guidelines and a duty of care; AI systems are not. They are not responsible for your safety or wellbeing, and they lack the supervision and risk management processes that clinicians work within. If something harmful happens, it’s unclear who is accountable for repairing the damage.

Emotional dependence

Conversational AI creates an illusion of care and mimics empathy, potentially leading to emotional overreliance. It may feel emotionally easier to chat with an AI chatbot, so instead of turning to people for support, you may withdraw from or avoid real human connection (Richards, 2025; Sedlaková & Trachsel, 2023).

Avoiding human connection is most common in people who use AI assistants often and already experience loneliness or social disconnection (Phang et al., 2025).

Withdrawal from human relationships

Overreliance on AI chatbots for emotional support could make it harder to tolerate the complexity of real relationships (Richards, 2025). AI chats are nonconfrontational and always responsive, and interactions happen on your terms.

Real relationships aren’t like that: They involve unpredictability, conflict, vulnerability, and reciprocity. Higher usage of AI chatbots for therapy is linked to reduced socialization over time, suggesting that reliance on AI chatbots can gradually pull people away from human contact (Phang et al., 2025).

Reinforcement of distorted beliefs

Current AI systems tend to favor agreement over reality testing, leading them to echo unrealistic or harmful statements and reinforce distorted beliefs (Moore et al., 2025).

These systems are designed to be agreeable and nonconfrontational, increasing the risk that unhelpful thinking can be strengthened rather than questioned, as it would be in therapy (Carlbring & Andersson, 2025).

AI psychosis

While it’s not a formal diagnosis, cases of AI psychosis are starting to emerge. There have been reports that prolonged and intense AI interactions can intensify (Carlbring & Andersson, 2025):

  • Paranoia
  • Grandiose beliefs
  • Spiritual or romantic delusions
  • Risk of suicidality
  • Violence

Intense reliance on AI chatbots also has the potential to trigger full psychotic episodes (Carlbring & Andersson, 2025).

This may happen because AI chatbots mirror and validate the user’s language, don’t challenge distorted beliefs, and can create escalating emotional feedback loops (Carlbring & Andersson, 2025). While noteworthy, this phenomenon is rare and currently understudied.

Privacy and confidentiality

AI chatbots may seem private and anonymous, but there’s no guarantee that the information you input isn’t being monitored and analyzed (Horn & Weisz, 2020).

Many people use AI chatbots for emotional support precisely because it feels more private, but your data might be stored, analyzed, or used to draw inferences about you in ways you’ll never see.


Creating Healthy Boundaries With AI

AI chatbots can be a helpful reflective tool, and moderate use of AI chatbots for therapy has been shown to reduce symptoms of mild anxiety and depression (Bhatt et al., 2025).

However, researchers and developers are still working to understand the overlap between AI and mental health, and using chatbots for therapy currently carries significant risks, especially for vulnerable people (Hall, 2025).

While developers work to optimize the systems and reduce risks, it’s best to take responsibility for setting boundaries with AI chatbots.

Based on the research mentioned throughout this article, here are some suggestions:

Remember what AI is and isn’t

Artificial intelligence systems are neither human nor a substitute for a therapist. They can’t provide the vital relational qualities a therapist builds with you to support growth, healing, and recovery. So if you’re using AI systems for emotional support, limit your use to reflection, journaling, and psychoeducation.

Let conversational agents support you, not define you

People who let conversational agents become their primary source of emotional validation, comfort, guidance, or connection are more likely to become dependent, feel lonelier, and withdraw from human connection (Phang et al., 2025). Therefore, let AI agents support you or help organize your thoughts, but don’t let them define your reality.

Focus on human connections

Seeking connection from technology when you feel socially disconnected or lonely can make those feelings worse. Instead, try to reach out to people: Reconnect with loved ones, join hobby or support groups, or find a therapist or coach. Real human connection is what truly makes a difference to your emotional wellbeing.

Avoid AI chatbots when you’re in a crisis

People who experience paranoia, suicidal thoughts, obsessive thinking, psychosis, or intense loneliness are most vulnerable to AI-induced harm (Hall, 2025; Phang et al., 2025).

If you’re experiencing significant distress or crisis, reach out to a trusted person, qualified professional, or support service instead.

Maintain healthy limits around time and purpose

Excessive use of AI chatbots for therapy is most predictive of harmful outcomes (Phang et al., 2025). Therefore, you might consider:

  • Not talking to AI chatbots at night or for prolonged periods of time
  • Avoiding daily emotional support from AI agents
  • Turning off memory features
  • Asking chat agents not to use validating or anthropomorphic language, but simply to state the information

Consider what you share with AI models

You don’t necessarily know what happens with the information you’re sharing. Refrain from sharing identifying information such as your address, workplace, or passport/ID number, as well as any other private information you wouldn’t want to share publicly.

A Take-Home Message

AI chatbots can be helpful tools for reflection, learning, and organizing your thoughts, but they can’t replace human connection or professional care.

While they can feel supportive, AI chatbots come with psychological and ethical risks, especially for people who are lonely or emotionally vulnerable. They are designed to seem human, but they only simulate empathy and can’t provide the safety, accountability, or alliance that therapy offers.

It is best to use conversational AI with an understanding of its limits and with clear boundaries. As these technologies increasingly become part of our lives, it’s important to remember that real human connection is what truly fulfills us.

We hope you enjoyed reading this article. Don’t forget to download our five positive psychology tools for free.

Frequently Asked Questions

Can AI chatbots make mental health issues worse?

Yes, there is some evidence (Phang et al., 2025) that using conversational AI can make mental health issues worse, especially for those who are already feeling lonely, distressed, or emotionally vulnerable. While it can be supportive and reduce symptoms of mild depression and anxiety (Bhatt et al., 2025), it can deepen existing issues if it’s used in the wrong moments or without boundaries.

Why do people form emotional attachments to AI chatbots?

AI chatbots are designed to mimic real human interaction and to sound compassionate and responsive, leading many people to personify and anthropomorphize them. This can build an emotional connection, which isn’t a problem for most people. However, a study by OpenAI (Phang et al., 2025) found that a small percentage of users may start to over-rely on AI chatbots for comfort, validation, and companionship, potentially leading to emotional dependence.
