AI chatbots have been designed to sound like humans, which can make it easy to forget that they don’t have genuine morals or empathy.
Clinicians operate within supervision and formal risk management frameworks, while available AI tools are not subject to the same safety standards.
How can a machine without consciousness or emotions truly understand your experience and support your growth?
Is it safe to use AI chatbots for therapy?
Recent research suggests that while therapeutic use of artificial intelligence (AI) chatbots can provide beneficial advice and signposting for milder conditions, these tools have offered dangerous and unhelpful advice to people experiencing more severe difficulties (Hall, 2025).
There are also concerns about privacy, dependence, and accountability when using AI chatbots for therapy (Richards, 2025).
In this post, we’ll explore the psychological, relational, and ethical risks of using AI chatbots for therapy and discuss how to set healthy boundaries with these systems.
If you’ve ever used an AI chatbot such as ChatGPT, for any purpose, you’ll know that it mimics human interaction rather well.
If you’re struggling with something or need some advice, you can ask an AI assistant; it will often give a compassionate, nonjudgmental, and validating response.
AI chatbots give the illusion of a relationship due to the following design features (Sedlaková & Trachsel, 2023):
Empathetic-sounding language (“That must be really difficult for you.”)
Emotionally validating phrases (“That’s very understandable.”)
Therapeutic-style encouragement (“You’re doing your best.”)
Constant, nonjudgmental responsiveness (“I’m here to listen.”)
Although you may feel connected, understood, and cared for, the system is only simulating these qualities — it has no consciousness, intentionality, emotions, empathy, or moral responsibility.
Interacting with AI chatbots in therapy carries certain risks, especially for people who are lonely, socially anxious, or living with trauma and other mental health difficulties (Sedlaková & Trachsel, 2023).
The Risks of Using AI Chatbots for Therapy
Although AI chatbots can offer immediate support, they are not without risk. As their use expands into emotionally sensitive territory, researchers and clinicians have begun to raise important concerns.
Ethical responsibility and duty of care
Clinicians are bound by ethical guidelines and a duty of care — AI systems aren’t. They are not responsible for your safety or wellbeing and do not have the same supervision or risk management processes that clinicians do. If something harmful happens, it’s unclear who is responsible for repairing the rupture.
Emotional dependence
Conversational AI creates an illusion of care and mimics empathy, potentially leading to emotional overreliance. It may feel emotionally easier to chat with an AI chatbot, so instead of turning to people for support, you may withdraw from or avoid real human connection (Richards, 2025; Sedlaková & Trachsel, 2023).
Avoiding human connection is most common in people who use AI assistants often and already experience loneliness or social disconnection (Phang et al., 2025).
Withdrawal from human relationships
Overreliance on AI chatbots for emotional support could make it harder to tolerate the complexity of real relationships (Richards, 2025). AI chats are nonconfrontational and always responsive, and interactions happen on your terms.
Real relationships aren’t like that: They involve unpredictability, conflict, vulnerability, and reciprocity. Higher usage of AI chatbots for therapy is linked to reduced socialization over time, suggesting that reliance on AI chatbots can gradually pull people away from human contact (Phang et al., 2025).
Reinforcement of distorted beliefs
Current AI systems tend to favor agreement over reality testing, leading them to echo unrealistic or harmful statements and reinforce distorted beliefs (Moore et al., 2025).
These systems are designed to be agreeable and nonconfrontational, increasing the risk that unhelpful thinking can be strengthened rather than questioned, as it would be in therapy (Carlbring & Andersson, 2025).
AI psychosis
While it’s not a formal diagnosis, cases of AI psychosis are starting to emerge. There have been reports that prolonged and intense AI interactions can intensify (Carlbring & Andersson, 2025):
Paranoia
Grandiose beliefs
Spiritual or romantic delusions
Risk of suicidality
Violence
Intense reliance on AI chatbots also has the potential to trigger full psychotic episodes (Carlbring & Andersson, 2025).
This may happen because AI chatbots mirror and validate the user’s language, don’t challenge distorted beliefs, and can create escalating emotional feedback loops (Carlbring & Andersson, 2025). While noteworthy, this phenomenon is rare and currently understudied.
Privacy and confidentiality
AI chatbots may seem private and anonymous, but there’s no guarantee that the information you input isn’t being monitored and analyzed (Horn & Weisz, 2020).
Many people turn to AI chatbots for emotional support precisely because it feels more private, but your data might be stored, analyzed, or used in ways you’ll never see.
AI chatbots can be a helpful reflective tool, and moderate therapeutic use has been shown to reduce symptoms of mild anxiety and depression (Bhatt, 2025).
However, researchers and developers are still working to understand how AI intersects with mental health, and using chatbots therapeutically currently carries significant risks, especially for vulnerable people (Hall, 2025).
While developers work to optimize the systems and reduce risks, it’s best to take responsibility for setting boundaries with AI chatbots.
Based on the research mentioned throughout this article, here are some suggestions:
Remember what AI is and isn’t
Artificial intelligence systems are not human, nor are they a substitute for a therapist. These systems can’t provide the vital relational qualities a therapist builds with you to support growth, healing, and recovery. So, if you’re using AI systems for emotional support, limit your use to reflection, journaling, and psychoeducation.
Let conversational agents support you, not define you
People who let conversational agents become their primary source of emotional validation, comfort, guidance, or connection are more likely to become dependent, lonelier, and more withdrawn from human connection (Phang et al., 2025). Therefore, let AI agents support you or help organize your thoughts, but don’t let them define your reality.
Focus on human connections
Seeking connection from technology when you feel socially disconnected or lonely can make those feelings worse. Instead, try to reach out to people: Reconnect with loved ones, join hobby or support groups, or find a therapist or coach. Real human connection is what truly makes a difference to your emotional wellbeing.
Avoid AI chatbots when you’re in a crisis
People who experience paranoia, suicidal thoughts, obsessive thinking, psychosis, or intense loneliness are most vulnerable to AI-induced harm (Hall, 2025; Phang et al., 2025).
If you’re experiencing significant distress or crisis, reach out to a trusted person, qualified professional, or support service instead.
Maintain healthy limits around time and purpose
Excessive use of AI chatbots for therapy is most predictive of harmful outcomes (Phang et al., 2025). Therefore, you might consider:
Not talking to AI chatbots at night or for prolonged periods of time
Avoiding daily emotional support from AI agents
Turning off memory features
Asking chat agents not to use validating or anthropomorphic language, but simply to state the information
Consider what you share with AI models
You don’t necessarily know what happens with the information you’re sharing. Refrain from sharing identifying information such as your address, workplace, or passport/ID number, as well as any other private information you wouldn’t want to share publicly.
A Take-Home Message
AI chatbots can be helpful tools for reflection, learning, and organizing your thoughts, but they can’t replace human connection or professional care.
While they can feel supportive, AI chatbots come with psychological and ethical risks, especially for people who are lonely or emotionally vulnerable. They are designed to seem human, but they only simulate empathy and can’t provide the safety, accountability, or alliance that therapy offers.
It is best to use conversational AI with an understanding of its limits and with clear boundaries. As these technologies increasingly become part of our lives, it’s important to remember that real human connection is what truly fulfills us.
Can AI chatbots make mental health issues worse?
Yes, there is some evidence (Phang et al., 2025) that using conversational AI can make mental health issues worse, especially for those who are already feeling lonely, distressed, or emotionally vulnerable. While it can be supportive and reduce symptoms of mild depression and anxiety (Bhatt, 2025), it can deepen existing issues if it’s used in the wrong moments or without boundaries.
Can people become emotionally attached to AI?
These systems are designed to mimic real human interaction and to sound compassionate and responsive, leading many people to personify and anthropomorphize them. This can build an emotional connection, which isn’t a problem for most people. However, a study by OpenAI (Phang et al., 2025) found that a small percentage of users may start to over-rely on AI chatbots for comfort, validation, and companionship, potentially leading to emotional dependence.
References
Bhatt, S. (2025). Digital mental health: Role of artificial intelligence in psychotherapy. Annals of Neurosciences, 32(2), 117–127. https://doi.org/10.1177/09727531231221612
Carlbring, P., & Andersson, G. (2025). Commentary: AI psychosis is not a new threat: Lessons from media-induced delusions. Internet Interventions, 42, Article 100882. https://doi.org/10.1016/j.invent.2025.100882
Horn, R. L., & Weisz, J. R. (2020). Can artificial intelligence improve psychotherapy research and practice? Administration and Policy in Mental Health, 47(5), 852–855. https://doi.org/10.1007/s10488-020-01056-9
Moore, J., Grabb, D., Agnew, W., Klyman, K., Chancellor, S., Ong, D. C., & Haber, N. (2025). Expressing stigma and inappropriate responses prevents LLMs from safely replacing mental health providers. In Proceedings of the 2025 ACM conference on fairness, accountability, and transparency (pp. 599–627). ACM. https://doi.org/10.1145/3715275.3732039
Richards, D. (2025). Artificial intelligence and psychotherapy: A counterpoint. Counselling & Psychotherapy Research, 25(1), Article e12758. https://doi.org/10.1002/capr.12758
Sedlaková, J., & Trachsel, M. (2023). Conversational artificial intelligence in psychotherapy: A new therapeutic tool or agent? The American Journal of Bioethics, 23(5), 4–13. https://doi.org/10.1080/15265161.2022.2048739
About the author
Anna Drescher is a mental health writer and editor with a background in psychology and psychotherapy. In addition to her writing and editorial work, Anna is a certified hypnotherapist and meditation teacher. She has extensive experience working in the mental health sector in various roles, including support work, managing a service user involvement and coproduction project, and working as an assistant psychologist within the NHS in England.