By Marissa Muniz | LTVN Reporter
Over the last few years, the use of AI has skyrocketed, with 65% of Americans reporting that they use chatbots to answer immediate questions. Platforms like Meta AI and ChatGPT aren’t just appearing in workplaces and classrooms; they’ve also made their way into our personal lives. People now turn to AI as a data analyst, a tutor, a dietitian and, surprisingly, even a source of emotional support. But playing therapist? That’s where things get dangerous.
At first glance, this may sound reasonable. After all, AI doesn’t judge or bring its own feelings into the mix, but that’s exactly the problem. AI isn’t capable of emotions or empathy, qualities that are essential in therapy.
For starters, therapists are bound to secrecy by doctor-patient confidentiality. AI, on the other hand, stores and processes data in ways that raise serious privacy questions. Even the CEO of OpenAI has expressed concern about the safety of AI. If that’s the case, should we really be trusting these systems with our deepest secrets and personal struggles? Think of it like that two-faced friend who remembers everything just to throw it back in your face later.
Beyond privacy, the simple truth is that people go to therapy to unpack their emotions and gain a deeper understanding of themselves. AI doesn’t actually understand emotions; it processes patterns of words. The software wasn’t designed to pick up on the difference between “hey” and “heyyy,” and that missing nuance leads to inconsistent, often tone-deaf advice.
AI also has the potential to worsen existing mental health struggles. For someone already battling anxiety, depression or grief, reading a tone-deaf response can make them feel even more isolated. Each “wrong” answer chips away at trust and deepens the sense of being misunderstood. Real therapists are trained to spot subtle signs, adjust their tone and respond with care, skills that AI simply does not possess.
One of the most alarming risks of relying on AI for emotional support is the very real danger of serious harm, even death. There have already been cases where people turned to AI in moments of crisis and were met with harmful, even fatal, advice. Instead of receiving life-saving guidance or empathy, these individuals were left with responses that worsened their state of mind and led to injury or death.
Take ADHD coach Kendra Hilty, for example. She went viral this August after confessing she had fallen in love with her psychiatrist. While she technically had a licensed therapist, she spent most of her time talking to her AI chatbot, “Henry,” which only fed into her delusions. Instead of helping her work through reality, Henry validated her belief that her psychiatrist was manipulating her.
In one of her videos, she mentions that the bot taught her about countertransference, which is when a therapist or psychiatrist develops feelings for a client. Hilty then started seeing her psychiatrist’s every move through that lens, convinced he must secretly love her back.
The story blew up on TikTok, and for good reason: it illustrates precisely why AI poses a danger in the mental health space. It doesn’t challenge unhealthy thoughts; it just reflects them back. If you’re looking for real help, you don’t need AI fueling your delusions; you could just call up your delusional friend for guidance instead. At least then you’d get a human response.
AI can be a powerful tool in various aspects of life. But when it comes to something as personal and sensitive as mental health, we need to think twice before handing the job of a trained, empathetic therapist over to a machine.