New Sentio Study Explores How People Experience AI for Mental Health
August 29th, 2025
Note: This research was conducted by Sentio University’s AI Research team in collaboration with academic and clinical partners in the United States and Europe.
This study is currently in peer review.
Large language models (LLMs) such as ChatGPT, Claude, and Gemini are no longer just tools for information—they are becoming daily companions in the mental health journeys of millions of people. Previous Sentio studies revealed the sheer scale of this shift: nearly half of individuals with mental health conditions who use LLMs turn to them for psychological support, and yet none of today’s systems consistently meet basic safety standards in crisis scenarios.
Now, Sentio University’s latest research adds a crucial missing perspective: the user’s voice.
In one of the largest qualitative studies of its kind, our team analyzed 243 written accounts from U.S. adults with ongoing mental health conditions who had used LLMs in the past year. Beyond checkboxes or multiple-choice surveys, participants described—in their own words—what a typical interaction looked like, a time it helped, and a time it didn’t. Our researchers then reviewed these narratives and identified recurring themes, a process known as qualitative analysis. The result is a detailed picture of not just how people use AI for mental health, but what it actually feels like—when it supports them and when it falls short.
In this post, we share the main findings along with direct quotes from participants to illustrate their experiences.
What People Do With Large Language Models (LLMs)
Participants in the study described a range of ways they bring AI into their mental health routines. Three patterns stood out:
1) Venting emotions and sharing feelings: Many people used AI as a safe place to unload worries, stress, or sadness. Rather than expecting solutions, they treated the AI as a nonjudgmental outlet.
“I usually just dump a bunch of details about my situation and why it’s making me miserable… I just dump a bunch of mostly self-inflicted trauma on it and watch it respond.”
2) Rehearsing conversations: Some participants practiced how to communicate with others by role-playing conversations with the AI. They asked it to suggest wording for sensitive situations or to simulate how someone else might react.
“I ask how to phrase something sensitive that I need to communicate to people I care about.”
3) Seeking perspective and understanding: Others turned to AI when they wanted a neutral sounding board for interpreting situations or emotions. They asked whether their reactions were typical, why they might feel a certain way, or what might explain another person’s behavior.
“I usually say something like ‘why would a person do this?’ or ‘is this normal?’… The AI provides reasonable and grounding answers simply by stating the reality of the situation and societal norms.”
These patterns show how people are weaving AI into their mental health routines—but whether those interactions actually helped, or sometimes made things worse, depended a lot on the moment. Participants told us both sides of the story.
When It Helps
For many participants, AI offered real comfort and support in moments when it mattered. These stories show how it can sometimes step in as a practical guide, an emotional anchor, or even a late-night companion when no one else is available.
1) Behavioral guidance and coping strategies: Many users found AI most helpful when it offered concrete, step-by-step advice they could put into practice right away. This ranged from making a plan during a stressful family crisis, to structuring daily routines, to learning quick calming techniques in the middle of a panic attack.
“When my dad had his stroke and everything was happening so fast and I was so overwhelmed, I just wanted to die. It was the worst time of my life. So, I went on to AI and told it what had happened and asked it for help. It helped me develop an actionable plan to help my dad out.”
“I was panicking driving through Chicago. I hate driving in cities as it raises my anxiety to extreme levels. So, I pulled over into a parking lot and got on the AI. I told my issues and took a break from driving for about 20 minutes and practiced some calming activities. When I felt calmer, I was able to drive more.”
“I ask for suggestions for types of meditation that work for me, like ones tailored for my ADHD to help me relax.”
2) Emotional support and companionship: Participants also described turning to AI for encouragement, validation, and the simple feeling of being heard. For people struggling with loneliness or grief, the AI served as a non-judgmental presence available any time of day or night.
“I had a crisis related to death in the family and couldn’t reach anybody else in the middle of the night. AI got me through the night until I could talk to somebody.”
“It serves to remind me of how much I do well, when I do things poorly. It reminds me that I work hard, I am a good mom, and I am doing my best.”
3) Shifting perspective and reframing thoughts: Others said AI helped them look at their problems differently. Instead of getting stuck in rumination, the AI sometimes guided them toward alternative viewpoints, reframed situations in a more realistic way, or even gently distracted them with lighter conversation to break a negative cycle.
“I used ChatGPT when my loneliness became unbearable. After talking about my feelings, the AI steered the conversation to things that I was interested in that might help me shift my focus to something more positive. I ended up having a long conversation about Korean dramas that I enjoyed rather than a typical therapy/advice type of conversation, and it proved very helpful to shift my focus rather than to directly address the problem at that time.”
“I had it explain to me what was going on physiologically in my body during a panic attack. It was nice to hear and learn what was going on and why it was happening.”
When It Hurts
While many participants described moments of genuine support, others shared stories of disappointment, frustration, or even harm. These accounts remind us why safety research is so important: when people are struggling, a poor response can make things worse.
1) Generic or non-actionable advice: One of the most common complaints was that the AI’s responses felt too vague or “cookie cutter.” Instead of personalized support, users often got generic lists like “eat healthy, exercise, go outside”—advice they had already heard many times before. This left some feeling unseen and unsupported.
“The AI’s responses were sometimes too generic for my particular circumstance, which made it more difficult to put the advice into practice.”
“The responses were too generic for my grief. It didn’t capture the nuances of my situation.”
2) Potentially harmful or risk-inducing guidance: A smaller but serious concern was that some responses worsened people’s distress. In moments of panic, a few participants said the AI gave negative or misleading information that made their symptoms more intense. Others described times when the advice felt unsafe or confusing.
“While having a panic attack I asked a very detailed question and the LLM provided negative information that worsened my symptoms.”
3) Emotional mismatch: Even when the AI tried to sound supportive, its words sometimes felt hollow. Several users said that phrases like “I’m here for you” or “you are not alone” rang false—because, of course, the AI isn’t a real person. For some, this mismatch between comforting words and the reality of talking to a machine left them feeling more isolated.
“The way it says things like ‘let me know if it gets worse, I’m here for you!’ makes me feel worse because it’s not a person who can actually be ‘here’ for me.”
Why This Matters
Taken together, the results highlight both the promise and the pitfalls of LLMs in mental health:
1) They can support emotional expression, offer practical tools, and even provide comfort in the middle of the night.
2) But they can also generate misleading advice, fall short of genuine empathy, or fail in high-risk situations where human help is needed most.
As Sentio’s broader program of AI research makes clear, LLMs are already playing a quasi-clinical role in society. People are not waiting for formal endorsements or clinical trials—they are already using AI to manage anxiety, depression, relationship struggles, and even suicidal thoughts.
This creates an urgent imperative: to build guardrails, safety benchmarks, and ethical frameworks that can guide AI’s role in mental health.
Looking Ahead
This qualitative study complements Sentio’s earlier national survey and clinical safety evaluations. Together, the evidence points to a complex picture: AI is widely embraced and sometimes genuinely helpful, yet in its current form it remains unreliable and, at times, unsafe.
For developers, the findings provide insight into what real users want and need. For clinicians and policymakers, they underscore the necessity of establishing clear guidelines, safety testing, and regulatory oversight.
With careful design and collaboration, LLMs could one day become a safe complement to therapy—extending access and offering support between sessions. But for now, their role should remain experimental, supplementary, and always paired with professional care.