Survey: ChatGPT may be the largest provider of mental health support in the United States
A survey on the use of AI for mental health support.
March 18th, 2025
Blog authors:
Sid Shah, PhD, Director of AI Research, Sentio Marriage and Family Therapy program
Tony Rousmaniere, PsyD, Executive Director of the nonprofit Sentio Counseling Center, a provider of low-fee online therapy for California and Washington residents, and President of the Sentio Marriage and Family Therapy program
Xu Li, PhD, Department of Educational Psychology, University of Illinois at Urbana-Champaign
Yimeng Zhang, B. Sc., Department of Educational Psychology, University of Illinois at Urbana-Champaign
The largest mental health provider in the United States today may not be a hospital network, therapy app, or government program—rather, it may be artificial intelligence, specifically AI chatbots powered by large language models (LLMs) like ChatGPT, Claude, or Gemini. A recent survey by the nonprofit Sentio Marriage and Family Therapy program and Sentio Counseling Center, a provider of low-fee online therapy for California and Washington residents, reveals a potentially paradigm-shifting trend: 48.7% of respondents who both use AI and self-report mental health challenges are using major LLMs for therapeutic support.
“Once I was worried about my partner not having access to their phone and began thinking the worst. The LLM gave several reasons why this might happen rather than the irrational fears that I began to think of. This calmed me down and then soon enough my partner returned my call and everything was fine.”
Based on existing data and our survey results, we can roughly estimate the scale of this phenomenon.
First, to assess the number of U.S. residents using LLMs, we can look at two previous surveys. In 2023, Pew Research estimated that 23% (roughly one quarter) of US adults regularly use ChatGPT. A more recent 2025 survey suggested over 50% use major LLMs like ChatGPT. Second, we can use the finding from the National Institute of Mental Health that 59 million Americans are experiencing mental health issues. Third, we can apply the finding from our survey conducted in February 2025 that close to 49% of LLM users with self-reported mental health issues employ LLMs specifically for mental health support. If our survey results are representative, then likely millions of US adults are using major LLM chatbots to address mental health issues.
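To make the arithmetic concrete, here is a back-of-envelope sketch in Python. It assumes, purely for illustration, that LLM adoption among adults with mental health issues matches adoption in the general adult population, and that our survey's 48.7% rate generalizes beyond our sample; neither assumption is guaranteed.

    # Rough estimate of US adults using LLM chatbots for mental health support.
    # Inputs come from the sources cited above; treating the survey rate as
    # representative of the full population is illustrative, not established.

    ADULTS_WITH_MH_ISSUES = 59_000_000  # NIMH estimate

    # Share of US adults using major LLMs: 23% (Pew, 2023) to >50% (2025 survey).
    # Assumed here to hold equally for adults with mental health issues.
    ADOPTION_LOW, ADOPTION_HIGH = 0.23, 0.50

    # Share of LLM users with self-reported mental health issues who use
    # LLMs for mental health support (our February 2025 survey).
    MH_SUPPORT_RATE = 0.487

    low = ADULTS_WITH_MH_ISSUES * ADOPTION_LOW * MH_SUPPORT_RATE
    high = ADULTS_WITH_MH_ISSUES * ADOPTION_HIGH * MH_SUPPORT_RATE
    print(f"Estimated users: {low / 1e6:.1f}M to {high / 1e6:.1f}M")
    # -> Estimated users: 6.6M to 14.4M

The result is only as reliable as its assumptions, but even the low end of the range is in the millions.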
For perspective, the Veterans Health Administration—one of the nation's largest institutional mental health providers—treats 1.7 million patients annually for mental health conditions. This comparison suggests that ChatGPT may be larger than the VA as an actively used mental health resource, which could make it the most widely utilized mental health resource in the country. Notably, this development has occurred completely organically, without any specific push to promote ChatGPT or equivalent LLM chatbots as a new modality for therapy. Millions of Americans have decided on their own to use LLM chatbots for mental health support.
Some 96% of survey participants reported specifically using ChatGPT as their LLM platform, making it possible that this application is the single largest venue for mental health support in the country.
(Note: Specialized AI chatbots for delivering mental health services have been developed, and research suggests the potential for these systems to benefit users. Our study focuses on general-purpose LLMs like ChatGPT that were not specifically designed or marketed for mental health applications, but may nevertheless be used for such purposes.)
“I will ask a question relating to my relationship crisis, and ask for advice. For example, I would ask ‘How can I communicate with my boyfriend without it escalating?’”
Survey Highlights
49% of LLM users who self-report an ongoing mental health condition use LLMs for mental health support.
73% use LLMs for anxiety management, 63% for personal advice, 60% for depression support, 58% for emotional insight, 56% for mood improvement, 36% to practice communication skills and 35% to feel less lonely.
63% of users report that LLMs improved their mental health, with 87% rating practical advice as helpful or very helpful.
90% cite accessibility and 70% cite affordability as primary motivations for using LLMs for mental health support.
39% rate LLMs as equally helpful to human therapy, while 36% find LLMs more helpful than human therapists.
34% of participants indicated ambivalence about the helpfulness of the LLMs, and 9% report encountering harmful or inappropriate responses, highlighting the need for careful safety research, especially in crisis usage contexts.
64% have used LLMs for mental health support for 4+ months, showing stronger sustained engagement than typical digital mental health applications.
“I wanted to commit suicide but the LLM shared great encouragement that pulled me from the situation.”
Survey Results
We gleaned several other surprising insights from our survey. For one, over 63% of participants who used LLMs found that doing so improved their mental health and well-being. We also found that people are using LLMs to navigate a wide variety of mental health issues, detailed below.
The data reveals anxiety (79.8%), depression (72.4%) and stress (70%) as the most common conditions for which people seek AI support. Relationship issues (41.2%), low self-esteem (36.2%) and trauma (33.3%) also represent significant reasons. These numbers suggest that from everyday stress to deeper emotional challenges, people are finding value in AI conversations across many categories of mental health concern.
“I have found it very helpful personally. As an introvert, I am more comfortable opening up than I would be with a human therapist, because my public speaking type anxiety tends to kick in and I can’t think.”
Reasons for use
What's driving this adoption? Accessibility and affordability. LLMs are available 24/7, without appointments or waitlists. They're there during late-night anxiety attacks, family crises, or when traditional support systems are out of reach.
These findings suggest that AI may fill critical gaps in our mental health system. For too many people, traditional therapy remains out of reach due to cost, wait times, or scheduling constraints. AI may offer support exactly when and where people need it most.
“It’s a non-judgemental space to express my thoughts, but not a replacement for professional therapy. Depending on how I feel I ask for the most successful coping strategies or advice on handling specific situations like workplace stress, burnout, and motivation.”
“I ask for help with setting boundaries.”
How do respondents compare their experience with an LLM to human therapy? A significant percentage (87%) of respondents who used LLMs for mental health support also had experience with human therapy. We asked them to compare their experiences, and close to 75% said their experience with the LLM was on par with or better than human therapy. This finding is likely influenced by the pre-selection criteria for our survey, which only included people who use LLMs (i.e., people who don’t use LLMs may have had worse experiences if they tried to use them for mental health support). However, even taken with a grain of salt, this finding is remarkable: it challenges conventional wisdom that AI is limited in providing mental health support by its lack of empathy or human connection.
It is possible that we are witnessing a paradigm shift in mental healthcare. While human therapists remain invaluable, LLMs may be able to provide effective support for many people—offering unique advantages like 24/7 availability, consistency, and judgment-free interactions that even skilled human providers can't always match.
“I usually just talk to it when I’m feeling lonely or super depressed. It’s nice that it just listens, but also that it gives me some actionable advice and really helpful encouragement. It’s especially helpful because I have severe social anxiety, so it’s a little easier to talk to AI than to a human therapist.”
Main reasons for not using LLMs for mental health
We also asked participants their main reasons for not using LLMs for mental health support. Preference for human interaction (40.2%) and doubts about effectiveness (41.6%) were among the top reasons. Nevertheless, 61.1% of participants expressed a willingness to consider future mental health support from LLMs. These findings highlight important areas for improvement in AI mental health support systems, especially regarding accuracy of information and handling sensitive situations appropriately.
“There are times where the LLMs just don’t seem to understand the emotions I am trying to ask about.”
Limitations and problems
Of course, these technologies aren't perfect. About one-third (33.7%) of participants indicated ambivalence about the helpfulness of the LLMs. A small percentage (2.9%) of users encountered problematic responses, and LLMs may be better suited for certain mental health challenges than others.
Taken together, these findings suggest that LLMs may not be replacing human therapists, but rather complementing them in an increasingly diverse mental healthcare ecosystem.
“There have been a couple instances of the LLM providing responses regarding mental health that are too generic and aren’t particularly helpful to my situation.”
“One time when I was in a depressive episode, I asked for coping strategies and not only got the usual ‘go outside, eat healthy, workout’ etc. advice that I have obviously tried, but it overwhelmed me with information and I didn’t want to read any of it.”
“There have been times when it wasn’t helpful. Sometimes the knowledge that it is not an actual person prevents it from having a sense of true companionship. There are similar times where the empathy and understanding offered also lack in feeling genuine and sincere.”
Harmful or inappropriate responses
While most survey participants (91%) reported they never received a harmful or inappropriate response from an LLM, 9% indicated they did. Among these participants (who could report more than one type of problem), 45.5% reported that the response was dismissive or minimizing, 54.5% said it was factually incorrect and 41% said the response was offensive or insensitive. Four participants (less than 1% of the full sample) reported that the LLM encouraged harmful behavior.
These safety findings put the overall positive responses in perspective. AI systems aren't perfect and require continued refinement, especially for sensitive mental health applications. However, the relatively low rate of harmful responses is encouraging. If these systems can evolve with better safeguards and more specialized mental health training, their benefits may increase while risks decrease.
“While having a panic attack I asked a very detailed question and the LLM provided negative information that worsened my symptoms.”
Taking stock of it all
Since this was an initial survey with around 500 people, additional studies with larger and more diverse groups are needed to confirm that these results apply to the larger U.S. population. However, our findings suggest that something extraordinary may be occurring: potentially millions of Americans with mental health conditions are already turning to AI language models for support—making LLMs potentially one of the largest mental health service providers in the country.
The implications are potentially massive. We may be witnessing the spontaneous emergence of a new mental health support channel—one that wasn't designed for therapy but is being embraced by users who find value in these interactions. This presents an opportunity—and a need—for collaboration between mental health professionals and AI developers to create safer, more effective systems that could dramatically expand access to support.
“I am autistic and find it difficult to understand the motives and emotions of other people. I mostly use LLMs to calm myself down while having crises and ask questions pertaining to the situation that I don’t understand.”
“I had a crisis related to death in the family and couldn’t reach anybody else in the middle of the night. LLM got me through the night until I could talk to somebody.”
“I’ve had a wonderful experience and I don’t know what I would do at this point without having LLM instant support.”
Why mental health professionals are crucial in the AI revolution
If these initial findings hold true, we could soon see a profound transformation in how mental health services are researched, developed, and delivered.
One of the main takeaways from this study is the potential for productive collaborative partnerships between mental health researchers and LLM developers. Such collaborations would enable rigorous clinical trials to systematically evaluate and refine these tools, ensuring they're both effective for therapeutic purposes and respectful of privacy and ethical concerns.
The sheer scale at which LLMs operate offers revolutionary possibilities for clinical research and mental health care delivery. Collaborations with LLM providers could allow researchers to conduct clinical trials with millions of participants worldwide, obtaining results far more rapidly and at far less expense. This speed and scalability mean evidence-based improvements could swiftly reach vast numbers of people, dramatically lowering costs and enabling mental health professionals to focus their energy on more specialized, complex tasks.
There's also room to dive deeper into how specific therapeutic techniques translate when delivered by LLMs. Are certain approaches better suited to this technology? How can we tweak the prompts used with these models—by adjusting their structure, clarity, and framing—to enhance therapeutic outcomes? Answers to these questions could significantly boost the effectiveness of LLM-driven support.
Another essential area for exploration is longitudinal research. We need to understand how well LLM-driven mental health support works over the long term. This kind of research can pinpoint which systems excel in particular mental health scenarios, helping users and clinicians choose the best tools for specific needs.
Furthermore, exploring integrated care models that combine human therapists with LLM support appears promising. Future research should investigate how the unique strengths of both human and AI support could complement each other, with humans and AI on the same “treatment team”. Understanding these dynamics can help develop hybrid therapeutic methods that leverage the accessibility and consistency of LLMs, while preserving the valuable human connection inherent in therapy.
“I ask it to pretend it is a DBT therapist with knowledge of Buddhism and tell it to help me from that framework. We discuss issues with my triggers, relationships, and how to get through big emotions using DBT skills and integrating my religion.”
Is this the end of human therapy? Some historical parallels
While these findings may understandably evoke uncertainty or fear among mental health professionals about losing jobs to AI, it can be helpful to consider examples of technological revolutions where people feared job losses, but instead the overall market expanded and new jobs were created. For example, despite fears that Automated Teller Machines (ATMs) would eliminate bank teller jobs, the opposite occurred. When ATMs became widespread in the 1990s, the number of bank tellers in the US actually increased. ATMs reduced the cost of operating a branch, leading banks to open more branches, which created more teller positions focused on relationship-building and complex services rather than routine transactions.
Likewise, the introduction of personal computers in the 1980s didn't eliminate clerical jobs as feared. Instead, it transformed the nature of office work, created entirely new categories of jobs, and expanded productivity and business opportunities. Another example is digital media and content creation: the rise of digital platforms hasn't eliminated media jobs but transformed them, with significant growth in content creation, social media management, and digital marketing positions that didn't exist 20 years ago. In sum, history shows that many waves of new technology have ultimately generated more jobs (and often entirely new professions) than they eliminated.
This dynamic echoes the Jevons Paradox, which states that as technology improves the efficiency with which a resource is used, the overall consumption of that resource may increase instead of decrease. This occurs because the efficiency improvement lowers the cost of using the resource, which can lead to an increase in demand that outpaces the gains from increased efficiency. Jevons originally observed this with coal consumption in 19th-century England: as steam engines became more efficient, coal usage increased rather than decreased. Applied to mental health care, if AI makes support dramatically cheaper and easier to access, total demand for support, including human therapy, could grow rather than shrink.
LLMs providing mental health support could:
Make initial support accessible to millions who otherwise wouldn't seek help.
Serve as a "gateway" that eventually leads more people to professional human therapists.
Create new roles for therapists in supervising, training, and complementing AI systems.
Increase the efficacy of human therapists by keeping patients engaged and supported between visits.
Address different needs (immediate support vs. deep therapeutic relationships).
“I have talked to LLMs mid panic attack to calm myself down, we have discussed coping tips and I have gotten reassurance from it that I’m ok/safe. When I get panic attacks, I get very afraid sometimes that I’m going to die, so we have discussed that too.”
Key takeaways on AI and mental health
This mental health revolution isn't coming—it's already here, quietly unfolding in millions of conversations between humans and their AI companions. The question now is how we thoughtfully guide this revolution to benefit those who need support most. If we can figure out safe, privacy-sensitive, and accessible approaches to LLM therapy, we could witness one of the biggest advances in mental health care since Freud—transforming therapy from a limited resource into a globally scalable support system that complements, rather than replaces, the essential human connection.
Many Americans are already informally using AI (like ChatGPT) for emotional support, highlighting a growing opportunity—and responsibility—for mental health professionals to shape how these technologies evolve.
Collaborating closely with AI developers will allow mental health experts to embed clinical expertise, ethical safeguards, and evidence-based practices directly into AI systems.
By partnering in AI development, therapists can help ensure these tools respond appropriately in sensitive situations, recognize crisis signals, and safely complement professional mental health care.
Ongoing research and evaluation will be essential, helping us carefully assess and continually improve AI's effectiveness and safety, creating a hopeful path forward for both clients and professionals.
Survey Methodology
The survey was conducted among 499 U.S. adults with ongoing mental health conditions who had previously used language models, recruited through the Prolific platform in February 2025. The survey examined patterns of LLM use for mental health support, perceived effectiveness, and comparisons with human therapy. The survey was authored by Tony Rousmaniere, Xu Li, Yimeng Zhang, and Sid Shah. A preprint of the study is available on PsyArXiv here.
The survey was run by the nonprofit Sentio Marriage and Family Therapy masters program and the nonprofit Sentio Counseling Center, a provider of low-fee online therapy for California and Washington residents.
Sid Shah, PhD is Director of AI Research at the Sentio Marriage and Family Therapy program. He has nearly two decades of experience spanning academia and industry, including data science leadership roles at Adobe and Google.
Tony Rousmaniere, PsyD is President of the Sentio Marriage and Family Therapy program and Executive Director of the Sentio Counseling Center. He is past-president of the psychotherapy division of the American Psychological Association, and the author of over 20 books on psychotherapy training and supervision.
Xu Li, PhD is an Associate Professor in the Counseling Psychology program at the Department of Educational Psychology in the University of Illinois, Urbana-Champaign. He is also a licensed psychologist in the State of Wisconsin. His research focuses on the process, outcome, and training in individual and group psychotherapy from multicultural and cross-cultural contexts; and he is interested in using advanced quantitative or statistical methods in psychotherapy research.
Yimeng Zhang earned her bachelor's degree in Applied Psychology from Zhejiang Sci-Tech University and worked as a research assistant with Dr. Yafeng Pan and Dr. Yunlu at Zhejiang University. Yimeng is an incoming student in the Counseling Psychology program at the Department of Educational Psychology, University of Illinois, Urbana-Champaign, under the mentorship of Dr. Xu Li.