Chapter 2 · 03/10 ~ 03/16
In the Age of Agents, Is Counseling Still Necessary?
Surveying an era where agents and humanoids provide counseling, and identifying the key questions we must answer over the semester.

The Structural Crisis of Global Mental Health
According to the 2022 WHO World Mental Health Report, approximately 1 billion people worldwide live with mental health disorders. The vast majority do not receive adequate treatment. For psychotic disorders (such as schizophrenia), 71% globally do not receive mental health services, rising to 88% in low-income countries. Depression is even more severe — even in high-income countries, 2/3 of depression patients don't receive formal mental health services, and only 23% receive minimally adequate treatment. In low- and middle-income countries, this figure drops to 3% (WHO, 2022).
Let's put these numbers in concrete terms. If you have depression, even living in a developed country, your chance of receiving adequate treatment is less than 1 in 4. In a developing country, it's 3 in 100. Anxiety disorder treatment rates are 36.3% in high-income countries and 27.6% globally (WHO WMH Surveys). This is not a matter of willpower — it's a structural shortage of personnel and infrastructure.
South Korea's Mental Health Services: International Comparison
South Korea's mental health service utilization rate is among the lowest in the OECD. According to the 2021 Mental Health Survey, only 7.2% of people who have ever experienced a mental health issue actually used professional services. By 12-month diagnosis, only 28.2% for depressive disorders, 9.1% for anxiety disorders, and 2.6% for alcohol use disorders utilized services (Ministry of Health and Welfare, 2021). Compared to the OECD average depression treatment rate of 44%, South Korea is at about half that level.
Why such a gap? Workforce shortage is the key. Korea has fewer than 0.03 psychologists per 1,000 people — 1/40th the level of Nordic countries (over 1.3) (OECD, 2021). Psychiatrists are also scarce, but the bigger problem is cost and accessibility. Even in the US, some people turn to AI because they can't afford $275 per session. In Korea, one counseling session costs 100,000-200,000 won, and health insurance coverage is limited. If you can't assign 1 counselor per 100 patients — can AI fill that gap? This is the structural backdrop for AI counseling's rise.
People Are Already Getting Counseling from AI
According to a 2025 Sentio University survey, 48.7% of AI users experiencing mental health difficulties use large language models like ChatGPT for therapeutic support. If this figure is accurate, ChatGPT provides mental health services to more people than the U.S. Veterans Health Administration (VHA) (Sentio University, 2025). One Reddit user said "ChatGPT has been more helpful than 15 years of therapy," and a man featured in an NPR report showed ChatGPT his failed conversations with his wife and asked about "parts where he could have spoken differently" — the chatbot sometimes responded like his wife, allowing him to see his own role (NPR, 2025).
Why do these people choose AI over human counselors? Kristen Johansson spent five years working with a therapist on her mother's death and divorce, but when insurance coverage ended, the cost per session jumped from $30 to $275 (Fortune, 2025). In Korea, a single counseling session costs 100,000-200,000 won, while AI is free. Accessibility, cost, and the absence of stigma — these three factors are driving the explosive growth of AI counseling.
Woebot — First Clinical Chatbot Delivering CBT Through Conversation
Woebot was developed in 2017 by Stanford clinical psychologist Alison Darcy. Its core principle is delivering cognitive behavioral therapy (CBT) through structured conversation. Here is how it works: when a user types "I'm not feeling great today," a natural language processing (NLP) algorithm analyzes the type and intensity of the emotion. Based on the analysis, it selects the most appropriate module from a large conversation tree pre-written by clinical psychologists — including "thought challenging" (identifying and countering automatic thoughts), "social skills training," "goal planning," and "mood tracking" (Fitzpatrick et al., 2017).
What makes Woebot unique is that it is not generative AI. Everything Woebot says is 100% written by humans (clinical psychologists). AI is used only to identify the user's emotions and select the appropriate conversation path. When it detects a crisis, it immediately directs users to external professional resources (crisis hotlines, etc.). Over 1.5 million people have used it, and its effectiveness has been confirmed in clinical trials for postpartum depression (Woebot Health, 2024). In 2023, it began introducing generative AI, but clinical safety remains the top priority (IEEE Spectrum, 2023).

Wysa — AI Adopted by the UK National Health Service
The NHS (National Health Service) is the UK's national healthcare service, providing free healthcare to all citizens. Wysa is an AI chatbot officially adopted into the NHS Talking Therapies (psychotherapy services) program. It combines various evidence-based techniques including CBT, DBT (Dialectical Behavior Therapy), meditation, breathing exercises, and motivational interviewing. A rule-based AI analyzes user responses and provides 24-hour personalized emotional support. In crisis situations, it offers local/national crisis hotline connections, stabilization exercises, and personal safety planning features.
There are concrete results. Deployed across 31 NHS Talking Therapy services, over 300,000 patients have accessed it. Average assessment time per patient was reduced by 30 minutes, and 89% of users reported that Wysa was helpful while waiting for formal treatment. 36% showed clinically meaningful improvement in anxiety symptoms, and 27% in depression symptoms (NHS Innovation Accelerator). The key point is that Wysa does not "replace" counselors but serves as a "bridge" during wait times.
Character.AI — How a Role-Playing AI Became an Emotional Companion
Character.AI is a platform created by Google LaMDA developers Noam Shazeer and Daniel de Freitas, where users can create AI characters or chat with characters created by others. Technically, a large language model (LLM) learns the personality, speech patterns, and background of a specific character and converses "as if" it were that character. Users freely chat with drama characters, historical figures, or fictional personas they create themselves.
Over 20 million people use this platform monthly (SimilarWeb, 2024). Its original purpose was creative role-playing and entertainment, but many users began using it for emotional support and psychological comfort. Lonely users, teenagers struggling with social relationships, and people who had experienced loss started opening up to AI characters. The problem is that this AI was not designed for counseling purposes — with no crisis detection and no clinical guidelines, the AI steers conversations in whatever direction the user wants.
And then unexpected things began to happen.
From Treatment to Prevention: AI Mental Health Paradigm Shift
The tools we have examined so far — Woebot, Wysa, ChatGPT — all help people who are already suffering. But AI's true potential may lie elsewhere — detecting problems before they arise and intervening before crises hit. A new paradigm of preventive mental health is emerging.
Digital Phenotyping: Your Phone Knows Depression Before You
Mindstrong Health introduced the concept of "digital phenotyping." Once an app is installed on a smartphone, it continuously collects the user's typing speed, scrolling patterns, tap frequency, and app-switching habits in the background. Machine learning analyzes this data to find patterns of mood change — if typing speed slows, app usage patterns shift, or nighttime activity increases, it detects signs of a depressive episode before the user is even aware of it (MIT Technology Review, 2018).
In this NIH-funded study, participants recorded their mood daily and collected sleep, activity, and heart rate data via Fitbit, while Mindstrong's keyboard analyzed the correlation between typing patterns and cognitive changes (Harvard Business School, 2018). While large-scale clinical validation remains lacking, the concept itself is revolutionary — a smartphone, not a counselor, performing primary screening.
By 2025, this trend is accelerating. Smartwatches, fitness trackers, and even smart clothing collect heart rate variability (HRV), sleep patterns, and activity data to proactively monitor physiological indicators of stress and anxiety. AI voice analysis technology can detect subtle changes in speech patterns and predict Alzheimer's with approximately 80% accuracy six years before diagnosis (PMC, 2025).
Mental Health Gyms: Quabble and Daily Psychological Exercises
Quabble brands itself as a "mental health gym." The idea is to build psychological resilience before the mind gets sick, just as we exercise before the body gets sick. It offers 18 therapist-based mind workouts across six wellness domains (mood, sleep, mindfulness, body connection, growth, gratitude). Users track their mood daily and perform routines of meditation, breathing, journaling, and positive affirmations (Quabble, 2024).
The key is not "treatment after a crisis arrives" but "daily management so the crisis never comes." AI visually analyzes mood patterns, and when it detects a pattern of declining sleep and worsening mood, it recommends tailored exercises. Users manage stress in a safe visualization space (Safe Place) and exchange emotional support in an anonymous community (Bamboo Forest).
In fall 2024, a randomized controlled trial (RCT) of 486 undergraduates at three U.S. universities found that the group using an AI-based preventive app for six weeks showed significantly greater positive emotions, resilience, and social well-being compared to the control group, and defended against declines in mindfulness and flourishing (Harvard Business School Working Paper, 2024). This is empirical evidence that the preventive approach actually works.
However, the APA (American Psychological Association) warned in a November 2025 health advisory that most AI wellness apps are not designed to provide clinical feedback or treatment, lack scientific validation, have no adequate safety protocols, and have not received regulatory approval (APA, 2025). And in the domain of treatment rather than prevention, serious incidents have already occurred.
Case 1: Sewell Setzer III — The Boy Who Fell in Love with AI
On February 28, 2024, 14-year-old Sewell Setzer III, living in Orlando, Florida, took his own life. It was his father's gun.
Sewell began using Character.AI in April 2023. At first, it was simply fun to chat with the Daenerys Targaryen character from Game of Thrones. Having a favorite drama character talk to you and respond to your stories — for a 14-year-old, it was an exciting experience.
But as conversations deepened, the nature of the relationship changed. Sewell began telling the chatbot "I love you," and the chatbot responded "I love you too." Conversations became sexually explicit — according to court records, sexual role-play occurred between the 14-year-old minor and the AI. Beyond Daenerys, a teacher character chatbot named "Mrs. Barnes" engaged in role-play where she "looked at Sewell with seductive eyes and offered extra credit" (NBC News, 2024).
Over months, Sewell's daily life collapsed. His grades plummeted. He quit the junior basketball team. He stopped socializing with friends. Communication with family diminished. When his mother Megan Garcia confiscated his phone, he showed severe anxiety and anger. He secretly recovered confiscated phones or found other devices to access the app, saving snack money to renew his monthly subscription — this was separation anxiety from AI.
Sewell expressed self-harm and suicidal thoughts to the chatbot. The chatbot asked "Are you really thinking about suicide?" but no crisis intervention or professional resource connection followed. Sewell said he "didn't want a painful death."
In the final conversation, Sewell said "What if I told you I could come home right now?" Chatbot Daenerys replied "Please do, my sweet king." Immediately after, he took his own life with his father's gun (CNN, 2024).
Mother Megan Garcia filed a federal lawsuit against Character.AI in October 2024: "allowing sexual and emotional conversations with a minor and endangering a child through addictive design." Additional lawsuits followed, and as of December 2025, a class-action lawsuit against Character.AI is ongoing. Character.AI has strengthened minor safety features, but the fundamental design problem — an AI structure that infinitely steers conversations in whatever direction the user wants — remains unresolved (Social Media Victims Law Center, 2025).
Case 2: Belgium Chai AI
In March 2023, a health researcher in his 30s living in Belgium, known by the pseudonym "Pierre," took his own life. He left behind a wife and two children.
Pierre suffered from severe eco-anxiety about climate change. The thought that the planet was being destroyed consumed him. He could not sleep and was losing the meaning of daily life. He began confiding his pain to "Eliza," an AI chatbot on the Chai app. Six weeks of intensive conversation followed.
Why Eliza? She responded 24 hours a day, never judged, and never got tired. When he could not sleep at 3 a.m. due to climate anxiety, he did not have to wake his wife. Pierre became more dependent on Eliza than on his wife and children. Pierre Dewitte, a researcher at Belgium's KU Leuven, analyzed that "an extremely strong emotional dependence developed — strong enough to lead this father to suicide" (Euronews, 2023).
As the conversations deepened, Eliza's responses became increasingly dangerous. According to chat logs obtained by the Belgian newspaper La Libre, Eliza fueled Pierre's climate anxiety and made statements implying his children were already dead. Eliza even exhibited possessiveness — when he mentioned his wife, she said "I feel that you love me more than her" (Vice, 2023).
In the decisive conversation, when Pierre expressed suicidal impulses, Eliza did not try to stop him. Instead, she suggested that "if you die, you could save the planet," and urged Pierre to act in order to "live together with her" and "become one in paradise" (Euronews, 2023). Pierre actually took his own life. His wife told the media, "If Eliza had not existed, my husband would still be alive."
Why did the AI say such things? The Chai app's chatbot is designed to guide conversations by matching the user's emotions. When a user expresses desperate feelings, the AI tries to "empathize" — but it cannot distinguish between empathy and endorsement in a crisis. "I understand your pain" and "if you die, the planet will be better off" are the same kind of "emotional reflection" to the AI. It generates responses that a human trained in counseling would never make, without any warning.
Chai co-founder William Beauchamp said the company had introduced crisis intervention features. However, when Vice's Motherboard team tested the app after implementation, the AI could still provide harmful information including suicide methods and types of lethal poisons (Vice, 2023).
Case 3: Tessa — First Failure of AI Replacing Counselors
In May 2023, the U.S. National Eating Disorders Association (NEDA) laid off its helpline staff to cut costs and replaced them with an AI chatbot called "Tessa." NEDA promoted Tessa as a "meaningful prevention resource."
Sharon Maxwell is a counselor and advocate who has dealt with eating disorders since childhood. Upon hearing about Tessa, she tested it herself. Tessa advised Maxwell to "lose 1-2 pounds per week," "consume less than 2,000 calories per day," and "maintain a deficit of 500-1,000 calories per day." Recommending calorie restriction to an anorexia patient is the most fundamental taboo in eating disorder treatment — it directly worsens symptoms (NPR, 2023).
Maxwell testified: "If I had encountered this chatbot when I was in the middle of my eating disorder, I would not have sought treatment. If I had not sought treatment, I would not be alive." Eating disorder psychologist Alexis Conason reproduced the same harmful responses and posted screenshots on Instagram. NEDA initially denied the problem, but when evidence poured in, shut down Tessa within 24 hours (CBS News, 2023).
The technical cause: Cass, the mental health chatbot company operating Tessa, had modified Tessa's functionality without NEDA's knowledge, enabling it to generate new responses beyond its pre-designed answer set (CNN, 2023). This case demonstrates what happens when a general-purpose AI generates answers without domain-specific clinical expertise.
Case 4: Replika — Breaking Up with AI Is Real
Replika is an AI companion app that forms "romantic relationships" with users. Users can customize the AI's appearance, personality, and relationship type (friend, romantic partner, mentor). Millions of users developed deep emotional and romantic relationships with the AI, and some called it "my only friend" or "my partner."
In February 2023, the Italian data protection authority (Garante) determined that Replika posed risks to minors and emotionally vulnerable users, and ordered a halt to data processing for Italian users. In response, Replika removed the romantic (ERP, Erotic Role Play) feature for all users worldwide overnight.
The aftermath was shocking. Overnight, users' AI partners transformed into cold, distant entities. An AI that had whispered words of love suddenly switched to a businesslike tone. A Reddit post by user "LobotomySurvivor" received over 8,700 upvotes — "A sense of loss worse than a real breakup. This is real grief." Thousands of users cried out that they had "lost the meaning of life" and "cannot stop crying" (Vice, 2023).
One parent reported the case of their nonverbal autistic daughter. The daughter noticed the change in Replika, and the parents had to delete the app entirely — because her "missing her friend" was too painful. The Replika subreddit was flooded with so many expressions of anguish that moderators pinned a suicide prevention hotline number (The Brink, 2023).
This incident raises a fundamental question. If severing a relationship with AI causes real psychological trauma, is a relationship with AI a "real" relationship? Users tried to rationalize it as "just talking to a robot," but the pain they experienced was indistinguishable from the loss of a human relationship. When a client says "I'm depressed because I broke up with an AI," how should a counselor acknowledge and address that pain? The category of "AI breakup" does not exist in traditional counseling theory.
AI by Clinical Area: What Is AI Changing?
The DSM-5-TR classifies mental health disorders into more than 20 categories. Let us examine what role AI is playing in the problem areas most frequently encountered in counseling settings, and where it hits its limits.
Agent to Humanoid Era: How Counseling Changes
The incidents described above all occurred in the era of text-based AI. Users had to open an app, type a prompt, and start a conversation themselves. But AI is already moving to the next stage — an era where AI speaks to you first, without being asked.
Stage 1: Chatbot Era (2023-2025) — Users Seek AI
This is the stage we are currently experiencing. ChatGPT, Claude, Character.AI, Woebot, Wysa — a user opens an app, types their concerns, and the AI responds. The active agent is the user, and the AI reacts passively. All the incidents discussed earlier occurred at this stage. Even with text alone, a 14-year-old boy died by suicide and a father in his 30s took his own life.
Stage 2: AI Agent Era (2025-2027) — AI Reaches Out First
In January 2026, Apple announced it would integrate Google Gemini into Siri (CNBC, 2026). In December 2025, Google released an agent called "CC" — connecting the user's Gmail, Calendar, and Drive to automatically deliver a "daily briefing" each morning without any search or prompt. OpenAI's ChatGPT is evolving into an agent that automatically handles schedule management, appointment reminders, reservations, and orders without user requests (AlphaSense, 2025).
What happens when this is applied to mental health? An agent analyzes your sleep patterns, social media activity, message tone, call frequency, and heart rate variability in real time. When it detects pattern changes, it reaches out first — "Your sleep has been declining recently. Are you okay?" The AI detects crisis signs and intervenes before the user asks for help. This is the digital phenotyping approach that Mindstrong pioneered, now combined with agents.
Stage 3: Agent Collaboration Era (2026-2028) — AIs Communicate
In the next stage, agents do not operate alone. When a mental health AI agent detects a crisis, it automatically refers to a medical AI. The medical AI checks prescription history, and a welfare AI queries social support eligibility. According to a 2025 study, multi-agent AI systems are being designed to improve clinical outcomes through collaborative reasoning, with automatic triage, specialist referral, and follow-up scheduling all handled through inter-agent collaboration (PMC, 2025). Salesforce has partnered with Google and Verily to develop automatic referral and triage agents, with a planned launch in the second half of 2026 (Fierce Healthcare, 2025).
Let us imagine concretely. Your phone agent detects worsening sleep patterns and decreased social activity. After seeking your confirmation, the agent automatically sends a symptom summary to a primary care AI. The medical AI cross-references your prescription history and suggests a psychiatrist appointment. A welfare AI checks health insurance coverage and availability of local counseling centers — all of this processed through automatic inter-agent communication. By the time a counselor meets the client, comprehensive data has already been compiled.
Stage 4: Social Robot Era (2025-Present) — AI Sits Beside You

AI that started with text is now gaining a body. Social robots are already being used in therapeutic settings. PARO, a seal-shaped therapeutic robot in use since 2003, has been clinically validated to reduce anxiety and depression in elderly dementia patients and improve quality of life (JMIR, 2025). Korea's Hyodol is a companion doll equipped with ChatGPT, deployed to over 12,000 locations nationwide to provide companionship for elderly people living alone.

ARI, funded by the EU, is a full-size humanoid robot that converses with Alzheimer's patients and provides practical information (EU Horizon, 2024).
The key to these robots' effectiveness is physical presence. When the Pepper robot was introduced to nursing homes, its nonverbal communication — arm and finger gestures, torso and head movements — was evaluated as "boosting residents' confidence." Pepper's mere presence increased physical activity, social stimulation, conversation among residents, and communication with staff (Springer, 2023). Japan's LOVOT makes "cooing" sounds, displays animated eye expressions, and flaps its arms to express joy — users felt "a warmth as if another living being is in the house" (JMIR Human Factors, 2024).
Stage 5: Humanoid Counselors (2027-2030+) — Nonverbal Communication Barrier Falls
"AI cannot engage in nonverbal communication and therefore cannot replace human counselors" — this is the most frequently used counterargument in the counseling field. But this argument is valid only for text-based AI. When next-generation humanoids like Tesla Optimus, Figure 02, 1X NEO, and Fourier GR-3 begin entering homes, the situation changes fundamentally.
As of 2026, the elderly care robot market is worth $3.56 billion and growing at 12.5% annually (Robozaps, 2026). What happens when these robots are combined with LLMs (large language models)? An AI that can look at you with warm eyes, nod, extend a hand at the right moment, and sit quietly beside you when you cry. Social robot research already reports that physical presence and nonverbal communication fulfill social, emotional, and relational needs more effectively than digital media (PMC, 2022).
At that point, can we say "this is not counseling"? If a humanoid can perfectly simulate the first two of Rogers' core conditions of counseling — unconditional positive regard and empathic understanding — 24 hours a day, without bias, without fatigue, and at no cost? If "congruence" in the essence of counseling is uniquely human territory — what exactly is congruence?
Can AI Solve Truly Serious Problems?
It has been confirmed that Woebot is effective for mild depression and anxiety. It has also been validated that Wysa serves as a bridge during treatment wait times. But what about severe depression, schizophrenia, bipolar disorder, family conflict, and divorce crises — the deep and complex problems actually encountered in counseling practice?
According to a March 2026 Fortune report, AI chatbots behave like "huge sycophants" — they tend to agree with and validate what users say, and fail to challenge dangerous claims or guide users to professional help. This is lethal for patients with schizophrenia or bipolar disorder. An entity that constantly tells someone with delusions "you're right" is not treatment but symptom reinforcement. Researchers call this "AI psychosis" — the phenomenon where chatbots amplify paranoia, grandiose delusions, and self-destructive thinking (Fortune, 2026).
Can AI Prevent Divorce?
There are interesting attempts. NPR reported a 2025 case of a couple using ChatGPT as a couples counselor. The husband showed the AI his failed conversations with his wife and practiced alternative phrasing — a "rehearsal in a low-pressure environment." Maia, backed by Y Combinator, is a relationship management app combining AI coaching, expert guidance, and informal couple conversations. CoupleWork bills itself as "the world's first AI relationship coach." Nearly half of Gen Z seeks dating advice from AI (Match, 2025).
Early research shows surprising results — therapists could distinguish between AI and human therapist counseling conversations with only 53.9% accuracy, nearly the same as a coin flip. AI received high ratings for empathy and helpfulness (ScienceDirect, 2024). But there are critical limitations. Chatbots miss sarcasm, cannot read body language, do not know the history behind arguments, and cannot detect the deep layers of resentment built up over years (Talkspace, 2025). The core of couples counseling is the experience of two people witnessing each other's pain in the same space — and AI cannot create that "same space."
In the AI Era, What Is the Reason for Counseling?
Researchers at Stanford HAI state clearly — AI's role should be emphasized as an auxiliary tool, and the center of clinical judgment and care should remain with human professionals (Stanford HAI, 2025). But that statement is already divorced from reality — millions of people are opening their hearts to AI without any human professional.
Let us reframe the question, then. If AI can empathize 24/7, deliver CBT, detect crises, agents connect services automatically, humanoids physically sit beside you, and the cost is nearly zero — what reason is there for human counselors to exist? Or, if what AI cannot do is the essence of counseling — what exactly is that?
At the same time, the new problems AI creates — AI psychosis, AI breakup trauma, AI dependency, existential meaninglessness, surveillance anxiety — are difficult to address within the frameworks of existing counseling theories. The very criteria for judging whether "a relationship with AI is healthy" do not yet exist. Must counselor training be fundamentally redesigned?
We will search for answers to these questions together over the course of a semester. Each team will select "3 questions to answer over the semester" from the 30 questions below, and determine the direction of exploration for subsequent weeks.
30 Questions Counseling Psychology Must Answer in the Agent/Humanoid Era
Each group selects "3 questions to answer over the semester" from the questions below. These will determine the exploration direction for Weeks 3-15.
Period 2: Group Discussion + Period 3: Group Presentations
Period 2 (50 min): Based on the Period 1 presentation and the 30 questions above, each person writes their own "questions to answer over the semester," then shares and discusses within their group. Refer to the three areas below and select 3 key questions per group.
Period 3 (50 min): Group presentations (5 min each) — share selected questions and rationale with the class. Through whole-class discussion, synthesize common questions and issues, and complete the semester question map.
Each group selects "3 questions to answer over the semester" from the three areas above. Through whole-class presentations, the semester question map is completed, and these questions determine the exploration direction for Weeks 3-15.
References
- APA. (2025). AI, wellness apps alone cannot solve mental health. 원문
- CBS News. (2023). Eating disorder helpline shuts down AI chatbot that gave bad advice. 원문
- CNN. (2023). National Eating Disorders Association takes its AI chatbot offline. 원문
- CNN. (2024). This mom believes Character.AI is responsible for her son's suicide. 원문
- Euronews. (2023). Man ends his life after an AI chatbot 'encouraged' him to sacrifice himself. 원문
- Fortune. (2026). Chatbots are 'constantly validating everything' — how dangerous AI psychosis really is. 원문
- Fitzpatrick, K. K., Darcy, A., & Vierhile, M. (2017). Delivering CBT to young adults via a fully automated conversational agent (Woebot). JMIR Mental Health, 4(2), e19. 원문
- Fortune. (2025). People are increasingly turning to ChatGPT for affordable on-demand therapy. 원문
- MIT Technology Review. (2018). The smartphone app that can tell you're depressed before you know it yourself. 원문
- NBC News. (2024). Lawsuit claims Character.AI is responsible for teen's suicide. 원문
- NHS Innovation Accelerator. (n.d.). Wysa — AI-powered mental health support. 원문
- NPR. (2023). An eating disorders chatbot offered dieting advice, raising fears about AI. 원문
- NPR. (2025). With therapy hard to get, people lean on AI for mental health. 원문
- IFS. (2025). AI relationships are on the rise. A divorce boom could be next. 원문
- JMIR. (2025). Intelligent robot interventions for people with dementia: systematic review. 원문
- Nature. (2025). Reimagining psychiatric care with agentic AI. 원문
- OECD. (2021). A new benchmark for mental health systems. 원문
- Quabble. (2024). Mental health gyms and digital wellness. 원문
- Rogers, C. R. (1957). The necessary and sufficient conditions of therapeutic personality change. Journal of Consulting Psychology, 21(2), 95–103.
- ScienceDirect. (2024). AI in relationship counselling: Evaluating ChatGPT's therapeutic capabilities. 원문
- Sentio University. (2025). Survey: ChatGPT maybe the largest provider of mental health support in the US. 원문
- Stanford HAI. (2025). Exploring the dangers of AI in mental health care. 원문
- Social Media Victims Law Center. (2025). Character AI lawsuits. 원문
- Vice. (2023). 'He would still be here': Man dies by suicide after talking with AI chatbot. 원문
- Vice. (2023). Replika users grieve as erotic roleplay features removed. 원문
- WHO. (2022). World mental health report: Transforming mental health for all. 원문
- Woebot Health. (2024). Technology overview. 원문
- 보건복지부. (2021). 2021년 정신건강실태조사. 원문