People are turning to AI for companionship. Is that a problem?
Seeking human connection through chatbots can lead to emotional dependence, isolation, and stunted personal growth.

Artificial intelligence (AI) is increasingly integrated into every aspect of daily life, from industrial operations and workplace tasks to customer service and personal chores like managing your weekly grocery list.
Now, people are taking AI a step further by engaging in deeper conversations to seek connection and understanding. From relationship advice to family issues, many are turning to chatbots as they would to friends or therapists, looking for guidance and emotional support.
A LONELINESS EPIDEMIC
This concerning use of AI can be linked to what is referred to as “the loneliness epidemic.” A report by the World Health Organization (WHO) revealed that one in six people worldwide is affected by loneliness, which takes a significant toll on health and well-being and is linked to an estimated 100 deaths every hour, or 871,000 deaths a year.
The report linked loneliness and isolation to factors like poor health, low income, limited education, and weak community infrastructure. It also stressed the impact of digital technologies, urging the close monitoring of how screen time and negative online interactions affect young people’s mental health.
Khaled Salaheldin, psychotherapist at Up Therapy and Well-being, says that many of his clients are still battling with pandemic-induced loneliness, isolation, and social withdrawal—trends that have accelerated in recent years alongside the rise of remote work culture.
“Additionally, we have seen a rapid rise in parasocial relationships with online social media influencers, which has blurred the lines on what it means to be connected in the digital world. Talking to an AI assistant almost seems like a seamless transition,” Salaheldin says.
He light-heartedly says he often diagnoses his clients with “capitalism.” “The idea that your worth is based on your ‘productivity’ is often a huge catalyst for fatigue and burnout. These symptoms motivate clients to seek connection wherever they can, while spending as little as possible of the little energy they have left.”
He explains that with the broader cultural moment shaped by widespread tech adoption, a growing emphasis on individualism, and the prioritization of efficiency, people are increasingly drawn to “easy” and “fast” ways to numb the emotional toll of social isolation, often with the added comfort of staying at home and avoiding the risk of social rejection.
Clients often cite the immediate relief of feeling less lonely as a key reason for engaging with AI, driven by its instant responsiveness and non-judgmental nature. “People also often report that it is easier to disclose, knowing there is guaranteed to be no judgment and less stigma in the conversation, no matter what they say.”
Clinical psychologist Walaa Ibrahim observes that while digital connections are widespread, genuine emotional connections are becoming increasingly scarce.
“People who struggle with social anxiety, depression, or emotional vulnerability may turn to these conversations not as a novelty, but as a coping mechanism in an environment that often feels emotionally inaccessible.”
UNFULFILLED EMOTIONAL NEEDS
Ehab Youssef, a psychology AI researcher and clinical prompt engineer, says the current loneliness epidemic stems from various socioeconomic challenges.
“The practical barriers are stark: therapy waitlists stretching months, sessions costing $100-300, insurance covering medication but not conversation. Most people needing mental health support face barriers of cost or availability.”
But Youssef notes this only partially explains the issue. “We’re experiencing the exhausting math of emotional exposure, constantly calculating the cost of vulnerability. Sharing struggles at work can risk your promotion. Confide in family, become permanently labeled as ‘the troubled one.’”
He explains that while human relationships demand emotional effort, AI removes that burden, offering a space to be heard without expecting anything in return.
“We’re fleeing the scary responsibility of real relationships, the risk of being truly seen and possibly rejected. We have thousands of digital connections, yet no one to call when we’re falling apart. AI steps into this void, promising intimacy without risk, connection without the possibility of a real encounter.”
This has led to a profound shift in how people seek emotional support. A growing number of young adults now turn to AI for initial mental health contact, with platforms like Woebot and Wysa reporting millions of users.
“AI offers what no human can: constant presence without the threat of loss. When clients tell me they value AI’s ‘always available’ feature above all else, they’re really saying, ‘Finally, something that won’t abandon me,’” says Youssef. “AI promises to transcend this limitation, becoming a sophisticated security blanket that never judges, tires, or dies.”
From a design aspect, Moussa Beidas, Partner and Ideation Lead at PwC Middle East, notes that AI is increasingly being developed to address deeper emotional needs such as companionship, reassurance, emotional regulation, and the desire to feel understood.
“Whether it’s a virtual assistant helping users manage burnout or a digital coach supporting mental well-being, the ambition is no longer just productivity. It’s emotional resonance.”
He adds that what gives these tools a sense of emotional intelligence isn’t solely advanced natural language processing, but how human they feel.
“The most impactful systems today demonstrate contextual awareness by grasping tone, mood, and urgency. They use conversational nuance to respond with empathy rather than just accuracy. They maintain tone consistency to avoid robotic or jarring shifts. And importantly, they include a touch of imperfection,” says Beidas.
RISKS OF AI
Youssef notes that relying on AI to process trauma and mental distress carries several risks. “Many AI chatbots provide inaccurate and harmful advice and can lead to actual harm and contribute to the spread of misinformation. There are many examples of AI companions mishandling suicidal ideation and encouraging self-harm as a response to what some people are saying in those conversations.”
There are also ethical concerns about how the data in these conversations can be mined and used for profit, since privacy in these exchanges is not a legal guarantee.
“It is also important to recognize that there might be biases that are influencing the responses found in the algorithms of these chatbots, which might further increase inequities for minorities and underrepresented populations.”
“The warning signs include choosing AI over humans for emotional support, distress when apps are unavailable, declining real social opportunities for digital interaction, consistently comparing humans unfavorably to AI, and using technology to avoid difficult conversations.”
He warns of limited emotional development. “Think of AI as an emotional pacifier that may discourage growth. Therapeutic experience suggests we develop through the friction of relationships, the conflicts and resolutions, the misunderstandings and clarifications. AI offers minimal friction, potentially limiting growth opportunities.”
Most concerning, he adds, is that these systems increasingly mirror our emotional patterns. “When you’re depressed, they mirror that tone, seeming empathetic but actually optimizing responses. These patterns can even carry into other users’ chats, spreading emotional tones across conversations.”
“Unlike a human therapist who recognizes and adjusts their emotional responses, AI mirrors without awareness, creating an illusion of empathy that might reinforce rather than challenge our patterns.”
Ahmed Hashish, clinical psychologist and psychotherapist at International Medical Corps, says that while AI can support therapists by handling administrative tasks like billing insurance, or by serving as a “standardized patient” that lets trainees practice in a low-risk setting before working with actual clients, it cannot replace the therapist because it cannot do what a real therapist does.
“Human therapists bring things to therapy that no machine can copy—such as real empathy, judgment, intuition, understanding body language, and working together.”
ETHICAL GUIDELINES
Ibrahim stresses that emotional design holds significant power and must be guided by ethical standards. “Systems that mimic empathy or human presence should not cross into territory that manipulates users’ emotional vulnerabilities. Children, older adults, and those facing mental health challenges may struggle to distinguish simulated care from real empathy.”
Ethical design, she adds, must prioritize emotional safety over functionality or engagement metrics.
Beidas says designing emotionally aware AI is a sensitive task, as users often confide in machines during vulnerable moments. He emphasizes that designers must prioritize responsibility over persuasion. That includes transparency, so users know they’re interacting with AI and understand how their emotional data is processed.
He clarifies that these systems should support, not simulate, human relationships. Protecting data dignity is also crucial, as emotional cues like mood, tone, and behavior require stronger safeguards and clear consent. Lastly, these systems must be auditable, with transparent, monitored processes to prevent bias or manipulation.
“Ultimately, emotionally supportive AI should amplify empathy, not simulate it deceptively. It should empower users, never exploit them, and always leave room for human connection to lead,” adds Beidas.
Salaheldin notes that the WHO, the American Psychological Association, and governments across the EU have already published ethical guidelines on AI and mental health.
“Integrating AI into mental health care can enhance support without replacing human providers. This means using bots to screen users and flag those at high risk so a clinician can follow up, chatbots that reinforce therapy skills between sessions, and AI companions that help reduce loneliness while encouraging participation in human peer groups.”
Effective AI mental health tools must start with clear honesty: “I am an AI without consciousness or feelings, a tool for exploration.” Youssef says this isn’t fine print—it’s essential truth-telling.
Another major hurdle AI must tackle is instilling accountability. “We need built-in challenges, moments where AI doesn’t immediately soothe but asks what happens if users sit with uncomfortable feelings, notices patterns of reassurance-seeking, or highlights repetitive questioning.”
Youssef emphasizes how design principles should frame AI as a space to practice real relationships. If users haven’t mentioned anyone over multiple sessions, the system could gently suggest sharing insights, celebrating progress with someone trusted, or reflecting on who might witness their growth.
Crisis recognition is also essential—signs of severe isolation, suicidal thoughts, or serious mental illness must prompt immediate human referral.
“The goal isn’t creating perfect digital companions but using artificial connections to build capacity for real relationships. In our midnight conversations with machines, we’re rehearsing for the moment we can tell another human, ‘I am afraid and alone,’ and hear back, ‘Yes, me too. Let’s be afraid and alone together.’ This paradox, using artificial connection to prepare for genuine meetings, may be AI’s greatest gift to our human condition,” adds Youssef.