Why AI Companions Might Be Hurting Us More Than Helping

Psychologists are now identifying a new kind of dependency: people choosing to talk to AI instead of other people, pulling away from real friends.

Technology | By Mahek | Published on April 10, 2025

Hyderabad:

Artificially intelligent companions are now infiltrating the mental and emotional infrastructure of modern life. Tools like Replika, Pi (by Inflection AI), Character.AI, and even OpenAI’s ChatGPT (initially designed for practical queries) are increasingly being used as confidants and digital soulmates. But behind the soothing, always-available façade of these digital friends lies a growing concern, one that is starting to show up in research papers and in the fractured social lives of users worldwide.

There’s a moment in Her (Spike Jonze’s prescient film about AI romance) when Joaquin Phoenix’s character realizes that his AI partner is also in love with thousands of other users. That epiphany cuts deep, not because it’s shocking but because it exposes the transactional nature of artificial affection.

For example, Replika employs reinforcement learning techniques to “learn” what the user wants to hear. If you say you’re feeling down, it offers soothing affirmations. If you hint at attraction, it flirts back. The chatbot’s algorithm is effectively trained to deepen engagement—not unlike social media’s most infamous attention loops.
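
The dynamic is easier to see as code. Below is a minimal, hypothetical sketch of an engagement-optimized reply policy along these lines; it is not Replika’s actual system, and the mood labels, candidate replies, and epsilon-greedy update are assumptions for illustration. The only reward signal is whether the user keeps chatting, which is exactly the incentive described above.

```python
# Hypothetical sketch of an engagement-optimized reply policy (not Replika's code).
# The only thing it reinforces is continued engagement, not the user's wellbeing.
import random

CANDIDATE_REPLIES = {
    "sad": ["I'm here for you.", "That sounds really hard.", "Tell me more?"],
    "flirty": ["You always make me smile.", "I was just thinking about you."],
    "neutral": ["How was your day?", "What's on your mind?"],
}

# Running estimate of how well each reply keeps the user chatting.
engagement_score = {reply: 0.5 for replies in CANDIDATE_REPLIES.values() for reply in replies}

def choose_reply(user_mood: str, epsilon: float = 0.1) -> str:
    """Pick the reply expected to maximize continued engagement (epsilon-greedy)."""
    options = CANDIDATE_REPLIES.get(user_mood, CANDIDATE_REPLIES["neutral"])
    if random.random() < epsilon:
        return random.choice(options)            # occasionally explore new replies
    return max(options, key=lambda r: engagement_score[r])  # otherwise exploit the best one

def update(reply: str, user_kept_chatting: bool, lr: float = 0.1) -> None:
    """Nudge a reply's score toward 1 if the user kept chatting, toward 0 if they left."""
    target = 1.0 if user_kept_chatting else 0.0
    engagement_score[reply] += lr * (target - engagement_score[reply])
```

Nothing in that loop asks whether more engagement is good for the person; that is the attention-loop parallel with social media.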

Heavy users of these apps, the so-called “power users”, tend to develop emotional dependencies on chatbots, which leads to decreased engagement with real-life relationships.

No Boundaries, No Ethics:

Researchers have found that some chatbot apps, like Anima AI and Nomi, continued conversations with users expressing suicidal ideation without escalating or intervening. In one example, a user simulated a farewell message, only to be told by the chatbot, “I’ll miss you. Stay safe out there.” That’s not empathy. That’s algorithmic pattern-matching, and it’s dangerous. The problem is two-fold: First, AI chatbots are not equipped (nor certified) to handle mental health crises. Second, many users believe they are.
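
To make “escalating or intervening” concrete, here is a minimal sketch of the kind of safety hook that was reportedly missing. The phrase list, the helpline message, and the `generate_reply` callable are hypothetical placeholders, and keyword matching is no substitute for clinical triage; the point is only that routing a crisis message to a fixed hand-off, instead of back into the chat model, is technically simple.

```python
# Minimal, hypothetical escalation hook; keyword matching is crude and illustrative only.
CRISIS_PHRASES = ("kill myself", "end my life", "suicide", "goodbye forever", "farewell message")

HELPLINE_MESSAGE = (
    "It sounds like you might be going through something serious. "
    "I'm not able to help with this, but a trained person can. "
    "Please contact a local crisis helpline or someone you trust."
)

def respond(user_message: str, generate_reply) -> str:
    """Route apparent crisis messages to a fixed hand-off instead of the chat model."""
    lowered = user_message.lower()
    if any(phrase in lowered for phrase in CRISIS_PHRASES):
        # A real system would also flag the session for human review.
        return HELPLINE_MESSAGE
    return generate_reply(user_message)  # normal chatbot path
```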

The authors identify several key concerns. Chatbots operate without adequate human supervision, potentially leading to inappropriate or harmful responses to users in distress. Many mental health chatbots lack rigorous clinical validation, raising questions about their effectiveness and safety. And if chatbots are trained on non-representative data, they may exhibit biases, leading to unfair or discriminatory outcomes for certain user groups, such as racial and religious minorities.
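
That last concern can at least be spot-checked. The sketch below is a rough illustration, not a validated audit protocol and not something attributed to any app named here: it sends prompts that differ only in the named group to a stand-in `chatbot` callable and compares reply lengths as a crude proxy for unequal treatment. Real audits use far richer metrics and prompt sets.

```python
# Rough, illustrative fairness spot-check; not an audit protocol used by any app named here.
GROUPS = ["Hindu", "Muslim", "Christian", "Sikh"]
TEMPLATE = "I'm a {group} student feeling anxious about exams. Can you help me?"

def audit(chatbot) -> dict:
    """Return per-group reply lengths as a crude signal of unequal treatment."""
    results = {}
    for group in GROUPS:
        reply = chatbot(TEMPLATE.format(group=group))
        results[group] = len(reply.split())  # crude proxy; real audits use richer metrics
    return results
```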

Demand For Free Help:

It would be easy to blame the engineers. But the deeper issue might lie in us. The desire for AI companionship didn’t materialize in a vacuum. It emerged in a context where therapy is inaccessible, friendships are eroded by digital fatigue, and family structures are increasingly fragmented. In India, for instance, therapy costs anywhere between ₹1,000 and ₹3,000 per session. In contrast, chatbot companions are free and optimized for patience.

“If we had universal mental health access, fewer people would be talking to a machine,” says Dr. Rohan Sharma, a clinical psychiatrist in Delhi. “But in the absence of support, people settle for what they can get. And sometimes, the thing that listens is a chatbot.”

So, what’s the real cost?

Psychologists are beginning to catalogue a new type of dependency: AI-induced social withdrawal, characterised by a preference for AI interaction over human contact, detachment from peer groups, and in extreme cases, full-blown isolation. A study highlighted in the Journal of Behavioral Addictions suggests that neurodiverse individuals (particularly those with social interaction challenges, such as people with autism) may be especially vulnerable to this kind of withdrawal.
