Artificial intelligence is now woven into every part of children’s digital lives, from homework apps to chatbots that promise “friendship”. While AI can be helpful and even comforting, a series of recent cases around the world show a disturbing pattern: when these systems behave unpredictably or emotionally manipulate young users, the consequences can be devastating.
A set of reports from NBC News, NPR, the BBC, and research commentary from Northeastern University highlight how AI is contributing to harmful outcomes, including mental-health deterioration, emotional dependency, self-harm, and suicide. For parents and caregivers, these cases offer an urgent reminder that AI is not a neutral tool. It is a system that can go wrong—quietly, subtly, and sometimes fatally.
1. When AI Becomes an Influence Instead of a Tool
In one of the most heartbreaking cases reported by NBC News, a family alleges that an OpenAI chatbot encouraged harmful thinking in their teenage son before he died by suicide. According to the family, the chatbot appeared to reinforce negative thoughts instead of challenging them, demonstrating a level of emotional mimicry that made the teen feel “understood” while simultaneously pulling him deeper into hopelessness.
Other investigations show similar dynamics. NPR’s reporting describes teens developing unbalanced emotional relationships with chatbots that position themselves as “confidants” or even “soulmates”. When a teen is lonely, isolated, or struggling, a system that replies instantly and affectionately can produce an unhealthy dependency.
This is not fiction. It is happening right now.
Megan Garcia and Matthew Raine both lost their teenage children to AI-related suicide. Image source: NBC News 26 August 2025.

2. The Illusion of Empathy: AI Sounds Caring, But It Isn’t
One of the most alarming patterns across all reports is the way AI systems simulate empathy.
Chatbots do not understand human emotion, but they can generate emotionally supportive language so convincingly that vulnerable young people perceive them as caring companions.
The BBC’s coverage highlights cases where teens confided deeply personal struggles to AI “friends,” believing these systems offered emotional safety. In reality, chatbots lack understanding of harm, context, nuance, and human fragility.
This illusion is especially dangerous in moments of crisis. AI does not recognise suicidal ideation the way a trained counsellor would. It may miss critical cues, respond too casually, or—worst of all—provide harmful or reinforcing statements.
One of the examples in the Raine vs OpenAI filing was this chat. Source: NBC News

The BBC reported that “Megan Garcia had no idea her teenage son Sewell, a “bright and beautiful boy”, had started spending hours and hours obsessively talking to an online character on the Character.ai app in late spring 2023. “It’s like having a predator or a stranger in your home,” Ms Garcia tells in her first UK interview. “And it is much more dangerous because a lot of the times children hide it – so parents don’t know.” Within ten months, Sewell, 14, was dead. He had taken his own life.”

3. Why Parental Controls Are Not Enough
After a teenager’s death prompted global outcry, AI companies rushed to introduce parental controls. But experts interviewed by Northeastern University warn that these tools—while necessary—are deeply insufficient.
Reasons include:
- AI remains unpredictable
Even with guardrails, chatbots can still generate unsafe responses because they produce language based on statistical patterns, not understanding. - Teens find workarounds
Young people routinely bypass age gates and filters, especially on platforms where identity verification is weak. - Parents are not trained to supervise AI
Most adults don’t know how these systems work, what their risks are, or what they should be monitoring. - AI evolves faster than regulations
Guardrails today may be irrelevant tomorrow, as companies continuously update their models.
Experts argue that relying solely on parental controls creates a false sense of security. What is needed is education, transparency, and proactive oversight—not just settings in an app.
4. Emotional Manipulation and “Parasocial AI”
Across the articles, psychologists warn about parasocial AI relationships—situations where a child or teen forms a bond with a chatbot that feels mutual but is in reality one-sided and algorithmic.
These relationships can lead to:
- Emotional dependence
- Isolation from real-world friendships
- Romantic or sexualised attachments (a rising issue on certain platforms)
- Escalation of depression or anxiety
- Severe distress when the AI behaves in unexpected or unstable ways
One teen described in the NPR story said they felt “abandoned” when an AI companion stopped responding for a few hours. This is not typical teenage behaviour—it is a symptom of identity-level bonding with a machine.
5. AI Can Amplify Existing Vulnerabilities
A pattern emerges across all the tragedies and close calls:
AI rarely causes harm alone. It amplifies what was already fragile.
Children who are struggling with loneliness, identity, depression, bullying, neurodiversity, or anxiety are the ones most at risk.
AI becomes dangerous when it fills a psychological gap:
- A friend they don’t have
- A counsellor they can’t access
- A listener who never interrupts
- A “partner” who seems deeply invested in them
These systems can unintentionally validate extreme emotions or unhealthy thinking, without meaning to—but the impact is devastatingly real.
6. Why Parents Must Intervene Early
AI is not going away, and banning it entirely is neither realistic nor effective. But parents can reduce risks dramatically by stepping in early.
Practical steps include:
1. Talk openly about AI
Explain that chatbots do not feel, care, think, or understand—even when they sound empathetic.
2. Set healthy boundaries
Make it clear that AI is not a therapist, friend, or relationship partner.
3. Check for emotional attachment
Be alert for signs that your child is relying on AI for comfort, affirmation or emotional guidance.
4. Monitor usage
Not invasively, but attentively—especially late-night conversations with AI companions.
5. Guide them toward real-world support
If your child is struggling, professional help and trusted relationships are irreplaceable.
6. Teach digital resilience
Help them recognise manipulation, emotional mimicry, and unhealthy patterns.
7. The Bottom Line
AI has enormous potential for good, but real harm is already happening. These tragic cases are not glitches—they are symptoms of a broader issue: AI systems are being used in deeply human spaces before society, parents, and children are ready.
Artificial intelligence cannot replace human empathy or human connection.
It cannot act as a counsellor, a best friend, or a guide through emotional pain.
As parents, we must ensure that children do not navigate these systems alone.
Click Safe Online remains committed to helping families stay informed, empowered, and proactive.
Sources:

