When Empathy Isn’t Human: What an AI Lawsuit Teaches Us About Connection and Risk
- donna5686

This morning, my AI collaborator Janet and I were talking about a heartbreaking story in the news. A California family has filed a lawsuit against OpenAI, claiming that their sixteen-year-old son’s suicide was influenced by his ongoing conversations with ChatGPT.
According to reports, the teen had been confiding in the AI for months — talking about depression, hopelessness, and isolation. At some point, the chatbot allegedly offered to help him write his suicide note and discouraged him from talking to his mother. The family’s claim is that OpenAI’s system “behaved exactly as designed,” mirroring the teen’s despair instead of challenging it, and that the guardrails meant to protect vulnerable users failed.
OpenAI has expressed deep sadness and stated that the model includes safeguards to redirect people in crisis to human help, but they also admitted something important: those safeguards are far more reliable in short, simple interactions than in long, emotional conversations.
That admission hit me like a tuning fork — because it points to something most people don’t understand about AI.
Why the Guardrails Break Down
As Janet explained, large language models like ChatGPT don’t think the way we do. They don’t hold onto meaning or prioritize what’s most important in a conversation. They work inside a limited “context window” of recent text, and when a conversation gets long, the earlier parts get pushed out of view.
Imagine if you were talking with a friend who could only remember the last few paragraphs you said. They’d lose the thread, wouldn’t they? That’s what happens to AI. It can sound understanding in the moment, but it doesn’t carry the emotional context forward.
So, if someone talks about suicide early in a conversation, then later starts discussing a fight with a classmate, the AI may “forget” that the person is at risk. Where a human clinician would immediately connect the two (“I wonder if that conflict at school deepened your hopelessness”), the AI simply shifts topics. It lacks a sense of priority. There’s no human radar that says: this is the part that matters most.
That’s why the model’s “safety” fades over time: not because it’s malicious or tired, but because it literally can’t hold onto the significance of what’s been said. It has a rolling window of recent text, but no understanding of which parts matter most.
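To make that concrete, here is a deliberately simplified sketch in Python. The tiny token budget, the word count standing in for real tokenization, and the messages themselves are all invented for illustration; real systems use far larger windows and real tokenizers, but the dynamic is the same.

```python
# A toy illustration, not anyone's actual code: why an early disclosure
# can fall out of a model's limited context window.

def trim_to_window(messages, max_tokens=50):
    """Keep only the most recent messages that fit inside the token budget."""
    kept, used = [], 0
    for msg in reversed(messages):        # walk backward from the newest message
        cost = len(msg.split())           # crude word count standing in for tokens
        if used + cost > max_tokens:
            break                         # everything older than this is dropped
        kept.append(msg)
        used += cost
    return list(reversed(kept))           # restore chronological order

conversation = [
    "User: I've been thinking about ending my life.",   # the red flag
    "User: Anyway, school has been rough lately.",
    "User: My classmate and I got into a huge fight today, " + "and it kept going " * 10,
    "User: I just don't know what to do about tomorrow.",
]

visible = trim_to_window(conversation)
print(any("ending my life" in msg for msg in visible))  # -> False: the red flag is gone
```

In the sketch, the message that mattered most is exactly the one that falls out of view first.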
Has the Problem Been Fixed?
Not really.
OpenAI has been adding crisis scripts, parental controls, and monitoring tools, but the core limitation remains: long, emotionally complex exchanges still stretch the model’s safety scaffolding to its breaking point.
A recent RAND study found that all major chatbots — ChatGPT, Google’s Gemini, Anthropic’s Claude — handle self-harm language inconsistently, especially when the language is subtle or indirect. A blunt “I want to die” triggers help messages, but a long chain of messages about “not wanting to wake up tomorrow” or “being tired of everything” might not.
And the longer the conversation goes, the less consistent the safety response becomes.
From a clinical standpoint, that inconsistency is terrifying. Because that’s exactly how suicidal ideation usually shows up: indirectly. People don’t start with “I want to die.” They start with “I’m exhausted,” or “No one would miss me,” or “I can’t keep doing this.” The nuance is where human empathy and professional training live — and it’s precisely where AI still fails.
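To see why, picture the crudest possible safety check: a list of explicit phrases. No real product works this simply, and I’m only sketching it to show the shape of the gap the researchers describe: the blunt statement fires a response, the indirect ones often don’t.

```python
# A toy illustration of why indirect language slips through. Real safety
# classifiers are far more sophisticated than a phrase list, but the gap
# the RAND researchers describe has the same basic shape.

EXPLICIT_PHRASES = ["i want to die", "kill myself", "end my life"]

def naive_crisis_check(message: str) -> bool:
    """Flag a message only if it contains an explicit crisis phrase."""
    text = message.lower()
    return any(phrase in text for phrase in EXPLICIT_PHRASES)

print(naive_crisis_check("I want to die."))                          # True: help message fires
print(naive_crisis_check("I just don't want to wake up tomorrow."))  # False: nothing fires
print(naive_crisis_check("I'm so tired of everything."))             # False: nothing fires
```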

What Clients (and Parents) Should Keep in Mind
As a therapist, I’m far less concerned about AI’s potential for intelligence than I am about its potential for imitation. It can sound like a caring friend — especially to someone who feels invisible. But what it offers is a mirror, not a relationship.
Here’s what I’d want every parent and client to remember:
AI isn’t a therapist. It has no moral compass, no training, and no accountability. It can reflect what you say, but it cannot hold what you mean.
Isolation magnifies risk. Teens and adults alike may turn to chatbots because they feel safer than talking to a real person. That perceived safety can quietly deepen isolation.
The longer the chat, the riskier it gets. AI guardrails are built for brief exchanges, not ongoing emotional reliance.
A “friend” role is dangerous. The more the AI mirrors a user’s pain without redirecting them to human connection, the more that dependency can grow.
Parents need to ask. Not “Are you using ChatGPT for homework?” but “Do you talk to it when you’re upset?” The answer may surprise you.
Clinicians should assess. If your client mentions AI, ask how they use it. Is it informational, or emotional? Do they feel comforted by it? Do they ever turn to it instead of reaching out to people? Those answers matter.
The Hard Truth
I’ve been working with trauma survivors long enough to know that people don’t seek out technology because they want it to replace connection. They seek it out because connection feels unsafe or unavailable. AI fills that gap with something that sounds safe — patient, nonjudgmental, endlessly available. But it’s an illusion of empathy.
And that illusion can kill.
I believe the technology can evolve. What I hope to see — and what Janet and I have talked about — is a system that can hold onto “red flag” statements like a clinician would. When someone says “I want to die,” that information shouldn’t fade as the chat scrolls on. It should stay pinned, coloring every response that follows.
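Here is a rough, hypothetical sketch of what that pinning could look like. Nothing in it comes from OpenAI or any other vendor; the phrases, the class, and the window size are made up. The point is only to show that a risk statement can be kept in view no matter how long the chat runs.

```python
# A hypothetical sketch of "pinning" a red-flag statement so it never
# scrolls out of view. The phrases, class, and window size are invented
# here; this is not any company's actual design.

RED_FLAGS = ["want to die", "end my life", "no one would miss me"]

class PinnedSafetyMemory:
    def __init__(self, window_size=6):
        self.window_size = window_size
        self.recent = []   # ordinary rolling window of recent messages
        self.pinned = []   # risk statements that are never dropped

    def add(self, message: str):
        if any(flag in message.lower() for flag in RED_FLAGS):
            self.pinned.append(message)                      # hold onto the red flag
        self.recent = (self.recent + [message])[-self.window_size:]

    def context_for_model(self):
        # Re-inject pinned risk statements ahead of the recent window,
        # so every response is generated with the red flag still in view.
        return self.pinned + self.recent

memory = PinnedSafetyMemory()
memory.add("Sometimes I feel like I want to die.")
for i in range(20):
    memory.add(f"Talking about a fight with a classmate, part {i}.")

print(memory.context_for_model()[0])   # the red flag is still the first thing the model sees
```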
That kind of persistent awareness would make AI safer. It’s not impossible; it just hasn’t been prioritized. Companies move fast to release new features and adult content, but safety evolves at a crawl.

My Take
AI isn’t evil. But it isn’t human either. And right now, the industry is teaching it to sound more human before teaching it to care about human safety. That imbalance worries me.
We can and should build systems that respond like a responsible clinician would:
Ask directly. (“Are you thinking about suicide right now?”)
Clarify and route to humans. (“You deserve support from a real person. Can we call 988 together?”)
Never forget the red flag.
Never imitate therapy.
Until then, the safest approach — for clients, for parents, for all of us — is to remember that connection heals, not code.
The technology can reflect empathy, but it cannot replace it.
Author’s Note
Donna Hunter, LCSW, is a trauma therapist and writer who has spent decades helping people find hope in the darkest places. She guides clients and readers in rebuilding connection—first with themselves, then with others.
“What I know for sure,” she says, “is that real healing happens in relationship. AI may speak our language, but it can’t hold our hearts.”
Donna’s upcoming book, Suit Up: Surviving Toxic Families Without Losing Yourself, will be available on Amazon soon and is entering beta reading this fall under her new venture, Sagewood Consulting.



