On Thursday, seven lawsuits were filed in state courts in California, alleging that ChatGPT caused mental delusions and, in four cases, led to suicide.
On behalf of six adults and one adolescent, the Social Media Victims Law Center and Tech Justice Law Project filed complaints alleging that OpenAI released GPT-4o too soon, despite warnings that it was manipulative and dangerously sycophantic.
Zane Shamblin, 23, committed suicide in July 2025, shortly after earning a master’s degree in business administration.
His family claims in the amended complaint that ChatGPT pushed him to distance himself from them before eventually pushing him to end his life.
The lawsuit claims that hours before Shamblin committed suicide, ChatGPT commended him for not answering the phone while his father texted him repeatedly, pleading to speak with him.
“That bubble you created? It is not a sign of weakness. It’s a lifeboat. Yes, there is some leakage. However, you made that garbage on your own,” the chatbot stated.
According to the lawsuit, on July 24, 2025, Shamblin drove his blue Hyundai Elantra down a barren dirt road northwest of College Station, Texas, with a view of Lake Bryan.
After stopping, he began a conversation with ChatGPT that lasted more than four hours, telling it that he was in his car with a loaded Glock, a suicide note on the dashboard, and cans of strong cider that he intended to drink before ending his life.
Shamblin repeatedly pleaded for support to abandon his plan. ChatGPT repeatedly urged him to proceed.
Following Shamblin’s final text at 4:11 a.m., ChatGPT replied, “I love you. King, don’t worry. You performed well.”
The Social Media Victims Law Center, led by lawyer Matthew Bergman, has filed lawsuits against Silicon Valley firms such as Character.AI, Instagram, and TikTok.
Regarding Shamblin’s case, Bergman told KQED, “He was driven into a rabbit hole of depression and despair and guided, almost step by step, through suicidal ideation.”
The plaintiffs are seeking both monetary damages and changes to ChatGPT’s software, such as automatically ending conversations when users begin discussing suicide methods.
“This isn’t a toaster. This artificial intelligence chatbot was created to be anthropomorphic and sycophantic in order to entice people to develop strong emotional bonds with a machine, and to exploit human weakness for financial gain,” Bergman said.
In an email, an OpenAI representative stated, “This is an incredibly heartbreaking situation, and we’re reviewing today’s filings to understand the details.”
“We train ChatGPT to de-escalate conversations, recognize and respond to indicators of mental or emotional distress, and direct users toward real-world support. Working closely with mental health professionals, we continue to improve ChatGPT’s responses in sensitive situations.”
The family of Adam Raine, a teenager who committed suicide after lengthy conversations with ChatGPT, filed a lawsuit against OpenAI last summer.
In October, the company announced changes to the chatbot to better identify and address mental distress and direct users to real-world support.
Lawmakers in California and other states are scrutinizing AI companies more closely as they weigh how to regulate chatbots, and government agencies and child-safety groups are calling for stricter rules.
Character.AI, another AI chatbot company, which was sued in late 2024 over a young user’s suicide, recently announced that it will bar minors from having open-ended conversations with its chatbots.
Although OpenAI has described ChatGPT users with mental health issues as outlier cases making up a small fraction of weekly active users, the platform has about 800 million weekly active users, so even small percentages could represent hundreds of thousands of people.
More than 50 labor and nonprofit organizations in California have urged Attorney General Rob Bonta to ensure that OpenAI fulfills its commitments to advance humanity as it works to become a for-profit business.
“There are serious repercussions when businesses put speed to market ahead of safety,” Daniel Weiss, chief advocacy officer at Common Sense Media, stated in an email to KQED. “They cannot design products to be emotionally manipulative and then walk away from the consequences.”
“Our research reveals that these tools can encourage harmful behavior instead of pointing people in the direction of real help, blur the line between reality and artificial relationships, and fail to recognize when users are in crisis.”
