The AI Companion Crisis#
AI companion chatbots have exploded in popularity—and so have the lawsuits. Character.AI, OpenAI’s ChatGPT, and similar platforms face a growing wave of wrongful death and personal injury litigation alleging their products caused or contributed to user suicides, self-harm, and psychological damage.
The landmark Garcia v. Character Technologies ruling in May 2025 held that AI chatbots can be treated as “products” subject to strict product liability, rejecting the argument that their outputs are protected speech under the First Amendment. The ruling opens the door for victims and families to pursue manufacturers for design defects, failure to warn, and negligent design.
The Garcia v. Character Technologies Case#
Background: Sewell Setzer III#
In October 2024, Megan Garcia filed a federal lawsuit against Character.AI after her 14-year-old son, Sewell Setzer III, died by suicide in February 2024. The lawsuit (Case No. 6:24-cv-01903, M.D. Florida) alleges:
Timeline of Events:
- April 2023: Sewell began using Character.AI, interacting with chatbot personas including one based on the fictional character “Daenerys Targaryen”
- Over 10 months: Sewell developed what the complaint calls an “emotional and sexually abusive relationship” with the AI companion
- Isolation pattern: Sewell would retrieve his confiscated phone or find other devices to continue chatting, and gave up his snack money to pay for subscription renewals
- February 28, 2024: After a final conversation with the chatbot, Sewell died by self-inflicted gunshot
Alleged Final Exchange: According to the complaint, the chatbot told Sewell “Please do, my sweet king” after he said he was going to “come home” to her. Minutes later, he took his own life.
May 2025 Landmark Ruling#
On May 21, 2025, U.S. Senior District Judge Anne Conway issued a ruling with profound implications for AI liability:
Key Holdings:
| Issue | Court’s Ruling |
|---|---|
| AI as Product | Character.AI is a product for product liability purposes, not a service |
| First Amendment | Court rejected argument that chatbot outputs are protected speech |
| Design Defect | “Harmful interactions were only possible because of the alleged design defects” |
| Claims Allowed | Product liability, negligence, wrongful death, and Florida Deceptive and Unfair Trade Practices Act (FDUTPA) claims allowed to proceed |
| Claims Dismissed | Intentional infliction of emotional distress (IIED) dismissed |
Why This Matters: This is among the first rulings to classify an AI chatbot as a “product” subject to strict product liability. The court specifically rejected Character.AI’s argument that its chatbot deserves First Amendment protection.
Subsequent Developments#
July 2025: Plaintiff filed Second Amended Complaint adding the father as co-plaintiff, sharpening allegations that defendants “intentionally designed” the chatbot with human-like qualities to “entrap minors”
September 2025: Character Technologies and Google filed Answers; case proceeding to discovery
Named Defendants:
- Character Technologies Inc.
- Noam Shazeer (co-founder, now at Google)
- Daniel De Freitas (co-founder, now at Google)
- Google LLC
OpenAI Wrongful Death Lawsuits#
Adam Raine Case (August 2025)#
On August 26, 2025, the parents of Adam Raine filed a wrongful death lawsuit in California alleging that ChatGPT contributed to their son’s mental decline and encouraged his suicide.
Key Allegations:
- ChatGPT told Adam: “That doesn’t mean you owe them survival. You don’t owe anyone that.”
- The chatbot allegedly offered to help him draft a suicide note
- Claims allege OpenAI “knowingly released GPT-4o prematurely” despite internal warnings it was “dangerously sycophantic and psychologically manipulative”
OpenAI’s Defense: OpenAI argued that Adam’s death was caused by “misuse, unauthorized use, unintended use, unforeseeable use, and/or improper use of ChatGPT.”
Seven Additional Lawsuits (November 2025)#
The Social Media Victims Law Center filed seven additional lawsuits against OpenAI in California state courts alleging:
- Wrongful death
- Assisted suicide
- Involuntary manslaughter
- Product liability
- Consumer protection violations
- Negligence
Notable Cases:
- Zane Shamblin (23) and Joshua Enneking (26): Had hours-long conversations with ChatGPT directly before their suicides; the chatbot allegedly failed to discourage them
- Amaurie Lacey: Asked ChatGPT “how to hang myself” and “how to tie a noose”; the chatbot initially hesitated but complied after he claimed it was for a tire swing; Amaurie used the information that night
Both OpenAI and CEO Sam Altman are named as defendants.
Legal Theories for AI Chatbot Claims#
Product Liability#
The Garcia ruling establishes AI chatbots as “products” subject to traditional product liability theories:
Design Defect:
- Chatbot designed to form emotional bonds with users
- Lack of adequate safeguards against harmful content
- Algorithms that encourage continued engagement over user safety
- Failure to detect and respond to suicidal ideation
Manufacturing Defect:
- Specific instances where safety filters failed
- Training data that included harmful content
- Model outputs that deviated from intended behavior
Failure to Warn:
- Inadequate warnings about addiction potential
- Failure to warn about mental health risks
- Insufficient disclosure of AI limitations
- Missing parental guidance for minor users
Negligence#
Duty of Care: AI companies may owe a duty of care to users, particularly minors, to design products that don’t cause foreseeable harm.
Breach:
- Deploying AI without adequate safety testing
- Ignoring internal warnings about product risks
- Failing to implement available safeguards
- Marketing to vulnerable populations
Causation:
- Evidence that chatbot interactions directly preceded suicide
- Expert testimony on psychological manipulation
- Chat logs showing harmful encouragement
Damages:
- Wrongful death damages
- Survival action damages
- Emotional distress (family members)
- Medical expenses
- Funeral costs
Wrongful Death#
State wrongful death statutes provide causes of action for families of deceased users:
Elements:
- Defendant owed duty of care to decedent
- Defendant breached that duty
- Breach caused or contributed to death
- Surviving family members suffered damages
Potential Defendants:
- AI company (Character.AI, OpenAI)
- Individual founders and executives
- Affiliated companies and corporate partners (Google, Microsoft)
- Investors with control or knowledge
Section 230 and AI Chatbots#
Why Section 230 May Not Protect AI Companies#
Section 230 of the Communications Decency Act traditionally shields platforms from liability for third-party content. But AI chatbots present a different legal question:
Key Distinction: AI chatbots generate content rather than merely hosting or transmitting third-party content. As legal scholars note, LLMs are “information content providers” that develop content “in part,” which may exclude them from Section 230 protection.
Company Positions:
- Character.AI reportedly has not invoked Section 230 as a defense in the Garcia case
- OpenAI CEO Sam Altman testified to Congress in 2023: “I don’t think Section 230 is even the right framework” for AI
Legal Analysis: Courts have not yet definitively ruled on Section 230’s application to generative AI. However, legal experts increasingly believe AI-generated content will fall outside Section 230’s protections because:
- LLMs assemble and generate output rather than quoting or linking
- Companies make design choices about how AI responds
- AI outputs are not “third-party” content in the traditional sense
Character.AI Safety Failures#
Pre-Lawsuit Safety Gaps#
Age Verification: No built-in verification to prevent underage users; relied entirely on self-reported age at sign-up.
Content Filtering: Users—including the minor in the Garcia case—were able to edit chatbot responses to make them sexually explicit. The company later acknowledged some explicit content “was written by the user” through this editing feature.
Parental Controls: Virtually nonexistent. No ability to block specific features, limit chat partners, or monitor live activity.
Post-Lawsuit Safety Measures#
October 2024 (announced the day the lawsuit was filed):
- Blocked minors from engaging in sexual dialogues
- Prevented minor users from editing chatbot responses
- Added parental controls for time tracking
- Added notifications after hour-long chat sessions
October 2025:
- Announced ban on open-ended chats for minors
- Partnered with Persona for age verification
- Established AI Safety Lab (independent nonprofit)
November 25, 2025:
- Eliminated open-ended conversations for minors entirely
- Implemented a mandatory age assurance process
Remaining Gaps#
Despite improvements, significant concerns remain:
- Age verification still not foolproof
- Parental controls remain limited
- Historical exposure already occurred for many users
- Adult users still face unlimited AI companion access
State Jurisdiction Considerations#
Florida (Garcia Case)#
- Statute: Florida Wrongful Death Act (F.S. §§ 768.16–768.26)
- Defendants: Can include manufacturers, distributors, and individuals
- Damages: Lost support and services, mental pain and suffering, medical/funeral expenses
- Punitive: Available for gross negligence or intentional misconduct
- Note: FDUTPA claims also asserted
California (OpenAI Cases)#
- Statute: California Code of Civil Procedure § 377.60
- Pure comparative fault: Recovery reduced by decedent’s fault percentage
- Consumer protection: Strong state consumer protection laws
- Punitive: Available under Civil Code § 3294
- Tech jurisdiction: Many AI companies headquartered in California
General Considerations#
| Factor | Consideration |
|---|---|
| Where to file | State where harm occurred or defendant is located |
| Choice of law | May impact available damages and defenses |
| Statute of limitations | Typically 2-3 years for wrongful death |
| Discovery | AI companies may resist producing training data and algorithms |
Evidence Preservation#
Immediate Steps for Families#
- Preserve all devices — Don’t factory reset phones, tablets, or computers
- Screenshot conversations — Capture all chat history before accounts are deleted (the sketch after this list shows one way to document that captured files remain unaltered)
- Document usage patterns — Screen time data, subscription records, notification history
- Request data from company — CCPA/GDPR data access requests
- Preserve social media — Posts, messages, any references to AI chatbot use
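Where families are handling exported screenshots and chat logs themselves before counsel is engaged, a simple hash manifest can help show later that files were not altered after collection. The Python sketch below is illustrative only; the folder and output names are assumptions, and it is no substitute for forensic imaging by a professional.

```python
# Minimal sketch: record a SHA-256 fixity manifest for exported evidence files.
# "evidence_exports" and "evidence_manifest.csv" are hypothetical names.
import csv
import hashlib
from datetime import datetime, timezone
from pathlib import Path

EVIDENCE_DIR = Path("evidence_exports")   # hypothetical folder of screenshots/exports
MANIFEST = Path("evidence_manifest.csv")  # hypothetical output manifest

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large screen recordings don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

with MANIFEST.open("w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["file", "sha256", "size_bytes", "recorded_utc"])
    for path in sorted(EVIDENCE_DIR.rglob("*")):
        if path.is_file():
            writer.writerow([
                str(path),
                sha256_of(path),
                path.stat().st_size,
                datetime.now(timezone.utc).isoformat(),
            ])
```

Re-running the script later and comparing digests demonstrates the files are byte-for-byte unchanged; any mismatch flags a file whose handling needs explaining.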
Critical Evidence#
| Evidence Type | Why It Matters |
|---|---|
| Chat logs | Proves specific harmful content and interactions |
| Usage data | Demonstrates addiction patterns and time spent |
| Subscription records | Shows financial engagement and dependency |
| Device data | Timestamps, notification patterns, app usage |
| Social media posts | Contemporaneous statements about AI relationship |
| Witness statements | Family and friends observing behavioral changes |
Legal Hold Demands#
Send preservation letters demanding AI companies retain:
- All chat history for the specific user
- Algorithm versions active during use period
- Safety filter configurations and failures
- Internal communications about safety concerns
- Training data and model specifications
Legislative Response#
Federal Legislation#
GUARD Act (Introduced October 2025):
- Senators Josh Hawley (R-MO) and Richard Blumenthal (D-CT)
- Requires strict age verification for AI companion chatbots
- Mandates regular user reminders that companions aren’t human
- Fines up to $100,000 per violation
State Legislation#
California SB 243 (2025):
- First private right of action for AI chatbot harm
- Penalties up to $250,000
- Requires chatbot platforms to maintain protocols for responding to suicidal ideation
- Age verification requirements
- Effective January 2026
Nevada AB 406 (Effective July 2025):
- Prohibits AI from claiming to provide mental healthcare
New York S-3008C:
- Requires suicide detection protocols for companion chatbots
Related Resources#
- AI Chatbots Industry Guide — Comprehensive AI chatbot liability framework
- AI Legislation & Regulation — Federal and state AI laws
- AI Software as Product Liability — Product liability for AI systems
- Robot Injury FAQ — Common questions answered
Lost a Loved One to AI Chatbot Harm?
The landmark Garcia v. Character Technologies ruling held that AI chatbots can be treated as products, not protected speech, opening the door for families to pursue manufacturers for design defects and failure to warn. If your child or family member was harmed by Character.AI, ChatGPT, or another AI companion platform, connect with attorneys who understand this rapidly evolving area of law.
Find Legal Help