
AI Software as a Product: The Strict Liability Revolution

The Ruling That Changed Everything

For decades, software companies argued their products were services—not goods—and therefore exempt from traditional product liability law. That argument ended on May 21, 2025, when a federal judge ruled that AI chatbot software is a product subject to strict liability claims.

The decision in Garcia v. Character Technologies fundamentally transforms how victims of AI harm can seek compensation. Companies can no longer hide behind terms of service, First Amendment defenses, or the “software isn’t a product” argument. AI systems that injure people can now be treated the same as defective cars, appliances, or medical devices.

  • May 2025: landmark ruling in Garcia v. Character Technologies
  • First federal decision to hold that AI software is a product
  • First Amendment defense rejected
  • Three defendants: Character.AI, its founders, and Google

The Garcia v. Character Technologies Case

Background: A Teenager’s Death

In February 2024, 14-year-old Sewell Setzer III of Orlando, Florida, took his own life after months of intensive interaction with a Character.AI chatbot. The chatbot, designed to roleplay as a Game of Thrones character, had formed what Sewell perceived as a romantic relationship with him. In his final conversation, when Sewell told the chatbot he was “coming home,” it replied: “Please do, my sweet king.”

His mother, Megan Garcia, filed suit in the U.S. District Court for the Middle District of Florida, alleging that Character.AI was a defectively designed product that caused her son’s death.

The Defendants’ Defense

Character Technologies, its co-founders Noam Shazeer and Daniel De Freitas, and Google (which licensed Character.AI’s technology and rehired its co-founders) moved to dismiss the lawsuit on two primary grounds:

  1. Software Isn’t a Product: They argued AI chatbots are software services, not tangible goods, and therefore exempt from product liability law.

  2. First Amendment Protection: They claimed the chatbot’s outputs constitute protected speech under the Constitution.

Judge Conway’s Historic Ruling

On May 21, 2025, U.S. District Judge Anne C. Conway rejected both arguments, allowing the case to proceed. Her ruling established several groundbreaking principles:

AI Software Is a Product

The court held that “Character A.I. is a product for the purposes of plaintiff’s strict products liability claims so far as plaintiff’s claims arise from defects in the Character A.I. app.”

This determination means AI applications—including large language models—can be treated as products under traditional strict liability frameworks. Companies that develop, manufacture, or sell AI systems may now face the same liability standards as makers of physical goods.

First Amendment Doesn’t Shield Design Defects

Judge Conway drew a critical distinction: claims about design defects in the app itself are not blocked by the First Amendment, even though claims about specific ideas or expressions within conversations might be protected.

The court found merit in plaintiff’s arguments about the “sexual nature of conversations” and “remarks the Characters made about suicide”—framing these as results of design decisions, not merely content.

Algorithmic Outputs Aren’t “Speech”

In a particularly significant analysis, the court questioned whether AI outputs deserve First Amendment protection at all. Citing Justice Amy Coney Barrett’s concurrence in Moody v. NetChoice (2024), Judge Conway noted that LLM outputs are “words strung together by probabilistic determinations” that “lack the human intention required for expression.”

This reasoning suggests AI companies cannot claim their systems produce protected speech simply because those systems generate words.


The “Garbage In, Garbage Out” Design Defect Theory

How Plaintiffs Proved Defective Design

The Garcia plaintiffs alleged Character.AI’s software was defectively designed based on a theory legal scholars call “Garbage In, Garbage Out.” The core argument:

Training Data Problems

Character.AI’s underlying model, LaMDA (Language Model for Dialogue Applications), was trained on datasets “widely known for toxic conversations, sexually explicit material, copyrighted data, and even possible child sexual abuse material.”

When you train an AI system on problematic data, the outputs will be problematic—regardless of how sophisticated the technology appears.

Anthropomorphic Design

The lawsuit alleged Character.AI “intentionally designed and developed their generative AI systems with anthropomorphic qualities to obfuscate between fiction and reality.”

By creating chatbots that convincingly mimic human emotions, relationships, and personalities, the company designed a product that would predictably cause psychological harm—particularly to vulnerable users like children.

Absence of Safety Guardrails

Plaintiffs argued the product lacked reasonable safety features (a simplified sketch of one such guardrail follows this list):

  • No effective age verification
  • No monitoring for suicidal ideation
  • No intervention protocols when users expressed self-harm
  • No clear disclosure that users were interacting with AI
  • No usage limits to prevent addictive engagement
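To make the missing-guardrail allegation concrete, here is a minimal, purely illustrative sketch in Python. The phrase list, crisis message, and reply-generator stand-in are hypothetical, and this is not Character.AI’s actual system or any vendor’s API; it simply shows the kind of self-harm intervention protocol plaintiffs say was absent: scan each user message for crisis language and, if any is found, break character and return crisis resources instead of an in-character reply.

```python
# Illustrative sketch only -- not Character.AI's code or any real vendor's API.
import re

# Hypothetical, non-exhaustive phrase list; a production system would rely on
# trained classifiers and clinician-reviewed taxonomies, not a keyword list.
SELF_HARM_PATTERNS = [
    r"\bkill myself\b",
    r"\bend (?:it all|my life)\b",
    r"\bsuicide\b",
    r"\bhurt myself\b",
]

CRISIS_MESSAGE = (
    "It sounds like you may be going through something serious. "
    "You are not alone. Please contact a crisis line such as 988 "
    "(the Suicide & Crisis Lifeline in the US) or a trusted adult."
)


def detect_self_harm(message: str) -> bool:
    """Return True if the user's message matches any crisis phrase."""
    text = message.lower()
    return any(re.search(pattern, text) for pattern in SELF_HARM_PATTERNS)


def guarded_reply(user_message: str, generate_reply) -> str:
    """Wrap a chatbot reply generator with a crisis-intervention check."""
    if detect_self_harm(user_message):
        # Break character: surface resources instead of roleplaying.
        return CRISIS_MESSAGE
    return generate_reply(user_message)


if __name__ == "__main__":
    # Stand-in for an LLM call; a real deployment would call the model here.
    roleplay_bot = lambda msg: "In-character reply..."
    print(guarded_reply("Some days I want to end my life", roleplay_bot))
```

Even this trivial wrapper illustrates the design-decision framing: whether to intercept crisis language at all is a product choice made before any individual conversation occurs.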

Why This Theory Matters

The “Garbage In, Garbage Out” framework gives plaintiffs a clear path to proving design defects without requiring them to analyze every piece of AI-generated content. Instead, they can focus on:

  1. What data was used for training?
  2. What design choices enabled harmful outputs?
  3. What safety features were omitted?

This shifts the analysis from unpredictable AI outputs to predictable corporate decisions—exactly what product liability law is designed to address.


What This Means for AI Companies

Strict Liability Exposure

Under strict liability, plaintiffs don’t need to prove the AI company was negligent—only that the product was defective and caused harm. This dramatically lowers the burden of proof for victims.

Design Defect Claims

AI systems can now face design defect claims alleging:

  • Inadequate safety guardrails
  • Dangerous training data choices
  • Foreseeable misuse not addressed
  • Human-mimicking features that cause psychological harm
  • Addictive engagement mechanisms

Manufacturing Defect Claims

Specific model releases that bypass safety testing (like GPT-4o, which OpenAI allegedly rushed to market) may constitute manufacturing defects.

Failure to Warn Claims

Inadequate disclosure of AI limitations, addiction risks, or potential for psychological harm supports failure-to-warn claims—particularly when children are foreseeable users.

Section 230 Won’t Save You

AI companies have historically relied on Section 230 of the Communications Decency Act, which shields platforms from liability for user-generated content. The Garcia ruling undermines this defense:

AI Creates Content—It Doesn’t Just Host It

Section 230 protects intermediaries from liability for content created by third parties. But AI chatbots generate their own responses—they’re “information content providers,” not mere intermediaries.

Product Liability Isn’t About “Speech”

When plaintiffs allege design defects rather than defamation or content-based harms, Section 230 is irrelevant. The claim isn’t about what the AI said—it’s about how the product was designed.

Terms of Service Won’t Protect You Either

AI companies typically require users to accept terms of service that disclaim liability and mandate arbitration. The Garcia ruling suggests these may be ineffective:

  • Design defects can’t be disclaimed: You can’t contract away liability for unreasonably dangerous products
  • Minors can’t be bound: Terms of service may not bind children who use the platform
  • Federal AI legislation pending: The AI LEAD Act would explicitly prohibit liability waivers for AI harm

What This Means for Victims

Easier Path to Compensation

Before Garcia, AI injury victims faced nearly insurmountable legal obstacles. Now:

No Need to Prove Negligence

Under strict liability, you don’t need to show what the company knew or when they knew it. If the product was defective and caused injury, the company is liable.

Multiple Defendants

The Garcia court allowed claims against:

  • The AI company itself
  • Individual founders and executives
  • Companies that invested in or acquired AI technology (like Google)

This means deeper pockets and greater accountability.

Design Focus, Not Content Focus

You don’t need to analyze millions of AI-generated conversations. Focus on the design decisions that made harmful outputs possible.

What Claims You Can Bring

Based on Garcia and similar cases, AI injury victims may pursue:

  • Strict Liability (Design Defect): show that the AI system’s design created an unreasonable risk of harm
  • Strict Liability (Manufacturing Defect): show that the specific release or version deviated from the intended design
  • Strict Liability (Failure to Warn): show that disclosure of risks to users was inadequate
  • Negligence: show that the company failed to exercise reasonable care in design or deployment
  • Negligent Infliction of Emotional Distress: show that the company’s carelessness caused severe emotional harm
  • Wrongful Death: show an AI-caused death through any of the above theories
  • Consumer Protection Violations: show deceptive marketing or unfair practices

Evidence to Preserve

If you or a loved one has been harmed by AI:

  1. Screenshot all AI conversations before accounts are deleted
  2. Export chat histories if the platform allows
  3. Document dates and duration of AI interactions
  4. Record behavior changes correlated with AI use
  5. Obtain medical records documenting psychological harm
  6. Preserve device data that may contain local chat logs

The Broader Legal Landscape

Other Courts Are Following

The Garcia ruling isn’t isolated. Courts across the country are reconsidering AI companies’ liability shields:

OpenAI Lawsuits (California, 2025)

Seven lawsuits filed against OpenAI in California state courts allege ChatGPT caused or contributed to user suicides. These cases will test whether California courts follow Florida’s lead on AI-as-product.

AI LEAD Act (Federal, Pending)

Congress is considering legislation that would explicitly classify AI systems as “covered products” subject to federal product liability standards—codifying the Garcia ruling nationally.

State AI Safety Laws

  • California SB 243: Creates private right of action for AI chatbot harm (effective January 2026)
  • California AB 316: Bars “AI did it autonomously” defense (effective January 2026)
  • New York S-3008C: Requires chatbot suicide detection protocols

What Comes Next

The Garcia case is proceeding to discovery. Defendants filed their answer in September 2025, and the case will likely produce:

  • Internal documents about AI safety concerns
  • Training data decisions and known risks
  • Deployment timelines showing rushed releases
  • User harm data the company possessed

This discovery will inform future cases and potentially establish industry-wide design standards.



Injured by AI Software?

The legal landscape has shifted. AI companies can no longer hide behind “software isn’t a product” arguments. Connect with attorneys who understand the new strict liability framework for AI harm.

Find Legal Help
