California Leads the Nation on AI Accountability#
California has enacted the most comprehensive AI liability and safety legislation in the United States. In 2025 alone, Governor Newsom signed 21 AI-related bills into law—including landmark measures that eliminate the “AI did it” defense in civil lawsuits, require transparency from frontier AI developers, and impose safety protocols on companion chatbots. These laws, effective January 1, 2026, will reshape how robot and AI injury claims are litigated nationwide.
For plaintiffs injured by autonomous systems, surgical robots, AI-powered vehicles, or any technology using artificial intelligence, California’s new legal framework removes key barriers that defendants might otherwise use to evade accountability. For the first time, companies cannot argue that their AI “autonomously” caused harm as a defense against liability.
AB 316: The “AI Did It” Defense Is Dead#
What the Law Does#
California Assembly Bill 316, signed by Governor Newsom on October 13, 2025, eliminates a theoretical but dangerous defense in AI injury cases. The law prohibits defendants who developed, modified, or used artificial intelligence from asserting that the AI “autonomously caused the harm” to escape liability.
The Core Prohibition:
“A defendant who developed, modified, or used artificial intelligence that is alleged to have caused a harm to the plaintiff shall not assert as a defense that the artificial intelligence autonomously caused the harm.”
Why This Matters: Before AB 316, a defendant could potentially argue: “I’m not liable because the AI made an independent decision—I didn’t cause the harm, the algorithm did.” This argument, while not yet successfully used in California courts, posed a significant threat to plaintiffs in emerging AI litigation. AB 316 eliminates it entirely.
Who the Law Applies To#
AB 316 applies broadly to any defendant who:
- Developed the AI system
- Modified the AI system
- Used the AI system
This covers:
- AI developers and tech companies
- Manufacturers integrating AI into products
- Businesses deploying AI in operations
- Hospitals using AI diagnostic tools
- Employers using AI for hiring or management
- Individuals using AI in ways that cause harm
Not Just Businesses: The law reaches individuals who use AI, not only the companies that develop or deploy it.
What Defenses Remain#
AB 316 does not eliminate all defenses. Defendants can still argue:
| Defense | Example Argument |
|---|---|
| Causation | “The AI didn’t cause the harm” |
| Foreseeability | “This outcome wasn’t foreseeable” |
| Comparative fault | “The plaintiff contributed to the harm” |
| Reasonable care | “We exercised ordinary care in AI development/use” |
| No design defect | “The design was not defective” (traditional product liability defenses) |
| Adequate warnings | “We provided sufficient warnings” |
What’s Eliminated: Only the specific defense that an AI’s autonomous nature absolves humans of responsibility.
How This Affects Robot and AI Injury Cases#
Surgical Robot Injuries: A hospital using a da Vinci system that malfunctions cannot argue “the robot acted autonomously.” The hospital and Intuitive Surgical remain subject to liability under existing product liability and malpractice theories.
Autonomous Vehicle Crashes: Tesla cannot argue that Autopilot or Full Self-Driving “autonomously decided” to crash. The company remains exposed to claims for design defects, failure to warn, and negligent marketing.
AI Hiring Discrimination: An employer using AI screening tools cannot claim the algorithm “independently” discriminated. The employer remains accountable under existing anti-discrimination laws.
Warehouse Robot Injuries: Amazon cannot argue that a Proteus or Hercules robot “autonomously” struck a worker. Traditional product liability and premises liability apply without the AI autonomy defense.
Hypothetical: Surgical Robot Burns
Before AB 316, a surgical robot manufacturer could theoretically argue: “Our robot uses AI to make real-time adjustments during surgery. When it caused thermal burns, that was an autonomous AI decision—not our design.” AB 316 eliminates this defense. The manufacturer remains subject to liability under product liability law.
Hypothetical: Autonomous Delivery Robot
A sidewalk delivery robot strikes an elderly pedestrian. Before AB 316, the operator could argue: “The robot’s AI navigated autonomously—we didn’t control that decision.” AB 316 eliminates this defense. The operator, manufacturer, and deployer remain accountable.
SB 53: Transparency in Frontier AI Act#
What the Law Requires#
California’s SB 53—the Transparency in Frontier Artificial Intelligence Act—is the first law in the United States to regulate “frontier” AI models. Signed September 29, 2025, it imposes four core requirements on the largest AI developers:
- Safety Protocols — Draft and implement protocols to manage and mitigate catastrophic risk
- Transparency Reports — Publish detailed disclosures about frontier models
- Incident Reporting — Report critical safety incidents to California regulators
- Whistleblower Protections — Protect employees who report safety concerns
Who Must Comply#
SB 53 applies to “large frontier developers”—entities that:
- Have trained a “frontier model”
- Have annual gross revenue exceeding $500 million
Frontier Model Definition: A foundation model trained using a quantity of computing power greater than 10²⁶ integer or floating-point operations (FLOPs).
Companies Likely Affected:
- OpenAI
- Anthropic
- Google/Alphabet
- Meta
- Microsoft
- Amazon (AWS)
- Other large AI developers
Enforcement and Penalties#
| Provision | Detail |
|---|---|
| Civil penalty | Up to $1,000,000 per violation |
| Enforcement authority | California Attorney General |
| Effective date | January 1, 2026 |
What This Means for Injury Claims#
Evidence for Plaintiffs: SB 53’s transparency requirements create discoverable documentation. If a frontier AI model causes injury, plaintiffs can seek:
- Published transparency reports
- Safety protocols (or evidence they were inadequate)
- Incident reports filed with regulators
- Internal safety communications (protected whistleblower statements)
Establishing Knowledge: If a company’s transparency report acknowledged known risks that later caused injury, this establishes knowledge for failure-to-warn claims.
Negligence Per Se: Violation of SB 53’s safety protocol requirements may constitute negligence per se—a presumption of negligence that applies when the violation causes the kind of harm the statute was designed to prevent and that harm injures the plaintiff.
SB 243: Companion Chatbot Safety#
Requirements for AI Companions#
California’s SB 243, signed October 13, 2025, makes California the first state to require specific safety protocols for “companion chatbots”—AI systems designed for emotional or social interaction.
Key Requirements:
Prohibited Conversations — Platforms must implement protocols preventing chatbots from engaging in discussions about:
- Suicidal ideation
- Self-harm
- Sexually explicit content with minors
Recurring Alerts — Users must receive reminders that they’re interacting with AI:
- Every 3 hours for minor users
- Periodic reminders for adult users
Parental Controls — Enhanced tools for parents to monitor minors’ chatbot interactions
Connection to Existing Litigation#
SB 243 directly responds to tragedies like those alleged in the Character.AI and OpenAI lawsuits—cases claiming AI chatbots encouraged suicidal behavior in minors. For injury claims, the law matters in two ways:
Statutory Duties: Companion chatbot operators now have specific statutory duties. Breach of these duties can support:
- Negligence claims
- Negligence per se
- Product liability (failure to include required safety features)
Evidence of Industry Standard: The law establishes a baseline standard of care. Companies not implementing required protocols fall below industry standards.
Other Key California AI Laws (2025)#
Healthcare AI Restrictions#
AB 489 — Prohibits AI developers and deployers from:
- Using terms falsely indicating possession of a healthcare license
- Implying that AI advice comes from a licensed healthcare professional
Impact: Healthcare AI systems cannot present themselves as “doctors” or create the impression that a licensed professional is providing care when the advice actually comes from AI.
Deepfake Liability Expansion#
AB 621 — Extends existing deepfake liability to include:
- Persons providing services enabling deepfake pornography operations
- Platform operators facilitating nonconsensual intimate imagery
AI Transparency in Content#
AB 853 — Updates California’s 2024 AI transparency law by:
- Requiring generative AI developers to include provenance data in AI-generated content
- Extending the compliance deadline to August 2, 2026
Algorithmic Price Fixing#
AB 325 (Preventing Algorithmic Price Fixing Act) — Prohibits:
- Pricing algorithms using data from multiple businesses in similar markets
- AI-driven price coordination that would violate antitrust laws
California’s Existing Product Liability Framework#
How Traditional Law Applies to AI#
California’s robust product liability framework applies fully to AI and robotic systems. AB 316 and related laws enhance—not replace—these existing protections.
Strict Liability for Defects:
| Defect Type | Application to AI |
|---|---|
| Manufacturing Defect | Specific robot/AI unit differs from intended design |
| Design Defect | AI system inherently dangerous as designed |
| Failure to Warn | Inadequate disclosure of AI limitations and risks |
California’s Consumer Expectations Test: Unlike states using only risk-utility analysis, California allows plaintiffs to prove design defect by showing the product “failed to perform as safely as an ordinary consumer would expect.” This is powerful for AI claims where consumers expect systems to work as marketed.
Pure Comparative Fault: California follows pure comparative negligence—a plaintiff’s recovery is reduced by their percentage of fault but is never barred entirely. Even a plaintiff who is 90% at fault can recover 10% of the damages.
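A worked illustration with hypothetical figures—suppose a jury awards $1,000,000 in total damages and assigns the plaintiff 30% of the fault:

$$\text{recovery} = \text{total damages} \times (1 - \text{plaintiff's fault share}) = \$1{,}000{,}000 \times (1 - 0.30) = \$700{,}000$$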
Statutes of Limitations#
| Claim Type | Deadline |
|---|---|
| Personal Injury | 2 years from injury |
| Wrongful Death | 2 years from death |
| Product Liability | 2 years from injury |
| Property Damage | 3 years from damage |
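A quick illustration with a hypothetical date—assuming no tolling rule extends the deadline, a personal injury claim must be filed within two years of the injury:

$$\text{injury on June 1, 2026} \;\Rightarrow\; \text{filing deadline of June 1, 2028}$$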
Practical Implications for Injury Victims#
Strengthened Claims Under New Laws#
Before AB 316: Defendants could theoretically argue AI autonomy as a defense. Plaintiffs faced uncertainty about how courts would treat this novel argument.
After AB 316: The defense is eliminated by statute. Courts must reject any “AI did it” argument.
Evidence Preservation: New transparency requirements under SB 53 create documentation that plaintiffs can discover:
- Safety protocols (was the company following them?)
- Incident reports (were similar incidents known?)
- Whistleblower communications (did employees warn of problems?)
What to Do If Injured by AI in California#
- Document the AI System — Model, manufacturer, version if known
- Preserve Evidence — Photos, videos, the device if possible
- Seek Medical Attention — Document injuries thoroughly
- Report the Incident — To employer (workplace), manufacturer, relevant agencies
- Consult an Attorney — Before speaking with defendant’s representatives
- Act Within 2 Years — California’s statute of limitations for most injury claims (see the deadlines above)
California-Specific Advantages#
For Plaintiffs:
- No caps on compensatory damages (except medical malpractice)
- Consumer expectations test for design defects
- Pure comparative fault (recover even if partially at fault)
- Strong product liability precedents
- New AI-specific statutory protections
Evidence Available:
- SB 53 transparency reports (for frontier AI)
- Required safety protocols and incident reports
- DMV autonomous vehicle collision reports
- Cal/OSHA workplace investigation records
Related Resources#
- AI Legislation & Regulation — Federal and state AI law tracker
- Da Vinci Robotic Surgery Injuries — Surgical robot complications
- Autonomous Vehicle Accident Claims — Self-driving car liability
- AI Facial Recognition Wrongful Arrests — AI bias in policing
- Understanding Liability — Product liability and negligence frameworks
Injured by AI or Robotics in California?
California’s new AI liability laws—including AB 316 eliminating the “AI did it” defense—strengthen your claims against AI developers, manufacturers, and deployers. Connect with attorneys experienced in California product liability and emerging AI litigation to understand your options.
Get Free Consultation