
AI Healthcare Claim Denial Litigation: Complete Legal Guide


AI Claim Denial Litigation: The Cases Reshaping Healthcare

Health insurers are facing an unprecedented wave of litigation over their use of artificial intelligence to deny healthcare claims. Class action lawsuits against UnitedHealth, Cigna, and Humana allege these companies deployed AI algorithms to systematically reject medically necessary care—overriding physician recommendations with automated systems that have documented error rates exceeding 90%.

The stakes are enormous. These cases will determine whether insurance companies can use AI as a weapon against their own policyholders, and whether the legal system can hold them accountable when algorithms replace human judgment in life-or-death healthcare decisions.

| Figure | What It Means |
| --- | --- |
| 90%+ | Reversal rate of nH Predict denials on appeal |
| Aug 2025 | Humana fraud ruling: class action proceeds |
| Feb 2025 | UnitedHealth ruling: exhaustion requirement waived |
| 19 states | Considering laws following California's SB 1120 |

The Three Major Class Actions

Estate of Lokken v. UnitedHealth Group

The landmark case establishing AI claim denial accountability.

Case snapshot: AI Medicare Advantage denials. Class action in the District of Minnesota, 2023-present, proceeding after the February 2025 ruling.

Families sue over the nH Predict algorithm and its alleged 90% error rate. Gene Lokken, 91, paid $150,000 for care after UnitedHealth denied coverage despite his physicians' recommendations. The court waived the exhaustion requirement and allowed breach of contract and good faith claims to proceed. Discovery deadline: December 22, 2025.

Case Details:

  • Court: U.S. District Court, District of Minnesota
  • Case No.: 0:23-cv-03514-JRT-SGE
  • Judge: John R. Tunheim
  • Filed: November 14, 2023
  • Discovery Deadline: December 22, 2025

February 2025 Ruling:

Judge Tunheim issued a landmark decision allowing the case to proceed:

| Ruling | Significance |
| --- | --- |
| Exhaustion waived | Plaintiffs need not exhaust Medicare appeals first |
| Contract claims survive | Breach of contract theory moves forward |
| Good faith claims survive | Implied covenant claims proceed |
| Futility finding | Court found administrative appeals would be futile |
| Irreparable harm | Court found plaintiffs suffered irreparable injury |

September 2025 Update:

The court rejected UnitedHealth’s attempt to narrow discovery scope, allowing plaintiffs broad access to internal documents about how nH Predict operates, how denial decisions are made, and how appeals are processed.

The Gene Lokken Story:

In May 2022, Gene Lokken, 91, fractured his leg and ankle. After a month in a nursing home waiting for his injuries to heal, his doctor approved physical therapy. UnitedHealth/NaviHealth paid for only 19 days of therapy before declaring Lokken safe to go home—despite his doctors and therapists appealing the denials, noting his muscles were “paralyzed and weak.”

To keep receiving care, Lokken’s family paid approximately $150,000 over the next year. He died in July 2023.

Why the Exhaustion Waiver Matters

The court’s decision to waive the Medicare Act’s exhaustion requirement is a breakthrough for plaintiffs. Previously, patients had to complete Medicare’s lengthy administrative appeals before suing—a process designed to delay and discourage litigation. Judge Tunheim found this requirement “futile” given UnitedHealth’s systematic denial practices, opening the courthouse doors for thousands of affected patients.

Barrows v. Humana Inc.

The fraud theory case that could change everything.

Case snapshot: AI Medicare Advantage fraud. Class action in the Western District of Kentucky, 2023-present, proceeding after the August 2025 ruling.

Medicare Advantage beneficiaries allege Humana committed fraud by using the nH Predict AI to deny post-acute care while claiming human clinical review. In August 2025, the court ruled the claims may proceed without exhaustion; the central question is whether Humana "knowingly violated contractual obligations" while retaining premiums.

Case Details:

  • Court: U.S. District Court, Western District of Kentucky
  • Case No.: 3:23-cv-00654
  • Judge: Rebecca Grady Jennings
  • Status: Proceeding after August 2025 ruling

August 2025 Ruling:

Judge Jennings delivered a pivotal decision, framing the case around fraud—not merely coverage disputes:

“The central question isn’t whether Humana violated the Medicare Act in denying benefits, but whether the company knowingly violated its contractual obligations to beneficiaries while retaining premium proceeds.”

Key Legal Theory:

The plaintiffs argue they would have chosen different insurers had they known Humana was delegating legally required medical reviews to artificial intelligence. This fraud theory opens the door to:

  • Actual damages
  • Statutory damages
  • Punitive damages
  • Emotional distress compensation
  • Restitution
  • Injunctive relief prohibiting AI-enabled claim handling

Allegations Against Humana:

| Allegation | Evidence Cited |
| --- | --- |
| 14-day cutoff | Patients rarely stay more than 14 days before denials begin |
| 100-day entitlement ignored | Medicare allows up to 100 days of post-acute care |
| 0.2% appeal rate exploited | Humana knows few patients appeal |
| Fraudulent misrepresentation | Promised individual assessment, delivered AI automation |

Humana’s Defense:

Humana claims it uses “augmented intelligence” with “human in the loop” decision-making, and that coverage decisions are based on patient needs, physician judgment, and CMS guidelines.


Kisting-Leung v. Cigna Corp.

The batch-denial case exposing industrial-scale claim rejection.

Case snapshot: AI PXDX algorithm denials. Class action in the Eastern District of California, 2023-present, proceeding after the March 2025 ruling.

Plaintiffs allege Cigna's PXDX algorithm auto-denied more than 300,000 claims in two months. Doctors allegedly spent an average of 1.2 seconds per claim, signing off on batch denials of 50 at once. The court found that delegating decisions to the algorithm may violate plan terms and fiduciary duties.

Case Details:

  • Court: U.S. District Court, Eastern District of California
  • Judge: Dale Drozd
  • Status: Proceeding after March 2025 ruling

The PXDX System:

Cigna’s PXDX algorithm matches diagnosis codes to pre-approved procedures. When claims don’t match the algorithm’s expectations, they’re automatically flagged for denial.

| Allegation | Scale |
| --- | --- |
| Claims auto-denied | 300,000+ in two months (2022) |
| Review time per claim | 1.2 seconds on average |
| Batch denials | Doctors signed off on 50 claims at once |
| Reversal rate | 80%+ overturned on appeal |

March 2025 Ruling:

Judge Drozd allowed the class action to proceed, finding:

  • Plaintiffs adequately alleged Cigna violated plan terms
  • Delegating decisions to automated algorithm may breach fiduciary duties
  • Cigna’s interpretation of its discretionary authority may constitute an “abuse of discretion”

Cigna’s Defense:

Cigna claims PXDX “does not use AI” and is “simple sorting technology” that matches codes, similar to CMS systems. The insurer says it’s only used for “low-cost tests and procedures.”


How AI Denial Systems Work

The nH Predict Algorithm

Developed by NaviHealth (owned by UnitedHealth’s Optum subsidiary), nH Predict is the most litigated healthcare AI system.

How It Works:

  1. Patient admitted to post-acute care (skilled nursing, rehab, home health)
  2. nH Predict compares patient to database of “similar” patients
  3. Algorithm predicts “expected” length of stay
  4. Insurer denies coverage when actual stay exceeds prediction
  5. Denial occurs regardless of treating physician’s recommendation
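
To make the decision flow concrete, here is a minimal Python sketch of that logic as the complaints describe it. This is not NaviHealth's actual code; the median-based prediction, the class and function names, and the sample data are all assumptions for illustration.

```python
# Hypothetical reconstruction of the denial logic alleged in the complaints.
# NOT NaviHealth's actual code: prediction method, names, and data are invented.
from dataclasses import dataclass

@dataclass
class PatientStay:
    diagnosis: str
    days_in_care: int                 # actual days in post-acute care so far
    physician_recommended_days: int   # treating physician's recommendation

def predict_expected_stay(similar_patient_stays: list[int]) -> int:
    """Predict length of stay from a database of 'similar' patients (median here)."""
    ordered = sorted(similar_patient_stays)
    return ordered[len(ordered) // 2]

def coverage_decision(patient: PatientStay, similar_patient_stays: list[int]) -> str:
    expected = predict_expected_stay(similar_patient_stays)
    # Per the allegations, the physician's recommendation plays no role:
    # coverage ends once the actual stay exceeds the algorithm's prediction.
    if patient.days_in_care > expected:
        return "DENY continued coverage"
    return "APPROVE continued coverage"

patient = PatientStay("leg and ankle fracture", days_in_care=20,
                      physician_recommended_days=40)
print(coverage_decision(patient, similar_patient_stays=[12, 15, 19, 21, 30]))
# -> DENY continued coverage: 20 days exceeds the predicted 19,
#    even though the physician recommended 40.
```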

Why It’s Controversial:

| Issue | Evidence |
| --- | --- |
| Overrides physician judgment | Doctors’ orders routinely overridden |
| Rigid criteria | No individual patient consideration |
| Employee pressure | Workers disciplined or terminated for deviating |
| Known inaccuracy | 90%+ reversal rate on appeal |
| Profit motive | Only 0.2% of patients appeal |

The PXDX Algorithm (Cigna)

PXDX is Cigna’s diagnosis-procedure matching system.

How It Works:

  1. Claim submitted with diagnosis and procedure codes
  2. PXDX matches codes against pre-approved combinations
  3. Non-matching claims flagged for denial
  4. Doctors batch-approve denials (allegedly 1.2 seconds per claim)
  5. Patient receives denial letter
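
A minimal sketch of the code-matching and batch sign-off steps as alleged, assuming PXDX reduces to a lookup of pre-approved (diagnosis, procedure) pairs. This is not Cigna's actual system; the code pairs, claim records, and function names are invented for illustration.

```python
# Hypothetical sketch of diagnosis-procedure matching and batch sign-off as
# alleged in the complaint. NOT Cigna's actual system; all data are invented.
PRE_APPROVED: set[tuple[str, str]] = {
    ("E11.9", "82947"),  # type 2 diabetes -> blood glucose test
    ("I10", "93000"),    # hypertension    -> electrocardiogram
}

def flag_for_denial(claims: list[tuple[str, str, str]]) -> list[str]:
    """Return IDs of claims whose (diagnosis, procedure) pair is not pre-approved."""
    return [claim_id for claim_id, diagnosis, procedure in claims
            if (diagnosis, procedure) not in PRE_APPROVED]

def batch_sign_off(flagged: list[str], batch_size: int = 50) -> None:
    # Per the allegations, a physician approves flagged claims in batches of 50,
    # averaging 1.2 seconds per claim, with no individual chart review.
    for i in range(0, len(flagged), batch_size):
        print(f"Signed off on batch of {len(flagged[i:i + batch_size])} denials")

claims = [
    ("CLM-1", "E11.9", "82947"),  # matches a pre-approved pair -> paid
    ("CLM-2", "E11.9", "93000"),  # no match -> flagged for denial
]
batch_sign_off(flag_for_denial(claims))
```

Note that nothing in this flow requires machine learning, which is consistent with Cigna's "simple sorting technology" defense: the harm alleged comes from the absence of individual review, not from model complexity.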

Why It’s Controversial:

  • No individual review — Claims denied based solely on code matching
  • Batch processing — Doctors sign off on 50 denials at once
  • Speed over accuracy — 1.2-second average review time
  • High error rate — 80%+ reversed on appeal

The Business Model of Wrongful Denials

Insurance companies know their AI systems are wrong most of the time—but they also know only 0.2% of patients appeal. Even with 90% reversal rates, insurers save money on every unchallenged denial. The Senate investigation found UnitedHealth’s denial rate jumped from 10.9% to 22.7% after implementing nH Predict—a deliberate strategy to profit from patient exhaustion and confusion.
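
The incentive is easy to verify with back-of-envelope arithmetic. This sketch uses the appeal and reversal rates reported in this article; the denial volume and average claim value are illustrative assumptions.

```python
# Back-of-envelope arithmetic for the incentive described above. Appeal and
# reversal rates are from the article; volume and dollar value are invented.
denials = 100_000          # hypothetical wrongful denials
avg_claim_value = 10_000   # hypothetical dollars per denied claim
appeal_rate = 0.002        # only 0.2% of patients appeal
reversal_rate = 0.90       # 90%+ of appeals succeed

paid_back = denials * appeal_rate * reversal_rate * avg_claim_value
retained = denials * avg_claim_value - paid_back
print(f"Paid back after appeals: ${paid_back:,.0f}")   # $1,800,000
print(f"Retained: ${retained:,.0f}")                   # $998,200,000
```

Even with a 90% reversal rate, a 0.2% appeal rate means the insurer repays only 0.18% of the denied dollars and keeps the rest.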

Legal Theories in AI Denial Litigation

ERISA Claims (Employer-Sponsored Plans)

Most employer-sponsored health plans are governed by ERISA, which provides specific avenues for relief:

Breach of Fiduciary Duty:

  • Plan administrators must act in beneficiaries’ interest
  • Delegating to AI that maximizes denials may breach this duty
  • Using AI known to be 90% inaccurate may violate fiduciary standards

Failure to Follow Plan Terms:

  • If the plan says “clinical staff” make decisions, delegating them to AI may violate those terms
  • Breach of contract for not following policy language
  • Courts have found this theory can survive dismissal

ERISA Limitations:

  • Damages generally limited to denied benefits
  • No punitive damages under ERISA
  • But attorney’s fees available if plaintiff wins

State Law Claims (Individual/Marketplace Plans)

Non-ERISA plans allow broader legal theories:

Bad Faith Insurance:

  • Unreasonable denial of valid claims
  • Using AI known to be wrong 90% of the time
  • Failure to investigate claims properly
  • Punitive damages potentially available

Fraud:

  • Misrepresenting that humans review claims
  • Retaining premiums while denying coverage
  • Concealing AI’s role in decision-making
  • Damages include emotional distress, punitive damages

Consumer Protection:

  • State unfair/deceptive practices acts
  • California requires “thorough, fair and objective” investigation
  • Batch AI denials violate these standards

Breach of Contract:

  • Policy promises coverage for medically necessary care
  • AI denials without individual review breach terms
  • Good faith and fair dealing violations

Medicare Advantage Specific Claims

Administrative Exhaustion:

  • Traditionally required before filing a federal lawsuit
  • Courts now waive it where appeals would be “futile” or cause “irreparable harm”
  • The Lokken and Barrows rulings establish this precedent

CMS Regulatory Violations:

  • February 2024 CMS guidance: AI cannot solely dictate coverage
  • Insurers must make individualized determinations
  • Violations strengthen plaintiff claims

Evidence and Discovery in AI Denial Cases

Critical Evidence Categories

| Evidence Type | What It Shows | How to Obtain |
| --- | --- | --- |
| Algorithm design docs | How the AI makes decisions | Discovery demand |
| Training data | What biases exist | Expert analysis |
| Denial rate statistics | Systematic patterns | CMS data, discovery |
| Appeal reversal data | AI error rates | Internal records |
| Employee communications | Pressure to follow AI | Email discovery |
| Termination records | Retaliation for deviation | Personnel files |
| Audit logs | Individual claim handling | System records |

September 2025 Discovery Ruling

The UnitedHealth court’s rejection of discovery limitations is a major plaintiff victory:

  • Broad scope preserved — Plaintiffs can investigate how nH Predict operates
  • Internal documents accessible — Emails, memos, training materials
  • Appeal processing exposed — How insurers handle challenges
  • Pattern evidence — Systematic denial practices revealed

Key Discovery Requests

Effective AI denial litigation requires demanding:

  1. Algorithm documentation — Code, logic rules, training data
  2. Validation studies — Internal accuracy assessments
  3. Override procedures — How clinicians can deviate from AI
  4. Performance metrics — Denial rates, appeal rates, reversal rates
  5. Employee policies — Pressure to follow AI recommendations
  6. Communication records — Emails discussing AI accuracy/concerns

Data Preservation Is Critical

AI systems generate extensive logs that insurers may delete. If you’re considering litigation, send a preservation demand immediately requiring the insurer to retain all data related to your claim processing, including algorithm inputs, outputs, and any human review records.

How AI Denial Cases Differ from Ordinary Claims

Systemic vs. Individual Harm

Traditional claim denial cases involve individual coverage disputes. AI denial cases demonstrate systemic patterns affecting millions:

| Factor | Traditional Denial | AI Denial |
| --- | --- | --- |
| Scale | Individual claim | Millions of claims |
| Review | Human consideration | Automated batch processing |
| Appeal rate | Varies | Exploits 0.2% appeal rate |
| Error rate | Case-specific | 80-90% systematic |
| Intent | Possible mistake | Deliberate profit strategy |

Speed and Volume

AI enables denials at unprecedented scale:

  • 300,000+ claims denied in two months (Cigna)
  • 1.2 seconds average review time
  • Batch processing of 50 denials at once
  • No meaningful human review before denial

Disparate Impact Potential

AI denial systems may disproportionately affect:

  • Elderly patients — More complex conditions, longer recovery needs
  • Patients with disabilities — Care needs that don’t fit algorithmic patterns
  • Racial minorities — Training data may encode historical disparities
  • Low-income patients — Less likely to appeal, more vulnerable to denials
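
In discovery, disparities like these can be tested directly from denial records. A rough sketch: the data below are invented, and the four-fifths threshold is borrowed from employment-law practice as a rule of thumb, not a healthcare-specific legal standard.

```python
# Rough disparate-impact check on denial records -- invented data; the
# four-fifths threshold is an employment-law rule of thumb, used here
# only as an illustrative flag, not a healthcare legal standard.
from collections import Counter

records = [("elderly", True), ("elderly", True), ("elderly", True),
           ("non-elderly", True), ("non-elderly", False), ("non-elderly", False)]

totals, denied = Counter(), Counter()
for group, was_denied in records:
    totals[group] += 1
    denied[group] += was_denied  # bool counts as 0 or 1

rates = {group: denied[group] / totals[group] for group in totals}
ratio = min(rates.values()) / max(rates.values())
print(rates)                        # {'elderly': 1.0, 'non-elderly': 0.33...}
print(f"rate ratio = {ratio:.2f}")  # far below 0.8 -> worth investigating
```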

Corporate Knowledge of Harm

Unlike ordinary denials, evidence shows insurers knew their AI was wrong:

  • 90%+ reversal rate documented internally
  • Denial rates increased after AI deployment
  • Employees disciplined for overriding AI
  • Profit motive from low appeal rates

California SB 1120: The National Model

The “Physicians Make Decisions Act”

Effective: January 1, 2025
Sponsor: Senator Josh Becker (D-Menlo Park)

California became the first state to prohibit health insurers from using AI as the sole basis for denying claims.

Key Requirements

| Requirement | Details |
| --- | --- |
| Human review mandatory | Licensed healthcare providers must make medical necessity determinations |
| AI as tool only | AI may inform but cannot independently decide |
| Standard cases | Decision within 5 business days |
| Urgent cases | Decision within 72 hours |
| Retrospective review | Decision within 30 days |
| Non-discrimination | AI cannot discriminate against enrollees |
| Transparency | Insurers must disclose AI use in reviews |
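
The review deadlines in the table lend themselves to a simple compliance check. A minimal sketch, assuming the statutory clocks can be modeled as shown; the function is hypothetical, the business-day arithmetic is simplified (it skips weekends but not holidays), and the statute itself controls the operative definitions.

```python
# Minimal sketch of the SB 1120 decision deadlines from the table above.
# Hypothetical helper: simplified business-day math, no holiday handling.
from datetime import datetime, timedelta

def decision_deadline(received: datetime, case_type: str) -> datetime:
    if case_type == "urgent":
        return received + timedelta(hours=72)
    if case_type == "retrospective":
        return received + timedelta(days=30)
    # standard case: 5 business days
    deadline, business_days = received, 0
    while business_days < 5:
        deadline += timedelta(days=1)
        if deadline.weekday() < 5:  # Monday=0 .. Friday=4
            business_days += 1
    return deadline

# A standard request received Monday 2025-03-03 is due Monday 2025-03-10.
print(decision_deadline(datetime(2025, 3, 3, 9, 0), "standard"))
```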

Enforcement

The California Department of Managed Health Care oversees enforcement:

  • Audit authority — Regulators can audit denial rates
  • Penalty discretion — Fines for violations, missed deadlines
  • Compliance monitoring — Ongoing oversight of AI use

California Attorney General Warning (January 2025)

In January 2025, the California Attorney General issued guidance warning insurers about AI compliance, signaling aggressive enforcement intentions.

National Expansion

19 states are now considering similar legislation, according to Senator Becker. Congressional offices have also been contacted about federal legislation.

States with pending or proposed AI denial laws:

  • New York — AI transparency requirements
  • Pennsylvania — Algorithmic denial study
  • Illinois — AI disclosure requirements
  • Georgia — Medical necessity determination rules
  • New Jersey — Proposed AI review standards

CMS Regulatory Response

February 2024 Guidance

The Centers for Medicare and Medicaid Services clarified:

What AI Can Do:

  • Assist in predicting patient needs
  • Help estimate expected care duration
  • Support clinical decision-making

What AI Cannot Do:

  • Solely dictate coverage decisions
  • Override individualized medical assessments
  • Replace physician clinical judgment
  • Serve as automatic denial mechanisms

Litigation Implications

CMS guidance strengthens plaintiff arguments that:

  • AI-only denials violate Medicare rules
  • Insurers must consider individual circumstances
  • Algorithmic predictions aren’t coverage determinations
  • Non-compliance may constitute regulatory violations




Related Locations

  • Minneapolis — UnitedHealth headquarters, nH Predict litigation
  • California — SB 1120, Cigna PXDX litigation
  • Kentucky — Humana headquarters, Barrows litigation

Fighting an AI Claim Denial?

Major class actions against UnitedHealth, Cigna, and Humana are establishing that insurers can be held accountable for AI-driven claim denials. Courts are waiving exhaustion requirements and allowing fraud claims to proceed. If your healthcare claim was wrongfully denied—especially if you suspect algorithmic processing—connect with attorneys experienced in AI liability and insurance bad faith litigation.
