AI Claim Denial Litigation: The Cases Reshaping Healthcare#
Health insurers are facing an unprecedented wave of litigation over their use of artificial intelligence to deny healthcare claims. Class action lawsuits against UnitedHealth, Cigna, and Humana allege these companies deployed AI algorithms to systematically reject medically necessary care, overriding physician recommendations with automated systems whose alleged error rates approach 90%.
The stakes are enormous. These cases will determine whether insurance companies can use AI as a weapon against their own policyholders, and whether the legal system can hold them accountable when algorithms replace human judgment in life-or-death healthcare decisions.
The Three Major Class Actions#
Estate of Lokken v. UnitedHealth Group#
The landmark case establishing AI claim denial accountability.
Families sue over nH Predict algorithm with alleged 90% error rate. Gene Lokken, 91, paid $150,000 for care after UnitedHealth denied coverage despite his physicians' recommendations. Court waived exhaustion requirement, allowed breach of contract and good faith claims to proceed. Discovery deadline: December 22, 2025.
Case Details:
- Court: U.S. District Court, District of Minnesota
- Case No.: 0:23-cv-03514-JRT-SGE
- Judge: John R. Tunheim
- Filed: November 14, 2023
- Discovery Deadline: December 22, 2025
February 2025 Ruling:
Judge Tunheim issued a landmark decision allowing the case to proceed:
| Ruling | Significance |
|---|---|
| Exhaustion waived | Plaintiffs need not exhaust Medicare appeals first |
| Contract claims survive | Breach of contract theory moves forward |
| Good faith claims survive | Implied covenant claims proceed |
| Futility finding | Court found administrative appeals would be futile |
| Irreparable harm | Found plaintiffs suffered irreparable injury |
September 2025 Update:
The court rejected UnitedHealth’s attempt to narrow discovery scope, allowing plaintiffs broad access to internal documents about how nH Predict operates, how denial decisions are made, and how appeals are processed.
The Gene Lokken Story:
In May 2022, Gene Lokken, 91, fractured his leg and ankle. After he spent a month in a nursing home waiting for his injuries to heal, his doctor approved physical therapy. UnitedHealth/NaviHealth paid for only 19 days of therapy before declaring Lokken safe to go home, even as his doctors and therapists appealed the denials and noted that his muscles were “paralyzed and weak.”
To keep receiving care, Lokken’s family paid approximately $150,000 over the next year. He died in July 2023.
Why the Exhaustion Waiver Matters
Medicare Advantage beneficiaries ordinarily must work through a multi-level administrative appeals process before they can sue in federal court, a process that can take months or years. By finding that those appeals would be futile and that plaintiffs faced irreparable harm, the court allowed families to challenge the AI-driven denials directly.
Barrows v. Humana Inc.#
The fraud theory case that could change everything.
Medicare Advantage beneficiaries allege Humana committed fraud by using nH Predict AI to deny post-acute care while claiming human clinical review. August 2025: Court rules claims proceed without exhaustion; central question is whether Humana 'knowingly violated contractual obligations' while retaining premiums.
Case Details:
- Court: U.S. District Court, Western District of Kentucky
- Case No.: 3:23-cv-00654
- Judge: Rebecca Grady Jennings
- Status: Proceeding after August 2025 ruling
August 2025 Ruling:
Judge Jennings delivered a pivotal decision, framing the case around fraud—not merely coverage disputes:
“The central question isn’t whether Humana violated the Medicare Act in denying benefits, but whether the company knowingly violated its contractual obligations to beneficiaries while retaining premium proceeds.”
Key Legal Theory:
The plaintiffs argue they would have chosen different insurers had they known Humana was delegating legally required medical reviews to artificial intelligence. This fraud theory opens the door to:
- Actual damages
- Statutory damages
- Punitive damages
- Emotional distress compensation
- Restitution
- Injunctive relief prohibiting AI-enabled claim handling
Allegations Against Humana:
| Allegation | Evidence Cited |
|---|---|
| 14-day cutoff | Denials allegedly begin around day 14, regardless of individual medical need |
| 100-day entitlement ignored | Medicare allows up to 100 days post-acute care |
| 0.2% appeal rate exploited | Humana knows few patients appeal |
| Fraudulent misrepresentation | Promised individual assessment, delivered AI automation |
Humana’s Defense:
Humana claims it uses “augmented intelligence” with “human in the loop” decision-making, and that coverage decisions are based on patient needs, physician judgment, and CMS guidelines.
Kisting-Leung v. Cigna Corp.#
The batch-denial case exposing industrial-scale claim rejection.
Plaintiffs allege Cigna's PXDX algorithm auto-denied 300,000+ claims in two months. Doctors allegedly spent 1.2 seconds per claim, signing off on batch denials of 50 at once. Court found delegation to algorithm may violate plan terms and fiduciary duties.
Case Details:
- Court: U.S. District Court, Eastern District of California
- Judge: Dale Drozd
- Status: Proceeding after March 2025 ruling
The PXDX System:
Cigna’s PXDX algorithm matches diagnosis codes to pre-approved procedures. When claims don’t match the algorithm’s expectations, they’re automatically flagged for denial.
| Allegation | Scale |
|---|---|
| Claims auto-denied | 300,000+ in two months (2022) |
| Review time per claim | Average 1.2 seconds |
| Batch denials | Doctors signed off on 50 claims at once |
| Reversal rate | 80%+ overturned on appeal |
March 2025 Ruling:
Judge Drozd allowed the class action to proceed, finding:
- Plaintiffs adequately alleged Cigna violated plan terms
- Delegating decisions to automated algorithm may breach fiduciary duties
- Plaintiffs plausibly alleged that Cigna abused its discretion in interpreting plan terms
Cigna’s Defense:
Cigna claims PXDX “does not use AI” and is “simple sorting technology” that matches codes, similar to CMS systems. The insurer says it’s only used for “low-cost tests and procedures.”
How AI Denial Systems Work#
The nH Predict Algorithm#
Developed by NaviHealth (owned by UnitedHealth’s Optum subsidiary), nH Predict is among the most heavily litigated healthcare AI systems.
How It Works:
- Patient admitted to post-acute care (skilled nursing, rehab, home health)
- nH Predict compares patient to database of “similar” patients
- Algorithm predicts “expected” length of stay
- Insurer denies coverage when actual stay exceeds prediction
- Denial occurs regardless of treating physician’s recommendation
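The alleged decision logic can be summarized as a length-of-stay prediction compared against the days of care actually used. The sketch below is purely illustrative; it is not NaviHealth’s actual implementation, and the field names, median-based prediction, and example numbers are assumptions made for clarity.

```python
# Hypothetical illustration of the denial logic alleged in the complaints.
# NOT NaviHealth's actual code; names and the prediction step are assumptions.
from dataclasses import dataclass
from statistics import median

@dataclass
class PatientStay:
    diagnosis_code: str
    age: int
    days_of_care_used: int

def predict_expected_stay(patient: PatientStay, cohort_stays: list[int]) -> int:
    """Predict an 'expected' length of stay from a cohort of past patients.

    The lawsuits allege the real model compares each patient to a database of
    'similar' patients; a simple median stands in for that step here."""
    return int(median(cohort_stays))

def coverage_decision(patient: PatientStay, cohort_stays: list[int]) -> str:
    expected = predict_expected_stay(patient, cohort_stays)
    # Alleged behavior: coverage ends once actual care exceeds the prediction,
    # regardless of the treating physician's recommendation.
    if patient.days_of_care_used > expected:
        return "deny further coverage"
    return "approve"

# Example: a cohort whose median stay is 17 days triggers a denial on day 18,
# even if the treating physician has ordered weeks of additional rehabilitation.
print(coverage_decision(PatientStay("S82.8", 91, 20), [14, 17, 17, 19, 21]))
```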
Why It’s Controversial:
| Issue | Evidence |
|---|---|
| Override physician judgment | Doctors’ orders routinely overridden |
| Rigid criteria | No individual patient consideration |
| Employee pressure | Workers disciplined/terminated for deviating |
| Known inaccuracy | 90%+ reversal rate on appeal |
| Profit motive | Only 0.2% of patients appeal |
The PXDX Algorithm (Cigna)#
PXDX is Cigna’s diagnosis-procedure matching system.
How It Works:
- Claim submitted with diagnosis and procedure codes
- PXDX matches codes against pre-approved combinations
- Non-matching claims flagged for denial
- Doctors batch-approve denials (allegedly 1.2 seconds per claim)
- Patient receives denial letter
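A minimal sketch of the code-matching step described in the complaint appears below. The approved diagnosis-procedure pairs, function names, and batching logic are invented for demonstration; they are not Cigna’s actual PXDX rules.

```python
# Illustrative sketch of diagnosis-to-procedure matching as alleged in
# Kisting-Leung; the approved pairs and batch size are hypothetical examples.
APPROVED_PAIRS = {
    ("E11.9", "82947"),   # type 2 diabetes -> blood glucose test (example pairing)
    ("I10", "80061"),     # hypertension -> lipid panel (example pairing)
}

def flag_claim(diagnosis_code: str, procedure_code: str) -> str:
    """Flag any claim whose codes are not on the pre-approved list."""
    if (diagnosis_code, procedure_code) in APPROVED_PAIRS:
        return "approve"
    return "flag for denial"

def batch_denials(flagged_claims: list[dict], batch_size: int = 50) -> list[list[dict]]:
    """Group flagged claims into sign-off batches, mirroring the allegation that
    reviewing physicians approved denials roughly 50 at a time."""
    return [flagged_claims[i:i + batch_size]
            for i in range(0, len(flagged_claims), batch_size)]
```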
Why It’s Controversial:
- No individual review — Claims denied based solely on code matching
- Batch processing — Doctors sign off on 50 denials at once
- Speed over accuracy — 1.2-second average review time
- High error rate — 80%+ reversed on appeal
The Business Model of Wrongful Denials
Plaintiffs allege the economics are straightforward: because only about 0.2% of patients appeal, an insurer that denies claims en masse pays out on only the small fraction that are appealed and reversed, even when 80-90% of appealed denials are overturned. On this theory, wrongful denial is not an error rate but a cost-saving strategy.
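Using the figures alleged in the complaints (a 0.2% appeal rate and roughly 90% of appeals reversed), a back-of-the-envelope calculation shows why automated denials could save money even when most denials are wrong. The claim volume and dollar amounts below are hypothetical, chosen only to illustrate the arithmetic.

```python
# Back-of-the-envelope illustration using the complaints' alleged figures.
# The claim count and average claim value are hypothetical.
denied_claims = 100_000          # wrongful denials (hypothetical volume)
avg_claim_value = 10_000         # dollars per claim (hypothetical)
appeal_rate = 0.002              # only ~0.2% of patients appeal (alleged)
reversal_rate = 0.90             # ~90% of appeals succeed (alleged)

paid_without_ai = denied_claims * avg_claim_value
paid_with_ai = denied_claims * appeal_rate * reversal_rate * avg_claim_value

print(f"Paid if all claims honored: ${paid_without_ai:,.0f}")
print(f"Paid after AI denials:      ${paid_with_ai:,.0f}")
# Under these assumptions, roughly 99.8% of the wrongfully denied value
# is never paid out, because so few patients appeal.
```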
Legal Theories in AI Denial Litigation#
ERISA Claims (Employer-Sponsored Plans)#
Most employer-sponsored health plans are governed by ERISA, which provides specific avenues for relief:
Breach of Fiduciary Duty:
- Plan administrators must act in beneficiaries’ interest
- Delegating to AI that maximizes denials may breach this duty
- Relying on AI with an alleged 90% error rate may violate fiduciary standards
Failure to Follow Plan Terms:
- If the plan says “clinical staff” make decisions, AI-only decisions may violate its terms
- Breach of contract for not following policy language
- Courts have found this theory can survive dismissal
ERISA Limitations:
- Damages generally limited to denied benefits
- No punitive damages under ERISA
- But attorney’s fees available if plaintiff wins
State Law Claims (Individual/Marketplace Plans)#
Non-ERISA plans allow broader legal theories:
Bad Faith Insurance:
- Unreasonable denial of valid claims
- Using AI alleged to be wrong roughly 90% of the time
- Failure to investigate claims properly
- Punitive damages potentially available
Fraud:
- Misrepresenting that humans review claims
- Retaining premiums while denying coverage
- Concealing AI’s role in decision-making
- Damages include emotional distress, punitive damages
Consumer Protection:
- State unfair/deceptive practices acts
- California requires “thorough, fair and objective” investigation
- Batch AI denials allegedly violate these standards
Breach of Contract:
- Policy promises coverage for medically necessary care
- AI denials without individual review breach terms
- Good faith and fair dealing violations
Medicare Advantage Specific Claims#
Administrative Exhaustion:
- Traditionally required before federal lawsuit
- Courts now waive it where appeals would be futile or cause irreparable harm
- Lokken and Barrows cases establish this precedent
CMS Regulatory Violations:
- February 2024 CMS guidance: AI cannot solely dictate coverage
- Insurers must make individualized determinations
- Violations strengthen plaintiff claims
Evidence and Discovery in AI Denial Cases#
Critical Evidence Categories#
| Evidence Type | What It Shows | How to Obtain |
|---|---|---|
| Algorithm design docs | How AI makes decisions | Discovery demand |
| Training data | What biases exist | Expert analysis |
| Denial rate statistics | Systematic patterns | CMS data, discovery |
| Appeal reversal data | AI error rates | Internal records |
| Employee communications | Pressure to follow AI | Email discovery |
| Termination records | Retaliation for deviation | Personnel files |
| Audit logs | Individual claim handling | System records |
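To make the “audit logs” category concrete, the record below is a hypothetical example of the kind of per-claim data plaintiffs seek in discovery. The field names are assumptions about what such systems typically log, not a known schema from UnitedHealth, Cigna, or Humana.

```python
# Hypothetical audit-log record illustrating fields discovery requests target.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ClaimAuditEntry:
    claim_id: str
    algorithm_version: str          # which model/rule set produced the recommendation
    ai_recommendation: str          # e.g. "deny coverage after day 17"
    reviewer_id: str                # clinician who signed off
    seconds_spent_in_review: float  # key evidence of batch processing
    final_decision: str
    decided_at: datetime
```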
September 2025 Discovery Ruling#
The UnitedHealth court’s rejection of discovery limitations is a major plaintiff victory:
- Broad scope preserved — Plaintiffs can investigate how nH Predict operates
- Internal documents accessible — Emails, memos, training materials
- Appeal processing exposed — How insurers handle challenges
- Pattern evidence — Systematic denial practices revealed
Key Discovery Requests#
Effective AI denial litigation requires demanding:
- Algorithm documentation — Code, logic rules, training data
- Validation studies — Internal accuracy assessments
- Override procedures — How clinicians can deviate from AI
- Performance metrics — Denial rates, appeal rates, reversal rates
- Employee policies — Pressure to follow AI recommendations
- Communication records — Emails discussing AI accuracy/concerns
Data Preservation Is Critical
Algorithm versions, training data, audit logs, and internal communications can be overwritten or deleted in the ordinary course of business. Plaintiffs’ counsel typically send preservation demands early so this evidence still exists when discovery begins.
How AI Denial Cases Differ from Ordinary Claims#
Systemic vs. Individual Harm#
Traditional claim denial cases involve individual coverage disputes. AI denial cases demonstrate systemic patterns affecting millions:
| Factor | Traditional Denial | AI Denial |
|---|---|---|
| Scale | Individual claim | Millions of claims |
| Review | Human consideration | Automated batch processing |
| Appeal rate | Varies | Exploits 0.2% appeal rate |
| Error rate | Case-specific | Alleged 80-90% systematic |
| Intent | Possible mistake | Allegedly deliberate profit strategy |
Speed and Volume#
AI enables denials at unprecedented scale:
- 300,000+ claims denied in two months (Cigna)
- 1.2 seconds average review time
- Batch processing of 50 denials at once
- No meaningful human review before denial
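Taken at face value, the alleged figures imply strikingly little physician time per denial. A quick calculation, purely illustrative:

```python
# Quick arithmetic on the alleged Cigna figures; illustrative only.
claims_denied = 300_000
seconds_per_claim = 1.2
batch_size = 50

total_physician_hours = claims_denied * seconds_per_claim / 3600
seconds_per_batch = batch_size * seconds_per_claim

print(f"Total review time for 300,000 denials: {total_physician_hours:.0f} hours")
print(f"Time to sign off on a 50-claim batch:  {seconds_per_batch:.0f} seconds")
# ~100 hours of review in total, and about one minute per 50-claim batch.
```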
Disparate Impact Potential#
AI denial systems may disproportionately affect:
- Elderly patients — More complex conditions, longer recovery needs
- Patients with disabilities — Care needs that don’t fit algorithmic patterns
- Racial minorities — Training data may encode historical disparities
- Low-income patients — Less likely to appeal, more vulnerable to denials
Corporate Knowledge of Harm#
Unlike ordinary denials, plaintiffs point to evidence that insurers knew their AI was wrong:
- 90%+ reversal rate documented internally
- Denial rates increased after AI deployment
- Employees disciplined for overriding AI
- Profit motive from low appeal rates
California SB 1120: The National Model#
The “Physicians Make Decisions Act”#
Effective: January 1, 2025
Sponsor: Senator Josh Becker (D-Menlo Park)
California became the first state to prohibit health insurers from using AI as the sole basis for denying claims.
Key Requirements#
| Requirement | Details |
|---|---|
| Human review mandatory | Licensed healthcare providers must make medical necessity determinations |
| AI as tool only | AI may inform but cannot independently decide |
| Standard cases | Decision within 5 business days |
| Urgent cases | Decision within 72 hours |
| Retrospective review | Decision within 30 days |
| Non-discrimination | AI cannot discriminate against enrollees |
| Transparency | Insurers must disclose AI use in reviews |
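As a sketch of how a compliance check against the statutory time limits in the table above might look, the snippet below computes decision deadlines. The function and category names are illustrative assumptions, not statutory terms of art.

```python
# Sketch of a deadline check based on SB 1120's review time limits
# (5 business days standard, 72 hours urgent, 30 days retrospective).
from datetime import datetime, timedelta

def add_business_days(start: datetime, days: int) -> datetime:
    current = start
    while days > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:   # Monday through Friday
            days -= 1
    return current

def decision_deadline(received: datetime, category: str) -> datetime:
    if category == "urgent":
        return received + timedelta(hours=72)
    if category == "retrospective":
        return received + timedelta(days=30)
    return add_business_days(received, 5)   # standard review

# Example: a standard request received Friday, June 6, 2025 is due
# the following Friday, five business days later.
print(decision_deadline(datetime(2025, 6, 6), "standard"))
```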
Enforcement#
The California Department of Managed Health Care oversees enforcement:
- Audit authority — Regulators can audit denial rates
- Penalty discretion — Fines for violations, missed deadlines
- Compliance monitoring — Ongoing oversight of AI use
California Attorney General Warning (January 2025)#
In January 2025, the California Attorney General issued guidance warning insurers about AI compliance, signaling aggressive enforcement intentions.
National Expansion#
19 states are now considering similar legislation, according to Senator Becker. Congressional offices have also been contacted about federal legislation.
States with pending or proposed AI denial laws:
- New York — AI transparency requirements
- Pennsylvania — Algorithmic denial study
- Illinois — AI disclosure requirements
- Georgia — Medical necessity determination rules
- New Jersey — Proposed AI review standards
CMS Regulatory Response#
February 2024 Guidance#
The Centers for Medicare and Medicaid Services clarified:
What AI Can Do:
- Assist in predicting patient needs
- Help estimate expected care duration
- Support clinical decision-making
What AI Cannot Do:
- Solely dictate coverage decisions
- Override individualized medical assessments
- Replace physician clinical judgment
- Serve as automatic denial mechanisms
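A minimal sketch of the kind of human-in-the-loop gate this guidance contemplates: the algorithm may recommend, but a denial cannot issue without an individualized determination by a licensed clinician. All names and structure here are assumptions for the sketch, not regulatory terminology.

```python
# Illustrative human-in-the-loop gate reflecting the February 2024 CMS guidance:
# an algorithm may inform the decision but cannot be the sole basis for denial.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ClinicianReview:
    reviewer_id: str
    considered_individual_circumstances: bool
    determination: str              # "approve" or "deny"

def final_coverage_decision(ai_recommendation: str,
                            review: Optional[ClinicianReview]) -> str:
    # Approvals may rely on the algorithm alone.
    if ai_recommendation == "approve":
        return "approve"
    # A denial requires an individualized determination by a licensed clinician.
    if review is None or not review.considered_individual_circumstances:
        return "pend for clinician review"
    return review.determination
```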
Litigation Implications#
CMS guidance strengthens plaintiff arguments that:
- AI-only denials violate Medicare rules
- Insurers must consider individual circumstances
- Algorithmic predictions aren’t coverage determinations
- Non-compliance may constitute regulatory violations
Related Practice Areas#
- Medical AI — Healthcare AI diagnosis and treatment liability
- AI Chatbots — AI-caused psychological harm
- AI Hiring Discrimination — Algorithmic employment discrimination
Related Resources#
- AI Insurance Claim Denials — General patient guide
- AI Legislation & Regulation — Federal and state AI laws
- Understanding Liability — Legal frameworks
- Filing a Claim — Step-by-step claims process
Related Locations#
- Minneapolis — UnitedHealth headquarters, nH Predict litigation
- California — SB 1120, Cigna PXDX litigation
- Kentucky — Humana headquarters, Barrows litigation
Fighting an AI Claim Denial?
Major class actions against UnitedHealth, Cigna, and Humana are establishing that insurers can be held accountable for AI-driven claim denials. Courts are waiving exhaustion requirements and allowing fraud claims to proceed. If your healthcare claim was wrongfully denied—especially if you suspect algorithmic processing—connect with attorneys experienced in AI liability and insurance bad faith litigation.
Get Free Consultation