AI Insurance Claim Denials: Legal Guide for Patients

When Algorithms Deny Your Healthcare

Health insurance companies are using artificial intelligence to deny claims on an industrial scale—and they’re getting caught. Class action lawsuits against UnitedHealth and Cigna allege these insurers deploy AI algorithms that override physician recommendations and automatically reject medically necessary care, with as many as 90% of appealed denials later overturned.

The human cost is devastating. Elderly patients forced to leave skilled nursing facilities too soon. Families paying out of pocket for care their insurance should cover. Patients who forgo treatment entirely because they can’t afford it. And in the worst cases, patients whose health deteriorates or who die after being denied the care they need.

  • 90%: nH Predict denials overturned on appeal
  • 300K+: claims auto-denied by Cigna’s PxDx in two months (2022)
  • 1.2 seconds: average review time per Cigna claim
  • 22.7%: UnitedHealth’s post-acute care denial rate in 2022

The AI Denial Landscape

How Insurance AI Works

Insurance companies use predictive algorithms to estimate how long a patient is expected to need a given type of care. When the algorithm says the patient should be “ready” for discharge—regardless of what their doctor says—the insurer denies coverage for continued care.

The nH Predict System (UnitedHealth/NaviHealth):

  • Predicts “expected” length of stay for post-acute care
  • Used for skilled nursing facilities, rehab, home health
  • Overrides physician clinical judgment (see the sketch below)
  • Allegedly has 90% error rate (reversed on appeal)
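
nH Predict itself is proprietary, and its inputs, outputs, and thresholds are not public. Purely as a hypothetical illustration of the length-of-stay cutoff the complaints describe, a minimal Python sketch might look like this (every name and number is invented):

```python
# Hypothetical sketch only: nH Predict is proprietary. This illustrates the
# alleged behavior, not the actual model.

def coverage_decision(days_in_facility: int, predicted_stay_days: float) -> str:
    """Deny further coverage once the patient exceeds the model's predicted stay,
    regardless of what the treating physician recommends (the core allegation)."""
    if days_in_facility <= predicted_stay_days:
        return "continue coverage"
    return "deny further days"

# A patient on day 20 of a stay the model predicted would last 14.5 days:
print(coverage_decision(days_in_facility=20, predicted_stay_days=14.5))  # deny further days
```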

The PxDx System (Cigna):

  • Matches diagnosis codes to pre-approved procedures
  • Auto-denies claims that don’t match
  • Processes claims in batches without individual review (see the sketch after the numbered steps below)
  • Doctors allegedly spend 1.2 seconds per claim

How Denials Work:

  1. Patient receives care ordered by physician
  2. Claim submitted to insurer
  3. AI algorithm reviews claim against database
  4. Algorithm denies claim based on predictive model
  5. Patient receives denial letter
  6. Appeal process (if pursued) often reverses denial
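
Neither PxDx nor any insurer’s internal system is public, so any code is necessarily speculative. Purely as an illustration of the code-matching and batch processing described above, a minimal Python sketch might look like the following; every diagnosis code, procedure code, and mapping is invented:

```python
from dataclasses import dataclass

# Every code and mapping below is hypothetical. This only illustrates the
# matching-plus-batch pattern alleged in the complaint, not any real system.

APPROVED_PROCEDURES = {
    "DIAG-001": {"PROC-100", "PROC-101"},  # hypothetical diagnosis -> allowed procedures
    "DIAG-002": {"PROC-200"},
}

@dataclass
class Claim:
    claim_id: str
    diagnosis_code: str
    procedure_code: str

def auto_review(claim: Claim) -> str:
    """Approve only if the procedure is on the pre-approved list for the
    diagnosis; otherwise deny, with no clinician reading the chart."""
    approved = APPROVED_PROCEDURES.get(claim.diagnosis_code, set())
    return "approved" if claim.procedure_code in approved else "denied"

def batch_review(claims: list[Claim]) -> dict[str, str]:
    # Decisions for the whole batch are generated at once; a reviewer could
    # then "click and submit" them together, as the complaint alleges.
    return {c.claim_id: auto_review(c) for c in claims}

print(batch_review([
    Claim("A1", "DIAG-001", "PROC-100"),  # matches the lookup -> approved
    Claim("A2", "DIAG-002", "PROC-999"),  # no match -> denied without individual review
]))
# {'A1': 'approved', 'A2': 'denied'}
```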

The Scale of the Problem

UnitedHealth:

  • Denial rate for post-acute care jumped from 10.9% (2020) to 22.7% (2022)
  • More than 90% of appealed denials overturned
  • Over 80% of preauthorization denials reversed on appeal
  • Yet only 0.2% of policyholders appeal denied claims

Cigna:

  • 300,000+ claims auto-denied in just two months (2022)
  • Doctors reviewing denials average 1.2 seconds per claim
  • “Click and submit” batch approvals of 50 denials at once
  • 80% of appealed denials overturned

Why Insurers Profit from AI Denials

Insurance companies know their AI systems are wrong most of the time—but they also know most patients won’t appeal. With only 0.2% of patients appealing denials, insurers save money on every wrongful denial that goes unchallenged, even if they lose 90% of appeals. The system is designed to exploit patient exhaustion and confusion.
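
A back-of-the-envelope calculation makes the incentive concrete. In the sketch below, the claim volume and dollar figures are invented for illustration; only the 0.2% appeal rate and roughly 90% reversal rate come from the figures cited in this guide.

```python
# Back-of-the-envelope sketch. The claim count and average cost are invented;
# the appeal and reversal rates are the figures cited in this guide.

claims_denied = 100_000      # hypothetical number of wrongful denials
avg_claim_value = 10_000     # hypothetical average cost of the denied care, in dollars
appeal_rate = 0.002          # 0.2% of policyholders appeal
reversal_rate = 0.90         # roughly 90% of appeals succeed

total_value = claims_denied * avg_claim_value
paid_after_appeals = total_value * appeal_rate * reversal_rate
never_paid = total_value - paid_after_appeals

print(f"Care value denied:            ${total_value:,.0f}")         # $1,000,000,000
print(f"Paid out after appeals:       ${paid_after_appeals:,.0f}")  # $1,800,000
print(f"Never paid (no appeal filed): ${never_paid:,.0f}")          # $998,200,000
```

Under these assumed numbers, even losing 90% of appeals costs the insurer well under 1% of the care it declined to cover, which is why low appeal rates matter more to the bottom line than high reversal rates.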

Major Lawsuits

Estate of Lokken v. UnitedHealth Group

Court: U.S. District Court, District of Minnesota
Case No.: 0:23-cv-03514-JRT-SGE
Status: Proceeding (breach of contract and good faith claims survive)

Key Allegations:

  • UnitedHealth knowingly used nH Predict despite 90% error rate
  • Algorithm overrides physician medical necessity determinations
  • Patients denied medically necessary post-acute care
  • Denials led to worsening health and deaths

February 2025 Ruling:

  • Judge allowed breach of contract and good faith claims to proceed
  • Found plaintiffs adequately alleged UnitedHealth broke policy terms requiring clinical staff decisions
  • Waived administrative exhaustion requirement due to futility and irreparable harm
  • Denied UnitedHealth’s attempt to limit discovery scope

September 2025 Update: The court rejected UnitedHealth’s attempt to narrow the scope of discovery, allowing broad inquiry into how nH Predict operates, how denial decisions are made, and how appeals are processed. This ruling gives plaintiffs access to internal documents that may reveal the full extent of the company’s algorithmic denial practices.

Kisting-Leung v. Cigna Corp.

Court: U.S. District Court, Eastern District of California
Filed: 2023
Status: Proceeding after March 2025 ruling

Key Allegations:

  • PxDx algorithm auto-denies claims without individual review
  • Doctors spend 1.2 seconds on average per claim
  • “Click and submit” batch denials of 50 claims at once
  • 300,000+ claims denied in just two months (2022)
  • Violates California law requiring “thorough, fair and objective” investigation

March 2025 Ruling: Judge Dale Drozd allowed the class action to proceed, finding that:

  • Plaintiffs adequately alleged Cigna violated plan terms
  • Delegating decisions to an automated algorithm may breach fiduciary duties
  • Cigna’s interpretation of its own discretion amounted to an “abuse of discretion”

Cigna’s Defense: Cigna claims PxDx “does not use AI” and is “simple sorting technology” that matches codes, similar to CMS systems. The insurer says it’s only used for “low-cost tests and procedures.”

Humana Litigation

Court: U.S. District Court, Western District of Kentucky
Status: Proceeding (class action)

Humana was sued in 2023 for allegedly using the same nH Predict algorithm to wrongfully deny Medicare Advantage claims. The case parallels the UnitedHealth litigation, with similar allegations of algorithm overriding physician judgment and high reversal rates on appeal.

Key Allegations:

  • Humana allegedly works with NaviHealth to keep skilled nursing stays within 1% of algorithm predictions
  • Employees who deviate from nH Predict recommendations are “disciplined and terminated”
  • Roughly 2% of Humana policyholders appeal denied claims
  • One plaintiff (Sharon Merkley) received seven denials for the same care within 30 days

2025 Case Status: The court allowed the class action to proceed, waiving the exhaustion requirement because:

  • Repeated denials by Humana risked “irreparable harm” to plaintiffs
  • Patients faced impossible choices: stay and risk paying bills if appeals fail, or forgo care while waiting
  • Judge found exhaustion would be “futile” given the pattern of repeated denials

Humana’s Defense: Humana claims it uses “augmented intelligence” that maintains “human in the loop” decision-making, and that coverage decisions are based on patient needs, physician judgment, and CMS guidelines.

Senate Investigation Findings

U.S. Senate Permanent Subcommittee on Investigations

Democrats on the Senate subcommittee issued a scathing October 2024 report on UnitedHealth’s AI denial practices:

Key Findings:

Denial Rate Explosion:

  • Post-acute care denial rate jumped from 10.9% (2020) to 22.7% (2022)
  • Coincided with increased reliance on nH Predict algorithm

Algorithm Overriding Doctors:

  • Physicians’ clinical judgments routinely overridden
  • Case managers pressured to follow algorithm recommendations
  • Even when clinicians and families objected

Exploiting Low Appeal Rates:

  • Only 0.2% of policyholders appeal denied claims
  • Insurers profit from every unchallenged wrongful denial
  • System designed to exhaust patients into acceptance

Broader Pattern:

  • Multiple Medicare Advantage plans leveraging AI to reject claims
  • AI denials conflicting with Medicare coverage rules
  • Pattern of prioritizing cost savings over patient care

CMS Regulatory Response

February 2024 Guidance

The Centers for Medicare and Medicaid Services issued guidance clarifying:

What Algorithms Can Do:

  • Assist in predicting patient needs
  • Help estimate expected care duration
  • Support clinical decision-making

What Algorithms Cannot Do:

  • Solely dictate coverage decisions
  • Override individualized medical assessments
  • Replace physician clinical judgment
  • Serve as automatic denial mechanisms

Enforcement Implications:

  • Insurers using AI as sole basis for denials may violate CMS rules
  • Medicare Advantage plans must make individualized determinations
  • Algorithm predictions cannot substitute for case-by-case review

What This Means for Patients

CMS guidance strengthens arguments that:

  • AI-only denials violate Medicare rules
  • Insurers must consider individual circumstances
  • Algorithmic predictions aren’t coverage determinations
  • Appeals citing CMS guidance may be more successful

State Legislation: California Leads the Way

California SB 1120 — The “Physicians Make Decisions Act”

Effective: January 1, 2025
Sponsor: State Senator Josh Becker (D-Menlo Park)

California became the first state to prohibit health insurers from using AI as the sole basis for denying, delaying, or modifying claims. The landmark legislation sets strict requirements for how insurers can use automated systems.

Key Requirements:

  • Human review required: Only licensed physicians or qualified healthcare providers may make medical necessity determinations
  • AI as a tool only: AI tools may inform but not independently decide coverage
  • Strict deadlines: 5 business days for standard cases, 72 hours for urgent cases, 30 days for retrospective review
  • Enforcement: The California DMHC will audit denial rates and ensure compliance
  • Penalties: Significant administrative penalties for willful violations

Background: According to the California Nurses Association, approximately 26% of insurance claims are denied—one of the factors that inspired the legislation.

National Model for AI Claim Denial Laws

California’s SB 1120 is setting a national precedent. According to Senator Becker, 19 states are now considering similar laws. The legislation doesn’t ban AI in healthcare—it ensures AI complements rather than replaces human decision-making.

Other State Legislative Activity

States considering similar AI claim denial regulation:

  • New York — Proposed AI transparency requirements for insurers
  • Pennsylvania — Studying algorithmic denial practices
  • Georgia — Considering medical necessity determination rules
  • Illinois — Proposed AI disclosure requirements

What This Means for Patients:

  • California residents: Denials based solely on AI may be challengeable under SB 1120
  • Other states: Watch for similar legislation in your state
  • Appeals: Cite state law requirements for human review where applicable

Your Rights as a Patient

The Appeal Process

Internal Appeal:

  1. Request internal review within deadline (often 180 days)
  2. The insurer must use a reviewer different from whoever made the original denial
  3. Provide supporting documentation from physician
  4. Cite CMS guidance if Medicare Advantage plan

External Review:

  1. If internal appeal denied, request external review
  2. Independent third-party reviewer evaluates claim
  3. External decision is binding on insurer
  4. Often more favorable than internal review

Key Deadlines (a rough date sketch follows this list):

  • Internal appeal: Usually 180 days from denial
  • External review: 4 months from internal appeal denial
  • Expedited review: Available if delay would harm health
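
For planning purposes, the sketch below turns these windows into concrete dates, assuming the common 180-day internal window and treating the 4-month external window as 120 days counted from the internal appeal denial. Your denial letter, plan documents, and state law control the real deadlines, so verify them before relying on any estimate.

```python
from datetime import date, timedelta

# Rough planning aid only. Assumes the common windows cited above: 180 days
# for the internal appeal and about 4 months (treated here as 120 days) for
# external review, counted from the internal appeal denial.

def appeal_deadlines(denial_date: date, internal_decision_date: date | None = None) -> dict[str, date]:
    deadlines = {"internal_appeal_by": denial_date + timedelta(days=180)}
    if internal_decision_date is not None:
        deadlines["external_review_by_estimate"] = internal_decision_date + timedelta(days=120)
    return deadlines

# Example: claim denied January 15, 2025; internal appeal denied June 1, 2025.
print(appeal_deadlines(date(2025, 1, 15), date(2025, 6, 1)))
# {'internal_appeal_by': datetime.date(2025, 7, 14),
#  'external_review_by_estimate': datetime.date(2025, 9, 29)}
```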

Why Appeals Succeed

  • 90%+ of UnitedHealth nH Predict denials overturned on appeal
  • 80%+ of Cigna PxDx denials overturned on appeal

Appeals succeed because:

  • Algorithms ignore individual circumstances
  • Human reviewers see what algorithms miss
  • Physician documentation supports medical necessity
  • Insurers often can’t defend AI decisions in detail

Don't Accept Denial Without Appealing

The data is clear: most AI-generated denials are wrong. If your claim was denied, appeal. Request your medical records, get a letter from your physician supporting medical necessity, and cite CMS guidance. The odds are heavily in your favor—but only if you appeal.

Documentation to Gather

  • Denial letter: Shows the reason for denial and appeal deadlines
  • Medical records: Support medical necessity
  • Physician letter: Expert opinion on your care needs
  • Treatment plan: Shows what care was ordered
  • CMS coverage criteria: If Medicare Advantage, shows the applicable rules
  • Prior appeal results: If you previously appealed the same issue

Legal Claims Against Insurers

ERISA Claims (Employer Plans)

If your insurance is through an employer, ERISA governs your claims:

Breach of Fiduciary Duty:

  • Insurer must act in beneficiaries’ interest
  • Using AI to maximize denials may breach duty
  • Delegating decisions to an algorithm without oversight is itself problematic

Failure to Follow Plan Terms:

  • If plan says clinical staff make decisions, AI violates terms
  • Breach of contract for not following policy language
  • Kisting-Leung case shows this theory can survive dismissal

Limitations:

  • ERISA generally limits damages to denied benefits
  • No punitive damages under ERISA
  • But attorney’s fees available if you win

State Law Claims (Individual/Marketplace Plans)

Non-ERISA plans may allow broader claims:

Bad Faith Insurance:

  • Unreasonable denial of valid claims
  • Using AI known to be 90% wrong
  • Failing to investigate claims properly

Consumer Protection:

  • State unfair/deceptive practices laws
  • California requires “thorough, fair and objective” investigation
  • AI batch denials may violate these standards

Breach of Contract:

  • Policy promises coverage for medically necessary care
  • AI denials without individual review breach terms
  • Good faith and fair dealing violations

Medicare Advantage Specific

Administrative Appeals:

  • Must often exhaust Medicare appeals before suing
  • Courts may waive if futility or irreparable harm shown
  • Lokken court waived requirement on these grounds

CMS Complaints:

  • File complaints with CMS about plan practices
  • Can trigger regulatory investigation
  • Creates record of insurer misconduct

Evidence Preservation

If Your Claim Was Denied

Immediate Steps:

  1. Keep the denial letter — Documents reason and deadlines
  2. Request your file — Insurer must provide claim documentation
  3. Get physician statement — Supporting medical necessity
  4. Document communications — Save all calls, emails, letters
  5. Note dates and names — Who you spoke with, when

For Potential Litigation

  • Complete denial correspondence: Shows the insurer’s reasoning
  • Medical records: Prove medical necessity
  • Physician declarations: Expert support for your care needs
  • Appeal documentation: Shows exhaustion of remedies
  • Policy/plan documents: What coverage you were promised
  • Internal review records: How the insurer processed your claim

Class Action Considerations

If you’re considering joining a class action:

  • Document your specific harm
  • Keep records of out-of-pocket costs
  • Note any health consequences from denial
  • Preserve all communications with insurer
  • Consult attorney about individual vs. class claims

December 2024: Public Scrutiny Intensifies

The AI claim denial issue exploded into national consciousness in December 2024 following tragic events that sparked unprecedented public attention to insurance company practices.

A National Reckoning

Following the December 4, 2024 killing of UnitedHealthcare CEO Brian Thompson outside a Manhattan investors conference, social media erupted with hostile commentary about the company’s coverage denial practices. Bullet casings found at the scene were reportedly inscribed with words including “deny,” “delay,” and “depose”—terms patients and critics have long associated with insurance company tactics.

Public Response:

  • Millions of social media posts sharing personal claim denial stories
  • Congressional calls for renewed investigation of AI denial practices
  • Increased media scrutiny of nH Predict and similar algorithms
  • Patient advocacy groups amplifying class action awareness
  • State insurance commissioners announcing investigations

Litigation Implications

The intense public attention may impact ongoing litigation:

  • Jury pool awareness — Potential jurors now have heightened awareness of AI denial issues
  • Discovery pressure — Public interest may support broader document disclosure
  • Settlement leverage — Insurers face reputational pressure to resolve claims
  • Regulatory action — State and federal regulators face pressure to act

December 2024 Context

While the December 2024 events were tragic and violence is never justified, the public response revealed the depth of anger over AI-driven claim denials. Millions shared personal stories of denied care, delayed treatment, and loved ones who suffered or died after coverage was refused. This national conversation has brought unprecedented attention to the AI denial issue and may accelerate legislative and litigation responses.

Insurance Claim Wrongfully Denied?

If your health insurance claim was denied—especially if you suspect AI or algorithmic processing—you may have grounds for appeal or litigation. Connect with attorneys experienced in insurance bad faith, ERISA claims, and emerging AI liability issues to understand your options.
