
Legal AI Hallucinations: When Lawyers Cite Fake Cases


The Fake Case Epidemic

A troubling pattern has emerged in courtrooms across America: lawyers are being sanctioned—sometimes severely—for submitting legal briefs filled with case citations that don’t exist. These “hallucinated” cases, generated by AI tools like ChatGPT, look convincing but are completely fabricated. The consequences for attorneys range from public humiliation to five-figure fines and suspension from practice.

What began as a single novelty case in 2023 has become a daily occurrence. As of late 2025, researchers have documented more than 200 court cases involving AI-generated fake citations, and the pace is accelerating from roughly two new incidents per week to two or three per day.

  • 206+ documented cases: AI hallucination incidents in courts
  • 17-88% hallucination rate: error frequency across legal AI tools
  • $10,000 highest state fine: California AI citation sanction
  • 90-day attorney suspension: Colorado AI misconduct case

The Science of AI Hallucinations

Why AI Fabricates Legal Citations

Large language models like ChatGPT, Claude, and Google Gemini generate text by predicting what words should come next based on patterns in their training data. When asked for legal citations, these systems often produce responses that look like real case names—complete with party names, court identifiers, volume numbers, and page citations—but are entirely invented.

Stanford HAI Research Findings:

Researchers at Stanford’s Human-Centered Artificial Intelligence Institute tested state-of-the-art AI models on legal questions and found alarming hallucination rates:

| AI System | Hallucination Rate | Notes |
|---|---|---|
| General-purpose LLMs | 58-88% | ChatGPT, Claude, Gemini |
| Lexis+ AI | 17%+ | Legal-specific tool |
| Ask Practical Law AI | 17%+ | Legal-specific tool |
| Westlaw AI | Lower but present | Legal-specific tool |

Even legal AI tools built specifically for attorneys still hallucinate on more than one in six queries.

The “Confident Fabrication” Problem

AI hallucinations are particularly dangerous in legal contexts because:

  • They look authentic — Fabricated citations follow correct formatting conventions
  • They include plausible details — Party names, dates, and holdings that seem reasonable
  • They’re presented with confidence — No hedging or uncertainty markers
  • They may cite real cases incorrectly — Real case names with wrong holdings or quotes
  • They resist detection — Standard database searches may not catch subtle fabrications

The Verification Trap

Some AI systems, when asked to verify their own citations, will confidently confirm that fabricated cases exist—or generate additional fake citations as “supporting” authority. Never rely on AI to verify AI-generated citations.

Landmark Sanctions Cases

Mata v. Avianca: The Case That Started It All

Court: U.S. District Court, Southern District of New York
Judge: P. Kevin Castel
Sanction: $5,000 fine
Date: June 2023

The case that introduced “ChatGPT lawyer” to the legal lexicon. Attorney Steven Schwartz used ChatGPT to research a personal injury case against Avianca Airlines and submitted a brief citing six cases that didn’t exist. When opposing counsel couldn’t locate the cases, the court investigated.

Judge Castel found that “six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations.” Schwartz and colleague Peter LoDuca were fined $5,000 for submitting briefs with fabricated citations.

Key Takeaway: Schwartz claimed he didn’t know ChatGPT could fabricate cases. The court held that ignorance of AI limitations doesn’t excuse professional responsibility to verify citations.

Sanctions for Fake Citations

Mata v. Avianca, Inc.

$5,000
Sanctions Order

Attorney Steven Schwartz submitted brief with six fabricated cases generated by ChatGPT. Court found 'bogus judicial decisions with bogus quotes and bogus internal citations.' First major AI hallucination sanctions case, establishing that attorneys bear full responsibility for verifying AI-generated research.

S.D.N.Y. 2023
Sanctions for AI Errors

Lindell Defamation Case (Kachouroff)

$6,000
Sanctions Order

Attorneys representing MyPillow CEO Mike Lindell fined $3,000 each for filing brief with 24+ errors including hallucinated cases. Court found AI was used to prepare filing 'filled with mistakes and citations of cases that didn't exist.' Demonstrates ongoing risk despite widespread awareness.

D. Colorado 2025

California’s Record Fine

Court: California Superior Court
Attorney: Amir Mostafavi
Sanction: $10,000 fine
Date: September 2025

A California court issued what appears to be the largest state-court fine over AI fabrications. The court’s opinion stated that 21 of 23 quotes from cases cited in the attorney’s opening brief were completely fabricated.

The ruling specifically noted that the attorney had received multiple opportunities to correct the errors but failed to do so, demonstrating that the fabrications weren’t a one-time oversight but a pattern of inadequate verification.

Colorado Attorney Suspension

Result: 90-day suspension
Date: 2025

A Denver attorney who initially denied using AI accepted a 90-day suspension after investigators discovered text messages to a paralegal admitting that ChatGPT had helped draft a motion with fabrications. The attorney had written that “like an idiot” he hadn’t verified the AI’s work.

The case established that:

  • Denying AI use when confronted can compound sanctions
  • Text messages and electronic records may reveal AI involvement
  • State bars are actively investigating AI misuse

Morgan & Morgan Partner Sanctions

Firm: Morgan & Morgan (No. 42 U.S. law firm by headcount)
Attorneys: Rudwin Ayala (removed from case, $3,000); Morgan and Goody ($1,000 each)
Date: 2025

Even large, sophisticated law firms aren’t immune. A partner at Morgan & Morgan was sanctioned $3,000 and removed from the litigation after admitting to incorporating hallucinated AI-generated cases in a brief. Two other attorneys received $1,000 sanctions for inadequate supervision.

Largest State Fine

California AI Citation Sanction

$10,000
Sanctions Order

Attorney Amir Mostafavi fined after court found 21 of 23 case quotes in opening brief were fabricated. Largest known state-court sanction for AI hallucinations. Court noted attorney had multiple opportunities to correct errors but failed to verify citations despite being challenged.

California 2025

Court Disclosure Requirements

Federal Courts Leading the Way

A growing number of federal judges now require attorneys to disclose AI use in court filings:

Mandatory Disclosure Orders:

  • Several federal district judges have issued standing orders requiring certification that AI-generated content has been verified
  • Some orders require disclosure of which AI tools were used
  • Failure to comply can result in sanctions independent of any fabrication

Typical Certification Language:

“Counsel certifies that any use of artificial intelligence in the preparation of this filing has been reviewed for accuracy, and that all citations have been verified against authoritative legal databases.”

State Court Approaches

State courts are increasingly adopting similar requirements:

| Jurisdiction | Requirement |
|---|---|
| Texas | Several courts require AI disclosure |
| California | Proposed rules under consideration |
| Florida | Judicial guidance issued |
| New York | Individual judge orders |

Check Local Rules

AI disclosure requirements vary significantly by court and even by individual judge. Before filing, attorneys should check standing orders and local rules for the specific court and judge assigned to their case.

State Bar Ethics Rules

ABA Guidance

The American Bar Association has addressed AI use through the lens of existing Model Rules:

Rule 1.1 (Competence): Lawyers must provide competent representation, which includes understanding the limitations of tools used in practice—including AI.

Rule 1.6 (Confidentiality): Inputting client information into AI systems may implicate confidentiality obligations.

Rule 5.3 (Supervision): Lawyers must supervise non-lawyer assistants, potentially including AI systems.

Rule 8.4 (Misconduct): Submitting fabricated citations may constitute conduct involving dishonesty or misrepresentation.

State-Specific Ethics Opinions

Several state bars have issued formal ethics opinions on AI use:

| State | Key Guidance |
|---|---|
| California | Formal opinion requiring competence in AI tools |
| Florida | Advisory opinion on AI in legal practice |
| New York | Ethics guidance on AI-assisted research |
| Texas | Committee opinion addressing AI verification |

Most opinions emphasize that:

  • AI output must be verified like any other research
  • Client consent may be required for AI use on matters
  • Billing for AI-assisted work raises transparency issues
  • Confidential information shouldn’t be input into public AI systems

Liability Exposure for Attorneys

Professional Malpractice

Attorneys who submit fabricated citations face malpractice exposure:

Elements of AI-Related Malpractice:

  1. Duty: Attorney owed client duty of competent representation
  2. Breach: Submitting unverified AI citations falls below standard of care
  3. Causation: Client was harmed by the submission (adverse ruling, sanctions)
  4. Damages: Quantifiable harm (case dismissal, fee forfeiture, client’s damages)

Potential Consequences:

  • Client lawsuits for negligent representation
  • Fee disgorgement orders
  • Professional liability insurance claims
  • Premium increases or coverage denials

Court Sanctions

Beyond malpractice, courts can impose sanctions under various authorities:

| Authority | Potential Sanctions |
|---|---|
| Rule 11 (Federal) | Monetary sanctions, fees, costs |
| 28 U.S.C. § 1927 | Excess costs from unreasonable conduct |
| Inherent Powers | Contempt, case dismissal, fee awards |
| State Equivalents | Vary by jurisdiction |

Bar Discipline

State bar associations can pursue discipline including:

  • Private admonishment
  • Public censure
  • Suspension from practice
  • Disbarment in extreme cases
  • CLE requirements on AI competence

Risk Mitigation for Legal Professionals

Verification Protocols

Minimum Standards:

  1. Cross-check every citation against Westlaw, Lexis, or official court databases
  2. Verify holdings and quotes — AI often cites real cases with wrong information
  3. Check procedural history — Fabricated cases often have implausible histories
  4. Read the actual case — Don’t rely on AI summaries alone

Red Flags for Fabricated Citations:

  • Case not found in any legal database
  • Party names that sound plausible but don’t match any real case
  • Holdings that perfectly support your argument (too good to be true)
  • Internal citations that also don’t exist
  • Procedural posture that doesn’t make sense
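As a first-pass screen (never a substitute for reading the case itself), the cross-check in step 1 can be partially automated: pull reporter-style citations out of a draft with a regular expression and flag any that don't appear in a list already verified by hand against Westlaw, Lexis, or official court databases. This is a minimal sketch; the regex covers only a slice of Bluebook citation forms, and the `verified` set is a hypothetical stand-in for a real verification record.

```python
import re

# Rough pattern for reporter citations such as "598 F. Supp. 3d 1404"
# or "443 U.S. 368" -- an illustrative sketch, not full Bluebook coverage.
CITATION_RE = re.compile(
    r"\b(\d{1,4})\s+([A-Z][A-Za-z.\s]*?\.(?:\s?\d[a-z]{1,2})?)\s+(\d{1,5})\b"
)

def extract_citations(text):
    """Return reporter-citation strings found in a draft brief."""
    return [" ".join(m.groups()) for m in CITATION_RE.finditer(text)]

def flag_unverified(text, verified):
    """Flag citations absent from the hand-verified set for human review."""
    return [c for c in extract_citations(text) if c not in verified]

draft = "As held in 598 F. Supp. 3d 1404 and 443 U.S. 368, the duty applies."
verified = {"443 U.S. 368"}  # citations already confirmed in a real database
print(flag_unverified(draft, verified))  # → ['598 F. Supp. 3d 1404']
```

A script like this can only narrow the review queue; a citation that passes the screen still needs a human to confirm the holding and quotes, since AI often cites real cases with wrong content.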

Firm-Wide Policies

Law firms should implement:

  • Written AI use policies
  • Training on AI limitations and hallucination risks
  • Verification requirements before filing
  • Disclosure protocols for client communications
  • Insurance coverage review for AI-related claims

Documentation Practices

Protect yourself by documenting:

  • Which AI tools were used
  • What queries were submitted
  • How citations were verified
  • Who conducted the verification
  • Date and method of verification
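One lightweight way to capture all five items is a structured log entry per citation, exported alongside the filing record. The field names below are illustrative assumptions, not a bar-mandated format.

```python
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class CitationVerification:
    """One record per citation checked before filing -- illustrative fields."""
    citation: str                 # e.g. "443 U.S. 368"
    ai_tool: str                  # which AI tool produced or suggested it
    query: str                    # the prompt submitted to the tool
    verified_in: str              # database used, e.g. "Westlaw"
    verified_by: str              # person who performed the check
    verified_on: date             # date of verification
    quotes_checked: bool = False  # were quoted passages read in the actual case?

log = [
    CitationVerification(
        citation="443 U.S. 368",
        ai_tool="(hypothetical) drafting assistant",
        query="standing doctrine summary",
        verified_in="Westlaw",
        verified_by="J. Associate",
        verified_on=date(2025, 9, 1),
        quotes_checked=True,
    )
]

# Plain-dict export for the file record or a spreadsheet
records = [asdict(entry) for entry in log]
```

Keeping the record as structured data rather than ad hoc notes makes it straightforward to show, citation by citation, who verified what and when if the filing is later challenged.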

Not Just Lawyers: AI Hallucinations Across the Legal System

Judges Citing Fake Authority

Researchers have documented at least three instances of judges citing fabricated legal authority in their decisions—likely from AI-assisted research. This raises due process concerns when parties are bound by rulings based on non-existent precedent.

Pro Se Litigants

Self-represented litigants are increasingly using AI to draft court filings, often without any legal training to recognize fabrications. Courts have begun issuing warnings to pro se filers about AI hallucination risks.

International Cases

The problem extends beyond U.S. borders:

  • Canada (Ko v. Li, 2025): Attorney sanctioned for AI-generated fake citations in matrimonial case
  • England (Ayinde v. Haringey, 2025): Judicial review filing contained hallucinated cases
  • Australia: Multiple reported incidents under investigation
