The Fake Case Epidemic#
A troubling pattern has emerged in courtrooms across America: lawyers are being sanctioned—sometimes severely—for submitting legal briefs filled with case citations that don’t exist. These “hallucinated” cases, generated by AI tools like ChatGPT, look convincing but are completely fabricated. The consequences for attorneys range from public humiliation to five-figure fines and suspension from practice.
What began as a novelty in 2023 has become a routine occurrence. As of late 2025, researchers have documented more than 200 court cases involving AI-generated fake citations, and the pace is accelerating: from roughly two new cases per week to two or three per day.
The Science of AI Hallucinations#
Why AI Fabricates Legal Citations#
Large language models like ChatGPT, Claude, and Google Gemini generate text by predicting what words should come next based on patterns in their training data. When asked for legal citations, these systems often produce responses that look like real case names—complete with party names, court identifiers, volume numbers, and page citations—but are entirely invented.
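Why pattern-prediction produces convincing fakes is easy to demonstrate. The toy sketch below (a deliberately crude illustration, nothing like a production LLM) trains a word-level Markov chain on three real citations; the chain happily splices their parts into citations that are perfectly formatted and entirely fictional.

```python
import random

# Toy illustration only: a word-level Markov chain over a few real citations.
# Production LLMs are vastly more capable, but share the core failure mode:
# output is chosen for statistical plausibility, not checked against reality.
training_citations = [
    "Roe v. Wade , 410 U.S. 113 ( 1973 )",
    "Brown v. Board of Education , 347 U.S. 483 ( 1954 )",
    "Miranda v. Arizona , 384 U.S. 436 ( 1966 )",
]

# Bigram transition table: each word maps to the words observed after it.
transitions = {}
for cite in training_citations:
    words = cite.split()
    for cur, nxt in zip(words, words[1:]):
        transitions.setdefault(cur, []).append(nxt)

def generate(start="Roe", max_words=12):
    """Walk the chain, choosing a random plausible next word at each step."""
    out = [start]
    while len(out) < max_words and out[-1] in transitions:
        out.append(random.choice(transitions[out[-1]]))
    return " ".join(out)

# Can emit e.g. "Roe v. Board of Education , 384 U.S. 113 ( 1954 )":
# flawless citation format, nonexistent case.
print(generate())
```

The chain has no notion of whether a citation denotes a real case; it only knows which words plausibly follow which. Scaled up enormously, that is the root of the hallucination problem.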
Stanford HAI Research Findings:
Researchers at Stanford’s Institute for Human-Centered Artificial Intelligence (HAI) tested state-of-the-art AI models on legal questions and found alarming hallucination rates:
| AI System | Hallucination Rate | Notes |
|---|---|---|
| General-purpose LLMs | 58-88% | ChatGPT, Claude, Gemini |
| Lexis+ AI | 17%+ | Legal-specific tool |
| Ask Practical Law AI | 17%+ | Legal-specific tool |
| Westlaw AI-Assisted Research | ~34% | Legal-specific tool |
Even purpose-built legal AI tools designed specifically for attorneys still hallucinate on more than one in six queries.
The “Confident Fabrication” Problem#
AI hallucinations are particularly dangerous in legal contexts because:
- They look authentic — Fabricated citations follow correct formatting conventions
- They include plausible details — Party names, dates, and holdings that seem reasonable
- They’re presented with confidence — No hedging or uncertainty markers
- They may cite real cases incorrectly — Real case names with wrong holdings or quotes
- They resist detection — Standard format checks and casual database searches may not catch subtle fabrications (see the sketch after this list)
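The last point deserves emphasis. In the hypothetical sketch below (the regex is a simplified stand-in, not real Bluebook validation), a surface-level format check waves a fabricated citation through just as readily as a genuine one, because fabrications are well-formed by construction.

```python
import re

# Simplified reporter-citation pattern (illustrative only, not full Bluebook):
# "Party v. Party, VOLUME REPORTER PAGE (COURT YEAR)".
CITATION_RE = re.compile(
    r"^[\w .,'&-]+ v\. [\w .,'&-]+, "
    r"\d{1,4} (?:U\.S\.|F\.\dd|F\. Supp\.) \d{1,4} "
    r"\((?:[\w. ]+ )?\d{4}\)$"
)

real = "Miranda v. Arizona, 384 U.S. 436 (1966)"
# One of the fabricated citations actually submitted in Mata v. Avianca:
fake = "Varghese v. China Southern Airlines, 925 F.3d 1339 (11th Cir. 2019)"

for cite in (real, fake):
    status = "well-formed" if CITATION_RE.match(cite) else "malformed"
    print(status, "->", cite)  # both print "well-formed"
```

Both citations pass every formatting test; only one resolves to a real case. That is why verification must run against authoritative databases and the opinions themselves, never against surface features of the text.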
The Verification Trap
Asking the chatbot to confirm its own citations is not verification. In Mata v. Avianca (below), the attorney asked ChatGPT whether the cases it had supplied were real, and the tool assured him they were. Only independent legal databases and the opinions themselves count.
Landmark Sanctions Cases#
Mata v. Avianca: The Case That Started It All#
- Court: U.S. District Court, Southern District of New York
- Judge: P. Kevin Castel
- Sanction: $5,000 fine
- Date: June 2023
The case that introduced “ChatGPT lawyer” to the legal lexicon. Attorney Steven Schwartz used ChatGPT to research a personal injury case against Avianca Airlines and submitted a brief citing six cases that didn’t exist. When opposing counsel couldn’t locate the cases, the court investigated.
Judge Castel found that “six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations.” Schwartz and colleague Peter LoDuca were fined $5,000 for submitting briefs with fabricated citations.
Key Takeaway: Schwartz claimed he didn’t know ChatGPT could fabricate cases. The court held that ignorance of AI limitations doesn’t excuse professional responsibility to verify citations.
The Lindell Defamation Case#
- Court: U.S. District Court, District of Colorado
- Sanction: $3,000 per attorney
- Date: 2025
Attorneys representing MyPillow CEO Mike Lindell, including Christopher Kachouroff, were fined $3,000 each for filing a brief containing more than two dozen errors, among them hallucinated cases. The court found that AI had been used to prepare a filing “filled with mistakes and citations of cases that didn’t exist.” Coming well after Mata made the risk front-page news, the sanction shows that awareness alone has not solved the problem.
California’s Record Fine#
- Court: California Court of Appeal (Second District)
- Attorney: Amir Mostafavi
- Sanction: $10,000 fine
- Date: September 2025
A California court issued what appears to be the largest state-court fine over AI fabrications. The court’s opinion stated that 21 of 23 quotes from cases cited in the attorney’s opening brief were completely fabricated.
The ruling specifically noted that the attorney had received multiple opportunities to correct the errors but failed to do so, demonstrating that the fabrications weren’t a one-time oversight but a pattern of inadequate verification.
Colorado Attorney Suspension#
- Sanction: 90-day suspension
- Date: 2025
A Denver attorney who initially denied using AI accepted a 90-day suspension after investigators discovered text messages to a paralegal admitting that ChatGPT had helped draft a motion with fabrications. The attorney had written that “like an idiot” he hadn’t verified the AI’s work.
The case established that:
- Denying AI use when confronted can compound sanctions
- Text messages and electronic records may reveal AI involvement
- State bars are actively investigating AI misuse
Morgan & Morgan Partner Sanctions#
- Firm: Morgan & Morgan (No. 42 U.S. law firm by headcount)
- Attorneys: Rudwin Ayala (removed from the case, fined $3,000); T. Michael Morgan and Taly Goody (fined $1,000 each)
- Date: 2025
Even large, sophisticated law firms aren’t immune. A partner at Morgan & Morgan was sanctioned $3,000 and removed from the litigation after admitting to incorporating hallucinated AI-generated cases in a brief. Two other attorneys received $1,000 sanctions for inadequate supervision.
Court Disclosure Requirements#
Federal Courts Leading the Way#
A growing number of federal judges now require attorneys to disclose AI use in court filings:
Mandatory Disclosure Orders:
- Several federal district judges have issued standing orders requiring certification that AI-generated content has been verified
- Some orders require disclosure of which AI tools were used
- Failure to comply can result in sanctions independent of any fabrication
Typical Certification Language:
“Counsel certifies that any use of artificial intelligence in the preparation of this filing has been reviewed for accuracy, and that all citations have been verified against authoritative legal databases.”
State Court Approaches#
State courts are increasingly adopting similar requirements:
| Jurisdiction | Requirement |
|---|---|
| Texas | Several courts require AI disclosure |
| California | Proposed rules under consideration |
| Florida | Judicial guidance issued |
| New York | Individual judge orders |
Check Local Rules
AI disclosure requirements vary judge by judge and court by court. Before filing, review the presiding judge’s standing orders and the court’s local rules for AI-specific certification language.
State Bar Ethics Rules#
ABA Guidance#
The American Bar Association has addressed AI use through the lens of existing Model Rules:
Rule 1.1 (Competence): Lawyers must provide competent representation, which includes understanding the limitations of tools used in practice—including AI.
Rule 1.6 (Confidentiality): Inputting client information into AI systems may implicate confidentiality obligations.
Rule 5.3 (Supervision): Lawyers must supervise non-lawyer assistants, potentially including AI systems.
Rule 8.4 (Misconduct): Submitting fabricated citations may constitute conduct involving dishonesty or misrepresentation.
State-Specific Ethics Opinions#
Several state bars have issued formal ethics opinions on AI use:
| State | Key Guidance |
|---|---|
| California | Formal opinion requiring competence in AI tools |
| Florida | Advisory opinion on AI in legal practice |
| New York | Ethics guidance on AI-assisted research |
| Texas | Committee opinion addressing AI verification |
Most opinions emphasize that:
- AI output must be verified like any other research
- Client consent may be required for AI use on matters
- Billing for AI-assisted work raises transparency issues
- Confidential information shouldn’t be input into public AI systems
Liability Exposure for Attorneys#
Professional Malpractice#
Attorneys who submit fabricated citations face malpractice exposure:
Elements of AI-Related Malpractice:
- Duty: Attorney owed client duty of competent representation
- Breach: Submitting unverified AI citations falls below standard of care
- Causation: Client was harmed by the submission (adverse ruling, sanctions)
- Damages: Quantifiable harm (case dismissal, fee forfeiture, client’s damages)
Potential Consequences:
- Client lawsuits for negligent representation
- Fee disgorgement orders
- Professional liability insurance claims
- Premium increases or coverage denials
Court Sanctions#
Beyond malpractice, courts can impose sanctions under various authorities:
| Authority | Potential Sanctions |
|---|---|
| Rule 11 (Federal) | Monetary sanctions, fees, costs |
| 28 U.S.C. § 1927 | Excess costs from unreasonable conduct |
| Inherent Powers | Contempt, case dismissal, fee awards |
| State Equivalents | Vary by jurisdiction |
Bar Discipline#
State bar associations can pursue discipline including:
- Private admonishment
- Public censure
- Suspension from practice
- Disbarment in extreme cases
- CLE requirements on AI competence
Risk Mitigation for Legal Professionals#
Verification Protocols#
Minimum Standards:
- Cross-check every citation against Westlaw, Lexis, or official court databases (a scripted triage sketch follows this list)
- Verify holdings and quotes — AI often cites real cases with wrong information
- Check procedural history — Fabricated cases often have implausible histories
- Read the actual case — Don’t rely on AI summaries alone
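The cross-checking step can be partially scripted, as in the sketch referenced above. This minimal triage pass uses CourtListener’s citation-lookup REST endpoint; the endpoint path, parameters, and response fields are assumptions to confirm against the current CourtListener API documentation, and a “no match” result is a flag for manual review, not proof of fabrication.

```python
import requests

# Assumed endpoint: CourtListener's citation-lookup API. Verify the path,
# auth requirements, and response schema against the current API docs.
LOOKUP_URL = "https://www.courtlistener.com/api/rest/v3/citation-lookup/"

def triage_citations(brief_text: str, api_token: str = "") -> None:
    """Send filing text to the lookup API and flag unresolved citations."""
    headers = {"Authorization": f"Token {api_token}"} if api_token else {}
    resp = requests.post(LOOKUP_URL, data={"text": brief_text},
                         headers=headers, timeout=30)
    resp.raise_for_status()
    for hit in resp.json():  # assumed: one object per citation detected
        cite = hit.get("citation")
        clusters = hit.get("clusters") or []  # assumed: matched case records
        if clusters:
            print(f"resolved      {cite} -> {clusters[0].get('case_name')}")
        else:
            print(f"CHECK BY HAND {cite}: no match in the database")

triage_citations(
    "Compare Varghese v. China Southern Airlines, 925 F.3d 1339 "
    "(11th Cir. 2019), with Miranda v. Arizona, 384 U.S. 436 (1966)."
)
```

A scripted pass only confirms that a citation resolves to a real case. It says nothing about whether the case supports the proposition cited, so the read-the-actual-case rule above still applies in full.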
Red Flags for Fabricated Citations:
- Case not found in any legal database
- Party names that sound plausible but don’t match any real case
- Holdings that perfectly support your argument (too good to be true)
- Internal citations that also don’t exist
- Procedural posture that doesn’t make sense
Firm-Wide Policies#
Law firms should implement:
- Written AI use policies
- Training on AI limitations and hallucination risks
- Verification requirements before filing
- Disclosure protocols for client communications
- Insurance coverage review for AI-related claims
Documentation Practices#
Protect yourself by documenting (a sample log-record sketch follows this list):
- Which AI tools were used
- What queries were submitted
- How citations were verified
- Who conducted the verification
- Date and method of verification
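One lightweight way to capture all five items is a structured log entry written at verification time. The sketch below shows one possible record shape; the field names are illustrative choices, not any bar-mandated format.

```python
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class CitationVerification:
    """One audit-log record per verified citation (illustrative schema)."""
    citation: str             # the cite exactly as it appears in the filing
    ai_tool: str              # which AI tool produced or suggested it
    query: str                # the prompt or query submitted to the tool
    verification_method: str  # e.g. "read full opinion in official reporter"
    verified_by: str          # who performed the check
    verified_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

entry = CitationVerification(
    citation="Miranda v. Arizona, 384 U.S. 436 (1966)",
    ai_tool="(hypothetical) general-purpose chatbot",
    query="cases on custodial interrogation warnings",
    verification_method="read full opinion in official reporter",
    verified_by="A. Associate",
)

# Append one JSON line per verified citation to an append-only audit file.
with open("citation_verification_log.jsonl", "a", encoding="utf-8") as log:
    log.write(json.dumps(asdict(entry)) + "\n")
```

A dated, per-citation record like this is the kind of evidence that answers a court’s or bar investigator’s questions if a filing is later challenged.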
Not Just Lawyers: AI Hallucinations Across the Legal System#
Judges Citing Fake Authority#
Researchers have documented at least three instances of judges citing fabricated legal authority in their decisions—likely from AI-assisted research. This raises due process concerns when parties are bound by rulings based on non-existent precedent.
Pro Se Litigants#
Self-represented litigants are increasingly using AI to draft court filings, often without any legal training to recognize fabrications. Courts have begun issuing warnings to pro se filers about AI hallucination risks.
International Cases#
The problem extends beyond U.S. borders:
- Canada (Ko v. Li, 2025): Attorney sanctioned for AI-generated fake citations in matrimonial case
- England (Ayinde v. Haringey, 2025): Judicial review filing contained hallucinated cases
- Australia: Multiple reported incidents under investigation
Related Resources#
- AI Legislation & Regulation — Federal and state AI regulatory frameworks
- Understanding Liability — Product liability and negligence principles
- Evidence Checklist — Documentation best practices
- AI Chatbots — Liability for AI-generated content
Facing Sanctions for AI-Generated Citations?
If you’re an attorney facing sanctions, bar discipline, or malpractice claims related to AI hallucinations in legal filings, experienced counsel can help. Connect with attorneys who understand both the technology and the professional responsibility implications.