When AI Makes Up Lies About You#
Artificial intelligence systems are generating false, defamatory statements about real people at an unprecedented scale. ChatGPT has accused innocent people of embezzlement and fraud. Bing’s AI has confused professionals with convicted terrorists. Google’s AI Overviews have made false claims about public figures. When AI fabricates accusations of crimes, sexual misconduct, professional incompetence, or other reputation-destroying falsehoods, victims face a novel legal question: Can you sue an AI company for defamation?
The answer is evolving—and the first major cases are now working through the courts.
How AI Defamation Happens#
Hallucination: Fabricating False Facts#
Hallucination occurs when an AI system generates information that is completely fabricated—events that never happened, accusations never made, credentials never earned.
Example — Walters v. OpenAI: A journalist asked ChatGPT to summarize a real federal lawsuit. ChatGPT fabricated a legal complaint accusing radio host Mark Walters of embezzling funds from the Second Amendment Foundation. None of it was true:
- Walters was never named in the actual lawsuit
- He had never worked for the organization
- The embezzlement allegation was entirely invented by ChatGPT
Juxtaposition: Confusing Identities#
Juxtaposition (or identity conflation) occurs when AI merges information about different people, applying true facts about one person to another with a similar name.
Example — Battle v. Microsoft: When users searched for aerospace educator Jeffery Battle, Bing’s AI provided information about Jeffrey Leon Battle—a convicted terrorist. The crimes were real. The terrorist was real. But applying those facts to the professor created devastating false accusations.
| Type | What AI Does | Example |
|---|---|---|
| Hallucination | Fabricates facts entirely | ChatGPT invents embezzlement allegation |
| Juxtaposition | Confuses similar names | Bing confuses professor with terrorist |
| Misattribution | Attributes quotes/actions to wrong person | AI credits statement to wrong expert |
| Context collapse | Presents true facts misleadingly | AI presents acquittal as conviction |
Landmark Cases#
Walters v. OpenAI (Georgia, 2025)#
The first ChatGPT defamation case—and the first dismissal.
Radio host Mark Walters sued after ChatGPT fabricated a legal complaint accusing him of embezzling from a gun rights organization. Court granted summary judgment to OpenAI, finding the journalist who received the output could not have reasonably believed the fabricated accusation was true given ChatGPT's warnings about potential inaccuracies.
What Happened: On May 3, 2023, an editor at AmmoLand.com asked ChatGPT to summarize a real federal lawsuit. ChatGPT generated a false summary claiming Walters had embezzled funds—an accusation entirely invented by the AI.
Why OpenAI Won:
The Georgia Superior Court granted summary judgment to OpenAI on May 19, 2025, finding:
- No reasonable reader would believe it: The editor received multiple warnings that ChatGPT could not access the internet, could not view the linked complaint, and that the lawsuit was filed after ChatGPT’s knowledge cutoff date.
- Red flags ignored: These warnings should have alerted a reasonable person that the AI’s output might be inaccurate.
- No reputational damage: The editor who received the false statement didn’t think less of Walters as a result; he recognized something was wrong.
- Fault not proven: Under defamation law, plaintiffs must prove the defendant acted with fault (negligence for private figures, actual malice for public figures). The court found insufficient evidence of fault given OpenAI’s warnings.
What Walters Does NOT Mean:
- AI hallucinations can still be defamatory — The case didn’t hold that AI outputs can never be defamatory
- Different facts could win — If the recipient had believed and republished the false statement, the outcome might differ
- Section 230 not decided — The court ruled on traditional defamation grounds, not Section 230 immunity
Battle v. Microsoft (Maryland, 2023-Present)#
The $25 million identity confusion case—now in arbitration.
Aerospace educator Jeffery Battle sued Microsoft after Bing's AI confused him with convicted terrorist Jeffrey Leon Battle. When users searched Battle's name, the AI returned information about the terrorist's crimes. Microsoft moved to compel arbitration; the court granted the motion in October 2024. The case will be resolved privately.
What Happened: In July 2023, Professor Jeffery Battle discovered that searching his name in Bing returned AI-generated content conflating him with Jeffrey Leon Battle, a member of the “Portland Seven” convicted of terrorism-related charges in 2003.
Legal Theory: Unlike Walters (pure fabrication), Battle involves misattribution of true facts. The terrorist’s crimes were real—but attributing them to an innocent professor with a similar name is defamatory.
Current Status: On October 23, 2024, the court granted Microsoft’s motion to compel arbitration. The case has been stayed pending private arbitration, meaning:
- No public trial
- No published court opinion
- No precedent set for AI identity confusion cases
- Resolution will be confidential
The Legal Framework for AI Defamation#
Elements of Defamation#
To win a defamation claim against an AI company, you must prove:
| Element | What You Must Show | AI-Specific Challenges |
|---|---|---|
| Publication | Statement communicated to third party | Did AI “publish” or did user? |
| False Statement of Fact | Not opinion, not true | AI may present fabrications as facts |
| About the Plaintiff | Identifies you specifically | AI may use your name but mean someone else |
| Fault | Negligence (private) or actual malice (public) | Can algorithms have “fault”? |
| Damages | Harm to reputation | Must show actual reputational injury |
The Fault Requirement#
Defamation requires proving the defendant acted with fault—the level depends on who you are:
Private Figures: Must prove negligence—that the defendant failed to exercise reasonable care.
Public Figures: Must prove actual malice—that the defendant knew the statement was false or acted with reckless disregard for the truth. (From New York Times v. Sullivan, 1964)
For AI Companies:
- Did warnings about inaccuracy constitute “reasonable care”?
- Does knowing AI hallucinates constitute “reckless disregard”?
- Can algorithms have the mental state required for “actual malice”?
These questions remain unsettled.
Section 230: The Uncertain Shield#
Section 230 of the Communications Decency Act (47 U.S.C. § 230) generally immunizes online platforms from liability for content provided by users. But does it protect AI-generated content?
Arguments Against Section 230 Protection:
| Argument | Explanation |
|---|---|
| AI creates content | Section 230 protects platforms for “information provided by another”—AI content is provided by the AI, not users |
| Not “neutral conduit” | AI doesn’t just host content; it generates it |
| Developer responsibility | Training data, architecture, and deployment choices make AI companies “information content providers” |
Arguments For Section 230 Protection:
| Argument | Explanation |
|---|---|
| User-initiated | Output depends on user prompts; AI is a tool |
| Training data | AI reflects patterns in third-party data, not original creation |
| Tool, not publisher | AI is more like a search engine than an author |
Current Status:
- No appellate court has definitively ruled on Section 230 for generative AI
- Walters avoided Section 230 entirely, ruling on traditional defamation grounds
- OpenAI has not invoked Section 230 in defamation cases—likely recognizing it’s weak for AI-generated content
- Legal consensus emerging: Section 230 probably doesn’t protect pure AI hallucinations
State Law Variations#
Contributory Negligence States#
Four states and the District of Columbia follow pure contributory negligence—if you’re even 1% at fault, you recover nothing:
| State | Rule | Impact on AI Defamation |
|---|---|---|
| North Carolina | Pure contributory | User who didn’t verify AI output may be barred |
| Virginia | Pure contributory | Same |
| Maryland | Pure contributory | Battle case jurisdiction |
| Alabama | Pure contributory | Same |
| District of Columbia | Pure contributory | Same |
In these jurisdictions, if a court finds that you contributed to your own harm—for example, by republishing AI output without fact-checking it—you may recover nothing.
Comparative Negligence States#
Most states use comparative negligence: your recovery is reduced by your percentage of fault. Under pure comparative negligence you can recover even if you were mostly at fault; under modified comparative negligence, recovery is barred once your fault reaches the 50% (or, in some states, 51%) threshold.
State Defamation Statutes#
Some states have specific defamation protections:
- California: Anti-SLAPP statute may apply to AI company defense motions
- New York: Plaintiff-friendly defamation standards
- Texas: New AI liability provisions under Texas Responsible AI Governance Act (June 2025)
Building a Case Against AI Companies#
Evidence to Preserve#
| Evidence Type | Why It Matters |
|---|---|
| Screenshots of AI output | Proves the defamatory statement existed |
| Timestamps | Establishes when statement was generated |
| Prompts used | Shows what user asked for vs. what AI provided |
| Warning messages (or lack) | Relevant to fault analysis |
| Publication evidence | Shows third parties saw the content |
| Reputational harm | Lost business, social consequences, emotional distress |
Immediate Steps#
- Screenshot everything — Capture the exact AI output before it changes
- Document the prompt — Record exactly what was asked
- Note the platform/model — ChatGPT, Bing AI, Google AI, Claude, etc.
- Preserve context — Were warnings displayed? What was the user interface?
- Track republication — Did others share or rely on the false information?
- Document harm — Lost opportunities, damaged relationships, emotional impact
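The preservation steps above can be partly automated. Below is a minimal, illustrative Python sketch (all file names and field labels are hypothetical, not a legal standard) that fingerprints a captured screenshot with a SHA-256 hash and records a UTC timestamp, the platform, and the prompt in a JSON log—making it easier to show later that the captured file was not altered:

```python
# Hypothetical evidence-preservation sketch: fingerprint a captured AI output
# (e.g., a screenshot) and log when and how it was captured. File names and
# log fields here are illustrative only, not a legal or forensic standard.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def preserve_evidence(path: Path, platform: str, prompt: str) -> dict:
    """Return a log entry with a SHA-256 fingerprint of the captured file."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return {
        "file": path.name,
        "sha256": digest,
        "captured_at_utc": datetime.now(timezone.utc).isoformat(),
        "platform": platform,  # e.g., "ChatGPT", "Bing AI"
        "prompt": prompt,      # exactly what was asked
    }

# Example: fingerprint a placeholder screenshot file and write the log entry.
shot = Path("ai_output.png")
shot.write_bytes(b"example screenshot bytes")
entry = preserve_evidence(shot, platform="ChatGPT", prompt="Summarize the lawsuit")
Path("evidence_log.json").write_text(json.dumps(entry, indent=2))
print(entry["sha256"])
```

A hash alone does not prove when a file was created, so pairing it with contemporaneous records (emails, notarization, or a timestamping service) strengthens the chain of custody.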
Challenges in AI Defamation Cases#
Identifying the Defendant:
- Is it the AI model developer (OpenAI, Google)?
- The platform deploying it (Microsoft Bing)?
- The user who prompted it?
- All of the above?
Proving Fault:
- AI companies argue their warnings constitute reasonable care
- Proving “actual malice” for public figures is extremely difficult
- Algorithmic decision-making doesn’t fit traditional fault frameworks
Forced Arbitration:
- Many AI platforms require arbitration in terms of service
- Battle v. Microsoft was moved to private arbitration
- Class actions may be precluded
Legislative Developments#
Federal Activity#
No Section 230 Immunity for AI Act (2023):
- Introduced by Senators Hawley (R-MO) and Blumenthal (D-CT)
- Would have waived Section 230 immunity for AI-generated content
- Did not pass due to concerns about chilling AI innovation
Artificial Intelligence Risk Evaluation Act (2025):
- Introduced by Senator Hawley
- Addresses AI risks but doesn’t specifically cover defamation
State Legislation#
Texas Responsible AI Governance Act (June 2025):
- Establishes liability with fines up to $200,000 per violation
- Covers intentional abuses: facilitating crimes, creating deepfakes, unlawful discrimination
- Does not explicitly cover defamation but signals regulatory attention
California AI Legislation:
- Various AI bills passed, primarily focused on transparency and discrimination
- Defamation not specifically addressed
Related Practice Areas#
- AI Chatbots — AI-caused psychological harm and liability
- Medical AI — Healthcare AI misdiagnosis and treatment errors
- AI Hiring Discrimination — Algorithmic employment discrimination
Related Resources#
- Legal AI Hallucinations — When AI makes up fake case citations
- AI Legislation & Regulation — Federal and state AI laws
- Understanding Liability — General liability principles
Has AI Defamed You?
When ChatGPT, Bing, or other AI systems generate false accusations about you—fabricating crimes you didn't commit, confusing you with criminals, or inventing professional misconduct—you may have legal recourse. AI defamation law is still evolving, and the first cases are beginning to define its contours. Connect with attorneys who understand both defamation law and AI technology.