AI Deepfake Pornography: Your Rights and Legal Options#
Artificial intelligence can now generate sexually explicit images of anyone—without their consent, without their knowledge, and often within seconds. These AI “deepfakes” have created a crisis of nonconsensual intimate imagery (NCII) that overwhelmingly targets women and girls. The statistics are staggering: 96-98% of all deepfake content online is nonconsensual pornography, and nearly 100% of victims are female. The 16 most popular “nudification” websites received over 200 million visits in just the first six months of 2024. Until recently, victims had few legal remedies. That’s changing rapidly—new federal and state laws now provide both criminal penalties and civil causes of action for deepfake victims.
How AI Deepfake Pornography Works#
Modern AI image generation tools can create realistic nude images from any photograph of a clothed person. These tools have been marketed openly as “nudify” or “undress” applications, targeting both celebrities and ordinary people—including children.
The Technology#
Face-Swap Deepfakes: AI maps a victim’s face onto existing pornographic content, making it appear they performed in explicit material.
Synthetic Generation: Generative AI creates entirely new explicit images from scratch, using only clothed photographs of the victim as reference.
“Nudification” Apps: Websites and mobile applications that claim to “remove” clothing from photographs, generating fake nude images of anyone the user uploads.
Real-Time Video: Emerging technology can generate deepfake video in real-time during video calls, enabling new forms of extortion and fraud.
The Scale of the Problem#
The deepfake pornography industry has exploded:
- 96-98% of all deepfake content online is nonconsensual intimate imagery
- 99-100% of deepfake pornography victims are female
- 464% increase in deepfake porn videos produced in 2023 compared to 2022
- 95,820 deepfake videos identified online—a 550% increase from 2019
- 200+ million visits to the top 16 nudification websites in just six months of 2024
- 2.2% of survey respondents across 10 countries reported being personally victimized by deepfake pornography
Who Is Being Targeted#
Celebrities: High-profile women are frequent targets. In January 2024, AI-generated sexually explicit images of Taylor Swift spread virally on X (formerly Twitter), with one post viewed over 47 million times before removal. The incident helped catalyze federal legislation.
Members of Congress: Research by the American Sunlight Project identified over 35,000 mentions of nonconsensual intimate imagery depicting 26 members of Congress—25 women and one man. Women members of Congress were 70 times more likely than men to be targeted.
Students: The crisis has spread to schools. Cases include deepfakes of high school girls in New Jersey, a male student in Washington who created deepfakes of 14- and 15-year-old classmates, and fake nude images targeting 16 eighth-grade girls at a California middle school. In South Korea, police reported over 800 deepfake sex crime cases by late 2024—up from just 156 in 2021—with most victims and perpetrators being teenagers.
Ordinary Women: Any woman with photos online can become a victim. Perpetrators include ex-partners seeking revenge, harassers, and strangers who find victims through social media.
The Federal Response: TAKE IT DOWN Act#
On May 19, 2025, President Trump signed the “Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act”—the TAKE IT DOWN Act—into law. This landmark legislation is the first federal law addressing AI-generated nonconsensual intimate imagery.
What the Law Prohibits#
The TAKE IT DOWN Act makes it a federal crime to “knowingly publish” without consent:
- Intimate visual depictions of minors
- Intimate visual depictions of non-consenting adults
- AI-generated “digital forgeries” that realistically depict identifiable individuals in intimate contexts
The law applies regardless of whether the images are “real” or AI-generated.
Criminal Penalties#
Adult Victims: Up to 2 years in federal prison for publishing nonconsensual intimate imagery
Minor Victims: Up to 3 years in federal prison for publishing nonconsensual intimate imagery of children
Threats Involving Minors: Up to 30 months in federal prison plus fines for threatening to share deepfakes of minors
Platform Requirements#
The Act requires online platforms (including websites and mobile applications) to:
- Establish a notice and takedown process for victims to report nonconsensual imagery
- Remove reported content within 48 hours of a verified request
- Take reasonable steps to prevent re-uploading of removed content (one common approach is sketched after this list)
- Comply by May 19, 2026 (platforms have one year to implement)
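The Act does not prescribe how platforms must block re-uploads. A common industry approach is perceptual hash matching: a fingerprint of each removed image is stored and compared against new uploads. Below is a minimal sketch, assuming Python with the open-source Pillow and imagehash libraries; the function names and the 10-bit distance threshold are illustrative choices, not anything the statute specifies.

```python
# Minimal sketch of perceptual-hash screening for re-uploads.
# Assumes: pip install pillow imagehash
# The 10-bit threshold is an illustrative choice, not a statutory value.
from PIL import Image
import imagehash

# Fingerprints of images removed after verified takedown requests.
removed_hashes: set[imagehash.ImageHash] = set()

def register_removed(path: str) -> None:
    """Store a perceptual hash of a removed image for future screening."""
    removed_hashes.add(imagehash.phash(Image.open(path)))

def is_likely_reupload(path: str, max_distance: int = 10) -> bool:
    """Flag uploads whose hash is near that of any removed image.

    phash survives re-encoding and mild resizing, so near matches catch
    trivially altered copies that exact byte-level hashes would miss.
    """
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - known <= max_distance for known in removed_hashes)
```

Production systems rely on more robust fingerprints and industry hash-sharing programs such as StopNCII, but the structure is the same: hash once on removal, compare on every upload.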
Enforcement#
The Federal Trade Commission (FTC) enforces the platform requirements under its “deceptive and unfair trade practices” authority. The criminal provisions took effect immediately upon signing.
Exceptions#
The law includes narrow exceptions for:
- Law enforcement or intelligence investigations
- Good-faith disclosures for legal proceedings
- Medical treatment or education
- Reporting unlawful conduct
What the Law Doesn’t Do#
The TAKE IT DOWN Act does not preempt state laws, meaning victims can pursue both federal and state remedies. However, the law does not create a private right of action—victims cannot directly sue under the federal law. For civil damages, victims must rely on state laws.
State Laws: Civil Remedies for Victims#
A patchwork of state laws now provides civil causes of action for deepfake victims. Key states include:
Illinois: Digital Forgeries Act#
Illinois amended its Civil Remedies for Nonconsensual Dissemination of Private Sexual Images Act to expressly cover AI-generated deepfakes. Effective January 1, 2024:
Damages: Victims can recover:
- Economic and noneconomic damages, OR
- $10,000 in statutory damages against each defendant found liable
Key Provisions:
- Covers both creation and distribution of “digitally altered sexual images”
- Applies to AI-generated content even if no “real” images were distributed
- Disclaimer is not a defense—disclosing that an image was AI-generated doesn’t shield the creator
California: AB 602#
California’s AB 602, effective January 1, 2020, creates a private cause of action against anyone who:
- Creates and intentionally discloses sexually explicit deepfake material without consent, OR
- Intentionally discloses deepfake material they didn’t create, knowing the victim didn’t consent
Damages: Victims can recover:
- Disgorgement of any profits the defendant made
- Economic and noneconomic damages
- Up to $150,000 in statutory damages if the act was committed “with malice”
- Attorney’s fees
2024-2025 Amendments: Following San Francisco’s enforcement campaign, California updated AB 602 to:
- Expressly cover “nudification” websites
- Grant enforcement authority to public prosecutors (Attorney General, city attorneys, district attorneys)
Important: Under California law, altered pornography “is not of newsworthy value solely because the depicted individual is a public figure”—celebrity status provides no defense.
Other State Laws#
Florida: Civil cause of action for deepfake intimate imagery with statutory damages
Minnesota: One of the first states to explicitly criminalize deepfake pornography; civil remedies available
New York: Includes deepfakes within its revenge porn civil remedy statute
Texas: Civil cause of action for unlawful disclosure of intimate visual material, covering deepfakes
Virginia: Criminal penalties for deepfake pornography; civil suits available under broader harassment statutes
As of 2024, deepfake legislation was pending in at least 40 states, and 20 bills had passed. The legal landscape is evolving rapidly.
Enforcement Actions: The San Francisco Campaign#
San Francisco City Attorney David Chiu has led the most aggressive enforcement campaign against deepfake pornography platforms in the United States.
The August 2024 Lawsuit#
In August 2024, Chiu filed a landmark lawsuit against the operators of the world’s 16 most-visited nudification websites. The complaint alleged violations of:
- State and federal laws prohibiting deepfake pornography
- Image-based abuse statutes
- Child pornography laws (for generating fake images of minors)
- California’s Unfair Competition Law
Defendants Named:
- Estonian resident Augustin Gribinets
- Briver LLC (Santa Fe, New Mexico)
- Sol Ecom, Inc. (American corporation)
- Itai Tech Ltd. (U.K.)
- Defirex OÜ and Itai OÜ (Estonian businesses)
Briver LLC Settlement (2025)#
On May 30, 2025, San Francisco Superior Court approved a settlement with Briver LLC:
Settlement Terms:
- Permanent injunction prohibiting Briver and its owners from operating any deepfake pornography websites
- $100,000 in civil penalties paid to the City Attorney’s Office
- Complete shutdown of all Briver-operated sites
Before being shut down, Briver’s websites allowed users to create nonconsensual pornographic images of both adults and children.
Ongoing Results#
As of June 2025, 10 of the world’s most-visited nudification websites are now offline or blocked in California as a result of the enforcement campaign. The lawsuit continues against remaining defendants, including newly identified websites and operators.
Why This Matters#
The San Francisco campaign demonstrates that:
- Deepfake pornography platforms can be identified and shut down
- Civil enforcement under consumer protection laws is viable
- Operators face real financial consequences even when overseas
- Public pressure combined with litigation works
Other jurisdictions are now studying San Francisco’s approach for their own enforcement actions.
Building a Strong Case#
If you’ve been victimized by AI deepfake pornography:
1. Preserve Evidence Immediately#
This is critical and time-sensitive:
- Screenshot everything: The images themselves, the URLs, any identifying information about the platform or uploader
- Document spread: Note every platform where the images appear, view counts, share counts
- Save communications: Any messages from the perpetrator, platform responses, or harassment related to the images
- Use archive tools: Services like Archive.org’s Wayback Machine can preserve evidence before it’s removed (see the sketch after this list)
- Record timestamps: Document when you first discovered the images and each subsequent instance
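Parts of this can be automated. Below is a minimal sketch, assuming Python with the requests library, that submits a URL to the Wayback Machine’s public “Save Page Now” endpoint and appends a timestamped SHA-256 record for a local screenshot; the log filename and CSV layout are illustrative assumptions.

```python
# Minimal evidence-preservation sketch. Assumes: pip install requests
# The log filename and CSV layout are illustrative, not prescribed anywhere.
import csv
import hashlib
from datetime import datetime, timezone

import requests

def archive_url(url: str) -> str:
    """Ask the Wayback Machine to capture a page; return the snapshot URL."""
    resp = requests.get(f"https://web.archive.org/save/{url}", timeout=120)
    resp.raise_for_status()
    return resp.url  # Wayback redirects to the archived snapshot

def log_evidence(path: str, note: str, log_file: str = "evidence_log.csv") -> None:
    """Append a record with a UTC timestamp and SHA-256 hash of a local file."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    with open(log_file, "a", newline="") as log:
        csv.writer(log).writerow(
            [datetime.now(timezone.utc).isoformat(), path, digest, note]
        )

# Example: archive the page, then log a screenshot you took of it.
snapshot = archive_url("https://example.com/offending-page")
log_evidence("screenshot.png", f"archived at {snapshot}")
```

Recording a hash at capture time helps show later that the files you produce in court match what you preserved.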
2. Report to Platforms#
Request removal through each platform’s reporting system:
- Most major platforms (Google, Meta, X) have specific reporting forms for nonconsensual intimate imagery
- Under the TAKE IT DOWN Act (after May 2026), platforms must remove content within 48 hours
- Keep records of all removal requests and platform responses
- Note any delays or refusals—these may support claims against platforms
3. Report to Law Enforcement#
File reports with:
- Local police: For state criminal charges
- FBI: For potential federal charges under the TAKE IT DOWN Act (especially for minor victims or cross-state distribution)
- Your state attorney general: Many have dedicated units for cyber exploitation
Provide all preserved evidence when filing reports.
4. File Civil Claims#
You may have civil claims against:
The Creator: If identifiable, the person who generated the deepfake images faces direct liability under state civil remedy statutes
The Distributor: Anyone who shared the images knowing they were nonconsensual (even if they didn’t create them)
Platform Operators: Website operators hosting nudification tools or distributing nonconsensual content may face liability under state consumer protection laws
Payment Processors: Some victims have succeeded in pressuring payment processors (Visa, Mastercard, PayPal) to cut off deepfake websites
5. Seek Removal Orders#
Consider petitioning courts for:
- Temporary restraining orders requiring removal
- Preliminary injunctions against further distribution
- Permanent injunctions as part of civil judgments
Some states allow expedited proceedings for image-based abuse cases.
6. Document Your Harm#
Build a record of the impact:
- Mental health treatment: Therapy records, psychiatric evaluations
- Employment effects: Lost jobs, failed background checks, workplace harassment
- Educational impact: School disruption, social isolation
- Relationship damage: Effects on family, romantic relationships
- Physical symptoms: Sleep disorders, anxiety, depression, PTSD
This documentation supports damages claims in civil litigation.
7. Consult Specialized Attorneys#
Deepfake cases require expertise in:
- Image-based abuse and cyber exploitation law
- State and federal privacy statutes
- Platform liability and Section 230
- Digital forensics and evidence preservation
- International jurisdiction (many platforms operate overseas)
Organizations like the Cyber Civil Rights Initiative can provide referrals to experienced attorneys.
Questions to Ask After Discovering Deepfakes#
When investigating your case:
- Where did the images originate? Can the creator be identified?
- What platforms are hosting the content? Are they U.S.-based?
- Has the content spread to multiple sites? How widely?
- Is there any watermarking or metadata that could identify the source? (A metadata-inspection sketch follows this list.)
- Did any platforms fail to remove content after you reported it?
- Are you a minor? (Different laws and penalties apply)
- In which states might you have venue for a civil lawsuit?
- Has the imagery been used for extortion, harassment, or fraud?
- Are any commercial websites monetizing the content?
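On the metadata question above: image files sometimes carry EXIF tags (generating software, creation time, device) that can point to the tool or the uploader. A minimal sketch using the Pillow library; note that most platforms strip metadata on upload, so an empty result proves nothing.

```python
# Minimal EXIF inspection sketch. Assumes: pip install pillow
# Many platforms strip metadata on upload, so empty output is common.
from PIL import Image
from PIL.ExifTags import TAGS

def dump_exif(path: str) -> dict[str, str]:
    """Return human-readable EXIF tags, e.g. 'Software' or 'DateTime'."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, str(tag_id)): str(value) for tag_id, value in exif.items()}

for name, value in dump_exif("suspect_image.jpg").items():
    print(f"{name}: {value}")
```

Even when EXIF data is stripped, a digital forensics expert may recover identifying signals from compression artifacts or platform-specific re-encoding, which is one reason to preserve original files rather than re-saved copies.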
The Impact on Victims#
The harm from deepfake pornography extends far beyond embarrassment:
Psychological Effects: Victims report severe anxiety, depression, PTSD, low self-esteem, and social withdrawal. Research shows 41% of women ages 18-29 self-censor online to avoid harassment.
Professional Consequences: Deepfake images have cost victims jobs, damaged professional reputations, and appeared in background checks.
Safety Risks: Images often accompany doxxing (publication of home address, workplace, etc.), leading to stalking and physical threats.
Relationship Damage: Victims describe family estrangement, broken relationships, and social isolation.
Ongoing Violation: Unlike some crimes, deepfake images can resurface indefinitely, re-traumatizing victims years later.
As one advocacy organization noted: “These videos and photos may be fake, but the emotional impacts are real, from disruptions to education, barriers to or loss of employment, social isolation, severe mental health consequences including PTSD, depression and even suicide.”
The Future of Deepfake Liability#
This area of law is evolving rapidly:
Federal Expansion: The TAKE IT DOWN Act is likely the first of several federal laws. Congress is considering additional legislation with private rights of action and enhanced penalties.
State Law Growth: With legislation pending in 40+ states, expect more robust civil remedies nationwide within the next two years.
Platform Accountability: After May 2026, platforms face FTC enforcement for failure to comply with takedown requirements. Class action litigation against major platforms is likely.
International Coordination: The EU has moved toward criminalizing deepfake pornography by 2027. Cross-border enforcement cooperation is increasing.
Technical Solutions: Detection tools are improving, though they remain in an arms race with generation technology. Some platforms now use AI to identify and block deepfakes.
Creator Accountability: As forensic technology improves, identifying anonymous creators becomes more feasible, making civil claims more practical.
For now, victims of AI deepfake pornography should understand that legal remedies exist and are strengthening. The combination of federal criminal law, state civil remedies, and aggressive enforcement actions like San Francisco’s campaign means that perpetrators and platforms face real consequences—and victims have genuine paths to justice.
Related Resources#
- AI Chatbots - AI-caused psychological harm and companion chatbot liability
- AI Hiring Discrimination - Algorithmic bias in employment screening
- Security AI - Surveillance, facial recognition, and privacy concerns
- Contact Us - Get help understanding your legal options
This information is for educational purposes and does not constitute legal advice. AI deepfake cases involve complex interactions between federal criminal law, state civil remedies, platform liability, and rapidly evolving technology regulations. Consult with qualified legal professionals to understand your rights.
If you are a victim of nonconsensual intimate imagery, the Cyber Civil Rights Initiative (cybercivilrights.org) provides resources and support.

