AI Voice Cloning Fraud & Liability

AI Voice Cloning Fraud: Your Rights and Legal Options

Three seconds. That’s all it takes for today’s AI to clone your voice with 85% accuracy. The source audio can be scraped from a voicemail greeting, a social media video, a work presentation, or a podcast appearance. Armed with this synthetic voice, fraudsters are impersonating family members in fake emergencies, executives authorizing wire transfers, and loved ones in distress—stealing hundreds of millions of dollars from victims who have virtually no legal remedy after the fact. When Philadelphia attorney Gary Schildhorn testified before the Senate about an AI scam that used his son’s cloned voice, he delivered a devastating assessment: “In this case, there is no remedy.”

How AI Voice Cloning Fraud Works

Voice cloning technology has become remarkably accessible and frighteningly accurate. What once required hours of audio samples and sophisticated equipment now takes seconds with free consumer software.

The Technology

Real-Time Voice Synthesis: Modern AI can generate speech in a cloned voice instantly during phone calls, allowing fraudsters to have live conversations as their victim’s family member or colleague.

Text-to-Speech Cloning: Software converts typed text into audio using a target’s voice characteristics, enabling pre-recorded scam calls and voicemails.

Voice Conversion: AI can transform one speaker’s voice to sound like another in real time, allowing scammers to speak naturally while sounding like someone else.

Minimal Training Data: Unlike earlier technology that required extensive voice samples, current AI cloning tools need only 3 seconds of audio to create a voice match that’s 85% accurate. Longer samples produce near-perfect clones.

The Scale of the Problem

AI voice cloning fraud has exploded into a global crisis:

  • $200+ million in deepfake-enabled fraud losses in Q1 2025 alone
  • 845,806 imposter scam reports filed with the FTC in 2024, resulting in $2.95 billion in losses
  • 442% surge in voice-based phishing (vishing) attacks from H1 to H2 2024
  • 77% of voice clone scam victims lost money
  • Fewer than 5% of funds lost to sophisticated vishing scams are ever recovered
  • Fraud losses from generative AI expected to grow from $12.3 billion (2024) to $40 billion by 2027
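
To make that last projection concrete: growth from $12.3 billion to $40 billion over three years implies roughly 48% compound annual growth, as this quick back-of-the-envelope sketch shows (the dollar figures come from the list above; the rest is just arithmetic):

```python
# Implied compound annual growth rate (CAGR) of generative-AI fraud
# losses, using the projection cited above: $12.3B (2024) to $40B (2027).
losses_2024 = 12.3   # billions of dollars
losses_2027 = 40.0   # billions of dollars
years = 2027 - 2024  # three-year horizon

cagr = (losses_2027 / losses_2024) ** (1 / years) - 1
print(f"Implied annual growth: {cagr:.0%}")  # roughly 48% per year
```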

Types of Voice Cloning Scams

Family Emergency Scams (Grandparent Scams): The most devastating form targets elderly victims. Scammers clone a grandchild’s or child’s voice and call pretending to be in jail, injured in an accident, or kidnapped. The emotional urgency overwhelms critical thinking.

CEO Fraud / Business Email Compromise: Criminals clone an executive’s voice to authorize fraudulent wire transfers. The Arup case—where a finance worker transferred $25 million after a video call with deepfaked executives—demonstrates the sophistication of these attacks.

Romance and Relationship Scams: Scammers use voice cloning to deepen fake online relationships, convincing victims they’re speaking to someone they’ve developed feelings for.

Debt Collection and IRS Scams: Fraudulent callers use authoritative cloned voices to threaten victims with arrest or legal action unless they pay immediately.

Kidnapping and Ransom Scams: Criminals clone a loved one’s voice screaming or crying, demanding ransom while the “victim” is actually safe.

The $25 Million Wake-Up Call: The Arup Deepfake

In February 2024, a finance worker at Arup—a prestigious 78-year-old British engineering firm with 18,000 employees worldwide—received an email from the company’s CFO requesting confidential transactions. Suspicious, the employee initially dismissed it as phishing.

Then came the video call.

The finance worker joined a conference with what appeared to be the CFO and several colleagues. They looked right. They sounded right. The employee recognized their faces and voices. Reassured, he followed instructions to make 15 transfers totaling $25 million to five Hong Kong bank accounts.

Every person on that call was an AI-generated deepfake.

Hong Kong police determined the fraudsters created the deepfakes using publicly available video and audio from online conferences and company meetings. As Arup’s Chief Information Officer Rob Greig later explained: “What happened at Arup—I would call it technology-enhanced social engineering. It wasn’t even a cyberattack in the purest sense. None of our systems were compromised and there was no data affected.”

The Arup case represents the new frontier of AI fraud: sophisticated enough to fool trained professionals, devastating enough to steal tens of millions, and almost never recoverable.

The Elder Fraud Crisis

Voice cloning has supercharged scams targeting older adults, who already suffer disproportionate fraud losses:

The Numbers

  • $700 million lost by older adults to imposter scams in 2024
  • Four-fold increase since 2020 in reports from older adults losing $10,000+ to impersonation scams
  • Eight-fold increase in losses over $100,000, from $55 million (2020) to $445 million (2024)
  • Adults in their 60s reported $1.18 billion stolen through fraud—more than any other age group
  • 41% of older adults losing $10,000+ were initially contacted by phone

Why Elders Are Targeted

Trust in Phone Calls: Older Americans grew up when phone calls were the primary communication method. They’re more likely to answer unknown numbers and trust voices.

Emotional Vulnerability: Scams impersonating grandchildren exploit powerful emotional bonds. When an “injured grandchild” calls in distress, the instinct to help overrides skepticism.

Isolation: Older adults may have less day-to-day contact with family members, making it harder to quickly verify emergencies.

Financial Access: Many older adults have retirement savings, home equity, and established banking relationships—making them lucrative targets.

Shame and Silence: Victims often don’t report scams due to embarrassment or fear of appearing incapable of managing their affairs. The actual losses are estimated to far exceed reported figures.

Gary Schildhorn’s Story

Philadelphia attorney Gary Schildhorn was driving to work when his phone rang. The voice on the other end was unmistakably his adult son Brett—crying, panicked, saying he’d been in a car accident, broken his nose, hurt a pregnant woman, been arrested, and was in jail.

“There was no doubt in my mind that it was his voice on the phone—it was the exact cadence with which he speaks,” Schildhorn testified before the Senate. “I sat motionless in my car just trying to process these events.”

A fake “attorney” then called, demanding $9,000 in cryptocurrency for bail. Schildhorn was moments from sending the money when he called his daughter-in-law to check. Then Brett FaceTimed: “Dad, my nose is fine. I’m fine. You’re being scammed.”

Schildhorn contacted local police and the FBI. Because he hadn’t actually lost money, they told him they were unable to act. He went public with his story, and other victims reached out, many of whom hadn’t been so lucky.

His Senate testimony crystallized the legal vacuum facing victims: “It’s fundamental if we’re harmed by somebody, there’s a remedy either through the legal system or through law enforcement. In this case, there is no remedy, and that fundamental basis is broken.”

The Legal Landscape: Limited Remedies

Unlike other AI harms where victims have emerging legal options, voice cloning fraud exists in a legal gray zone with fragmented and often inadequate protections.

Federal Law

FCC Ruling (February 2024): The FCC declared AI-generated voices in robocalls illegal under the Telephone Consumer Protection Act (TCPA). This ruling:

  • Took effect immediately
  • Allows the FCC to fine violators over $23,000 per call
  • Empowers state attorneys general to prosecute
  • Permits victims to sue robocallers using AI
  • Enables blocking of carriers facilitating such calls

Limitations: The FCC ruling targets robocalls specifically. It doesn’t cover live conversations, video calls, or non-telephonic fraud. It provides enforcement tools but doesn’t create easy pathways for individual victims to recover losses.
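
Where a private TCPA suit is viable, the statutory damages are at least concrete: $500 per violating call, which a court may treble to $1,500 for willful or knowing violations (47 U.S.C. § 227(b)(3)). A minimal sketch, with a hypothetical call count:

```python
# Hypothetical sketch of TCPA statutory damages in a private suit:
# $500 per violating call, up to treble ($1,500) for willful violations.
# The call count below is made up for illustration.
calls = 12
per_call_damages = 500      # 47 U.S.C. § 227(b)(3)
treble_multiplier = 3       # available for willful or knowing violations

base = calls * per_call_damages
maximum = base * treble_multiplier
print(f"Base statutory damages: ${base:,}")     # $6,000
print(f"Maximum if willful:     ${maximum:,}")  # $18,000
```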

No Comprehensive Federal Legislation: Despite proposed bills like the NO AI FRAUD Act (introduced January 2024), Congress has not passed comprehensive federal legislation protecting voice rights or providing victim remedies for voice cloning fraud.

State Laws

Tennessee ELVIS Act (Effective July 2024): The first state law explicitly addressing AI voice cloning:

  • Adds “voice” (actual or simulated) to protected publicity rights
  • Creates civil cause of action against AI tools used for unauthorized voice replication
  • Criminal penalties: Class A misdemeanor (up to 11 months, 29 days jail; fines up to $2,500)
  • Fair use exemptions for news, commentary, satire, and parody

California: Longstanding publicity rights cover voice (name, image, likeness, and voice—“NIL+V”), but weren’t drafted with AI in mind.

Other States: Approximately 39 states have some form of publicity rights legislation, but most don’t explicitly address AI voice cloning or provide clear remedies for fraud victims.

The Lehrman v. Lovo Decision (2025)

A New York federal court case involving voice actors whose voices were cloned without authorization offers guidance on where legal claims succeed and fail:

Claims That Survived:

  • New York Right of Publicity (Civil Rights Law §§50-51): The court allowed these claims to proceed, interpreting the statutes broadly to cover digital voice replicas
  • State Consumer Protection (General Business Law §§349-350): Claims based on misrepresentations about licensing rights survived dismissal

Claims Dismissed:

  • Federal Trademark (Lanham Act): Court rejected false association claims, distinguishing voices as “products” rather than “source identifiers”
  • Copyright (Derivative Works): AI-generated clones aren’t derivative works because copyright “protects only the original sound recordings…not the abstract qualities of a voice”

Key Takeaway: State laws provide stronger protections than federal IP statutes for personal identity appropriation through AI—but these protections were designed for commercial misappropriation, not fraud victims trying to recover stolen funds.

The Recovery Problem

Even when victims identify fraudsters, recovering money is nearly impossible:

  • Funds are rapidly laundered through money-mule chains and crypto mixers
  • Fewer than 5% of losses to sophisticated vishing scams are ever recovered
  • Scammers often operate from overseas, beyond U.S. jurisdiction
  • By the time fraud is discovered, money has been converted to cryptocurrency and dispersed
  • Banks are generally not liable for authorized transfers, even if induced by fraud

Building a Case: What You Can Do

If you’ve been targeted by AI voice cloning fraud:

1. Stop and Verify Immediately

Before transferring any money:

  • Hang up and call back using a known number (not one provided by the caller)
  • Use a family code word established in advance for emergencies
  • Video call the person supposedly in distress
  • Contact other family members to verify the emergency
  • Ask personal questions only the real person would know

2. If You’ve Already Been Victimized

Act within minutes:

  • Contact your bank immediately to attempt to halt or reverse transfers
  • If cryptocurrency was involved, report to the exchange
  • File a police report—this creates a record even if immediate action isn’t possible
  • Report to the FBI’s Internet Crime Complaint Center (IC3) at ic3.gov
  • File an FTC complaint at reportfraud.ftc.gov

3. Preserve Evidence

Document everything:

  • Phone records showing the call
  • Any voicemails or recorded messages
  • Wire transfer receipts and bank statements
  • Screenshots of any text messages or emails
  • Notes on exactly what was said during the call
  • Timeline of events

4. Report to Multiple Agencies

  • Local police: For criminal investigation
  • FBI (IC3): For federal tracking of fraud patterns
  • FTC: For consumer protection records
  • State Attorney General: For potential enforcement action
  • Your state’s adult protective services: If an elderly victim was targeted

5. Consult an Attorney

While recovery is difficult, an attorney can assess:

  • Whether the scammer is identifiable and reachable
  • Bank liability for failing to flag suspicious transactions
  • State consumer protection claims
  • Insurance coverage (some homeowner’s policies cover certain fraud losses)

6. Consider Psychological Support

Voice cloning fraud is emotionally devastating. Victims often experience:

  • Shame and self-blame
  • Anxiety about future calls
  • Damaged trust in technology and relationships
  • Financial stress from losses

Counseling can help process these impacts.

Questions to Ask After Voice Cloning Fraud

When investigating your case:

  • How did the scammer obtain the voice sample? (Social media, voicemail, public recordings?)
  • What verification did your bank require before processing the transfer?
  • Did the bank flag the transaction as suspicious? Why or why not?
  • Were there any red flags in the call that were missed in the moment?
  • What jurisdiction did the scammer operate from?
  • Was cryptocurrency involved? Which exchange?
  • Are there other victims of the same scam who might support a collective action?
  • Does your homeowner’s or renter’s insurance cover fraud losses?

Protecting Yourself and Your Family

Prevention is far more effective than post-fraud remedies:

Establish a Family Code Word

Create a secret word or phrase known only to family members. In any emergency call requesting money, ask for the code word. No code word = hang up and verify.

Limit Voice Exposure Online

  • Review privacy settings on social media
  • Consider whether voice recordings (videos, podcasts, voicemails) need to be public
  • Be cautious about voice-enabled devices and services that store recordings

Educate Elderly Family Members

  • Discuss voice cloning technology and its capabilities
  • Role-play potential scam scenarios
  • Establish verification protocols for any financial requests
  • Consider setting up bank alerts for large transactions
  • Create a list of trusted contacts to call before sending money

Enable Bank Protections

  • Set up alerts for wire transfers and large withdrawals
  • Consider requiring dual authorization for significant transactions
  • Discuss fraud protection options with your bank
  • Review which accounts family members have access to

Be Skeptical of Urgency

Scammers create artificial time pressure. Legitimate emergencies can wait 10 minutes for verification. If someone demands immediate payment and says not to tell anyone, that’s a red flag.

The Future of Voice Cloning Liability

This area of law is dangerously underdeveloped:

Federal Action Needed: Comprehensive federal legislation—like the stalled NO AI FRAUD Act—would establish baseline protections across all states. Current patchwork laws leave most victims without clear remedies.

Bank Liability Reform: Financial institutions could be required to implement AI-detection technology and additional verification for high-risk transfers. Some advocates argue banks should bear more liability for authorized-push-payment fraud.

Caller Authentication: Protocols like STIR/SHAKEN, which cryptographically attest that the number displayed on a recipient’s caller ID is legitimate, could be extended across more networks and enforced more strictly, making spoofed calls easier to detect and block.
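
For a concrete picture, here is a minimal structural sketch of a STIR/SHAKEN attestation token (a “PASSporT,” RFC 8225, with the SHAKEN extension of RFC 8588). Every value below is a placeholder, and the carrier’s cryptographic signature, which is what actually makes the attestation trustworthy, is omitted:

```python
import base64
import json

# Minimal structural sketch of a STIR/SHAKEN PASSporT (RFC 8225 / RFC 8588).
# All values are placeholders; a real token is signed with the originating
# carrier's certificate, and that signature (omitted here) is what proves
# the attestation wasn't forged.

def b64url(obj: dict) -> str:
    """JSON-encode and base64url-encode one token segment."""
    raw = json.dumps(obj, separators=(",", ":")).encode()
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

header = {
    "alg": "ES256",            # signature algorithm
    "ppt": "shaken",           # PASSporT extension for SHAKEN
    "typ": "passport",
    "x5u": "https://certs.example-carrier.com/cert.pem",  # placeholder URL
}
payload = {
    "attest": "A",                     # A = carrier fully attests to the caller
    "dest": {"tn": ["12025550147"]},   # destination number (placeholder)
    "iat": 1700000000,                 # issued-at timestamp
    "orig": {"tn": "12025550100"},     # asserted originating number (placeholder)
    "origid": "de305d54-75b4-431b-adb2-eb6b9e546014",  # opaque call identifier
}

token = f"{b64url(header)}.{b64url(payload)}.<signature>"
print(token)
```

A terminating carrier that cannot verify the signature, or that sees only partial (“B”) or gateway (“C”) attestation, can flag or block the call before it ever rings.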

AI Platform Responsibility: Companies providing voice cloning tools could face requirements to prevent misuse, similar to “know your customer” rules in banking.

Insurance Products: As AI fraud becomes more prevalent, expect new insurance products specifically covering voice cloning and deepfake losses.

Detection Technology: AI detection tools are improving, but remain in an arms race with generation technology. Future phone systems may automatically flag synthetic voices.

For now, victims of AI voice cloning fraud face a harsh reality: the technology to commit these crimes vastly outpaces the legal and technical infrastructure to prevent or remedy them. Until federal and state law catches up, prevention—family code words, skepticism of urgent requests, and verification before any transfer—remains the best protection.

Related Resources

  • AI Deepfakes - Deepfake pornography and image-based abuse
  • AI Chatbots - AI-caused psychological harm and manipulation
  • Security AI - Surveillance, facial recognition, and privacy
  • Contact Us - Get help understanding your legal options

This information is for educational purposes and does not constitute legal advice. AI voice cloning fraud cases involve complex interactions between federal communications law, state consumer protection statutes, banking regulations, and rapidly evolving technology. Consult with qualified legal professionals to understand your rights.

If you believe you’ve been targeted by fraud, report to the FBI’s IC3 (ic3.gov) and the FTC (reportfraud.ftc.gov) immediately.
