Decision & Law · AI Legal Intelligence
case-law · judicial-interpretation

Lehrman v. Lovo: Voice Cloning Without Consent Violates New York Identity Law

James Okafor
April 12, 2026
15 min read
3,088 words
voice cloning · NYCRL §50-51 · identity misappropriation · AI liability · state law enforcement · synthetic media

Educational Content – Not Legal Advice

This article provides general information. Consult a qualified attorney before taking action.

Disclaimer

This analysis is for educational purposes only and does not constitute legal advice. The information provided is general in nature and may not apply to your specific situation. Laws and regulations change frequently; verify current requirements with qualified legal counsel in your jurisdiction.

Last Updated: April 12, 2026

LEGAL INTELLIGENCE ALERT

Lehrman v. Lovo, Inc. (S.D.N.Y., July 10, 2025)

Classification: AI Voice Cloning Liability | NYCRL §50-51 Exposure | State Law Enforcement Risk


BOTTOM LINE UP FRONT

A federal trial court in New York ruled that synthetic voice cloning without consent violates New York Civil Rights Law §50-51 (voice misappropriation statute), not copyright or trademark law. The decision rejects copyright expansion but enables state-level identity protection that bypasses the intellectual property framework entirely.

For AI developers, voice actors, and platforms: This creates strict liability (no intent requirement) under NYCRL §50-51 for any voice clone used commercially in New York without explicit written consent for that specific use. The ruling applies narrowly to New York but signals enforcement-ready doctrine that other states (California, Massachusetts, Illinois) will likely adopt.

Immediate risk: If your AI system includes voice synthesis, voice cloning, or voice-based content generation, and you trained on voice data without explicit consent or sold clones commercially, you face NYCRL §50-51 exposure in New York and probable multistate exposure within 18-24 months.


EVENT

What happened: Paul Lehrman and Linnea Sage, professional voice actors, were contacted by Lovo, Inc. via Fiverr in 2019-2020 with offers to record scripts for "research purposes only" and "internal test scripts for radio ads." They agreed. Lovo paid them $1,200 and $400, respectively, and obtained their voice recordings. Months later, Lehrman heard what sounded like his own voice narrating a podcast produced by MIT. Investigation revealed Lovo had trained a voice-cloning AI model (called "Genny") using their recordings, generated synthetic clones under fictional names ("Kyle Snow" for Lehrman, "Sally Coleman" for Sage), and commercialized those clones to paying subscribers without any additional authorization.

Lovo's defense: The company argued that (1) copyright law didn't apply because voice clones are "independent fixations" of sound (not copies), (2) trademark law didn't apply because actors' voices are services, not source identifiers, and (3) whatever rights existed were licensed under Fiverr's standard Terms of Service.

Court's ruling: The court rejected all three defenses but, critically, did not expand copyright or trademark law. Instead, it reached back to a 1903 New York privacy statute originally written to protect people's faces in photographs (and later amended to cover voices) and applied it to voice clones.

Procedural posture: Motion to dismiss (Rule 12(b)(6)). The court allowed claims to proceed under NYCRL §50-51, state consumer protection law (GBL §349-350), and breach of contract. It dismissed copyright and Lanham Act claims.

Timing and relevance: This is the first major federal ruling on AI voice cloning liability. It breaks the deadlock between copyright (which does not protect voices as abstract entities) and state identity law (which does). The decision came just as generative AI voice platforms were scaling commercialization.


LEGAL ARCHITECTURE

Governing regime: NYCRL §50-51 (1903 statute, as amended, prohibiting use of a person's name, portrait, picture, or voice for advertising or trade purposes without written consent).

Key precedent: Lohan v. Take-Two Interactive Software, Inc., 31 N.Y.3d 111 (2018)—established that computer-generated digital avatars fall within NYCRL §50-51 protection. The Lehrman court extended this logic from visual avatars to audio clones.

Statutory interpretation: The court read "voice" in NYCRL §50 to cover any recognizable audio representation, whether original or synthetic, if it is (1) identifiable as the plaintiff, (2) used in commerce or advertising, (3) without written consent, and (4) within New York.

Standard of proof: For a motion to dismiss, plaintiffs must plead facts that make the claim plausible. Here, they did: they alleged Lovo obtained voice recordings under false pretenses (for "research only"), used them to train a commercial AI, and distributed the resulting clones commercially. That suffices.

Statute of limitations: NYCRL §51 carries a 1-year limit, but the court applied a "republication" exception: each time Genny generates a new synthetic audio clip, it constitutes a fresh "publication" of the misappropriation. This potentially extends liability indefinitely as long as clones remain in use.

No intent requirement: The statute is strict liability. Lovo's good faith belief that it was permitted, or lack of knowledge that cloning would violate the law, is irrelevant.


PARSED ANALYSIS

What the Court Actually Did

The court made a categorization choice rather than expanding existing law:

  1. Copyright rejected: § 114(b) of the Copyright Act explicitly excludes "independent fixations" of sound—even if indistinguishable—from infringement. A voice clone, no matter how perfect, is a new recording, not a copy of the original. The court refused to reinterpret this.

  2. Trademark rejected: Section 43(a)(1)(A) of the Lanham Act requires a "distinctive mark" that serves as a source identifier. Voice actors' voices function as services they sell, not brands they use to signal the source of goods. Unlike a celebrity, whose fame itself is the product, a voice actor is paid for time and skill, not for name recognition.

  3. NYCRL §50-51 accepted: The court read this state statute as protecting identity attributes (including voice) from commercial misappropriation, separate from and prior to copyright and trademark concerns. A 1903 statute turned out to be an adequate hook for 2025 technology.

The Operative Holding

NYCRL §50-51 creates a right of voice identity that applies when:

  • The voice is recognizable as belonging to a specific person.
  • It is used for advertising or commercial purposes.
  • No written consent was obtained.
  • The use occurred in New York.

This right operates independently of copyright status. You can comply with copyright law (because the AI output is not a copy) and still violate NYCRL §50-51 (because the voice is identifiable and commercialized without consent).

Signal for Future Litigation

This decision does not ban voice cloning. It conditions it on explicit written consent for each intended use.

More importantly, it establishes a framework that other states will likely adopt:

  • California already has Civil Code §3344 (right of publicity), which is broader than NYCRL.
  • Massachusetts and Illinois have similar identity statutes.
  • Federal courts will likely recognize state-law claims even when copyright and Lanham Act fail, creating a "trap door" liability for platforms.

The decision also signals that consent cannot be implicit or inferred from silence. The fact that Lehrman and Sage consented to voice recording does not constitute consent to cloning, commercialization, or open-ended AI training. Consent is specific to the stated purpose.

The Narrow Unsettled Issue

Fair use as a defense to NYCRL §50-51: The court did not address whether fair use (editorial, satirical, educational use) can serve as a defense to a §50-51 claim. This remains open and will be outcome-determinative in cases involving parody, documentary, or transformative uses of cloned voices.


RISK LAYER

Litigation Probability

HIGH to INEVITABLE for any company currently offering voice cloning, voice synthesis, or text-to-speech products that use training data obtained without explicit per-use consent.

Lehrman v. Lovo removes the legal barrier (copyright) that previously made such claims fail at the motion-to-dismiss stage. Now they survive. That invites litigation.

Liability Exposure

Multi-plaintiff to class-wide.

  • Each voice actor whose voice was cloned without consent = one §50-51 violation per use.
  • Class action risk is high: voice actors can allege they form a class of persons whose voices were misappropriated for AI training.
  • Damages include statutory damages ($250-$750 per violation) + actual damages + injunctive relief.

Enterprise exposure: If your company has cloned 50 voices without explicit consent, and each clone has generated 100 uses (in ads, tutorials, customer deployments), that's 5,000 potential violations. Under New York's single-publication rule with republication exception, liability could accumulate.
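The arithmetic above can be sketched as a back-of-the-envelope estimate. This is an illustration only, not a damages model; the counts and the per-violation dollar range are assumptions taken from this analysis:

```python
# Back-of-the-envelope NYCRL §50-51 exposure estimate (illustrative only).
# The per-violation range is an assumption from this analysis, not a
# damages model; actual damages depend on facts, venue, and proof.

def estimate_exposure(num_voices: int, uses_per_voice: int,
                      low_per_violation: int = 250,
                      high_per_violation: int = 750) -> dict:
    """Treat each commercial use of each clone as a separate potential violation."""
    violations = num_voices * uses_per_voice
    return {
        "violations": violations,
        "low": violations * low_per_violation,
        "high": violations * high_per_violation,
    }

# 50 cloned voices, each deployed in 100 uses -> 5,000 potential violations.
print(estimate_exposure(50, 100))
```

Under the republication exception, the `uses_per_voice` figure keeps growing as long as clones remain in use, which is why the exposure compounds rather than caps.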

Enforcement Priority

YES. State attorneys general (especially New York, California, Massachusetts) are prioritizing AI-related consumer deception. The NY AG's office has been investigating AI voice cloning specifically. This decision will accelerate enforcement.

Expect:

  • Cease-and-desist letters from state AG offices in the quarters following the decision.
  • Demand for immediate removal of non-consented voice clones.
  • Possible fines for deceptive advertising (claiming "full commercial rights" when state law prohibited certain uses).

Regulatory Trajectory

TIGHTENING rapidly.

  • The EU AI Act (Regulation 2024/1689) does not single out voice cloning, but its transparency obligations for synthetic media and the GDPR's rules on training data push in the same direction. This ruling aligns with that trajectory.
  • US federal legislation is being drafted (e.g., proposed bills in Congress on "synthetic media" and "deepfakes"). State courts are moving faster than Congress.
  • Expect state-level voice-protection statutes in California, Massachusetts, and Illinois within 12-18 months, copying the NYCRL logic.

STRATEGIC IMPLICATIONS

For AI Developers and Platforms

Immediate compliance gap: If your voice synthesis product relies on training data obtained without explicit consent for each use case (commercial, educational, internal), you have a legal problem in New York and probable exposure in 5+ other states within 18 months.

Product architecture decision: You must choose between:

  1. Consent-first model: Obtain explicit, per-use written consent for every voice clone you create. Operationally difficult but legally safe.

  2. Synthetic-voice-only model: Train models on synthetic voices (generated data), licensed voice talent (with broad commercial licenses), or public-domain voices (if any exist). More expensive but avoids §50-51 exposure.

  3. Jurisdiction-limited model: Geofence your cloning services to exclude New York and other high-liability states. Operationally complex and reputationally risky.

  4. Indemnity-backed model: Offer cloning services but require customers to warrant they own or have consented to the voice. Pass liability to end users. This creates contractual risk transfer but doesn't eliminate platform exposure to claims from the original voice actors.
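The jurisdiction-limited model (option 3) amounts to a simple gate at the request layer. A minimal sketch, assuming a hypothetical set of high-liability state codes and a caller that already knows the user's state:

```python
# Jurisdiction gate for a cloning endpoint (illustrative sketch).
# The state list is a hypothetical assumption, not legal guidance on
# which jurisdictions actually carry §50-51-style exposure.

HIGH_LIABILITY_STATES = {"NY", "CA", "MA", "IL"}

def cloning_allowed(user_state: str) -> bool:
    """Block voice-cloning requests originating from high-liability states."""
    return user_state.strip().upper() not in HIGH_LIABILITY_STATES

print(cloning_allowed("TX"))  # True
print(cloning_allowed("ny"))  # False
```

As the text notes, this is a tactical holding pattern: the gate is cheap to build, but it does nothing about clones already distributed into excluded states.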

Training data audit: Review all voice data used to train your models. If any was obtained without explicit written consent stating "training for commercial voice cloning," you have remediation work:

  • Remove non-consented voices from model weights (expensive, may require retraining).
  • License voices retroactively (if actors will agree).
  • Disable clones derived from non-consented sources.

For Voice Actors and Talent

Leverage point: If your voice was used for AI training without explicit consent, you now have a viable claim. This changes negotiating power. You can demand:

  • Removal of existing clones.
  • Licensing fees for past use.
  • Explicit, separate consent for future use (not bundled with general "commercial use" licenses).

Contractual hygiene: When licensing your voice, require:

  • "This consent does not authorize AI training or voice cloning."
  • "Any use for synthetic voice generation requires separate written consent."
  • "Each distinct use case (commercial, educational, internal) requires separate authorization."

For In-House Legal Teams

Audit your AI products and partnerships:

  • Do any of your vendors (Lovo, AI voice platforms, speech-to-text services) use voice cloning?
  • Did you or your vendors obtain voice data with consent for that specific use?
  • Are you making representations to customers about "full commercial rights" to cloned voices?

If yes to any of these, you have regulatory and litigation risk.

Consumer-facing disclosure: If you sell or use cloned voices, you must:

  • Disclose to end users that clones may violate state law in certain uses.
  • Represent the scope of legal use (e.g., "not permitted for advertisement in New York without additional consent").
  • Implement geographic restrictions if necessary.

This mimics how platforms handle COPPA (Children's Online Privacy Protection Act), TCPA (Telephone Consumer Protection Act), and GDPR: built-in legal constraints in the product.

For Competitors of Lovo

Differentiation opportunity: If you are competing against Lovo or similar platforms, you can credibly claim:

  • "We only use voice talent with explicit consent."
  • "We generate synthetic voices without copying existing voices."
  • "We comply with state voice-protection laws."

This becomes a sales argument. Customers will care about legal risk.

For Investors in AI Voice Platforms

Due diligence red flag: If the company's voice clones are derived from non-consensual training data or are commercialized without explicit per-use licensing, the company has contingent liability that may not appear on the balance sheet.

Valuation impact:

  • Litigation settlement exposure could be $5M-$50M per company depending on the number of clones and uses.
  • Injunctive relief (forced removal of clones) could reduce product functionality by 30-70%.

Request representations that all voice data was obtained with consent and that all commercial uses are properly licensed.


DECISION OUTPUT

Immediate Action (Next 30 Days)

  1. Audit your voice data sources: Document every voice recording used to train any AI model. For each, identify:

    • Source (Fiverr, direct hire, public dataset, synthetic, licensed library).
    • Consent language (what did the actor agree to?).
    • If unclear, assume non-consent.
  2. Categorize your exposure:

    • Green (fully consented): No action needed.
    • Yellow (ambiguous consent): Flag for licensing or remediation.
    • Red (no consent or deceptive consent): Plan removal or retroactive licensing.
  3. Notify your insurance broker: Errors & Omissions (E&O) or Cyber Liability carriers should be aware of NYCRL §50-51 exposure. Some policies exclude IP claims; others may cover this as "personal injury" (privacy/publicity). Clarify coverage.

  4. Prepare a customer communication (draft, do not send yet): If you sell voice clones or voice synthesis, you will eventually need to disclose the legal constraint. Prepare the message now so you can deploy it quickly when regulators or customers demand it.
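The triage rule in steps 1 and 2 can be sketched as a simple classifier. Field names and the keyword matching are hypothetical simplifications; a real audit would attach the actual consent documents to each record and have counsel review the ambiguous ones:

```python
# Consent triage for voice-data audit records (illustrative sketch).
# Mirrors the green/yellow/red scheme above; per step 1, unclear or
# missing consent is treated as non-consent.

from dataclasses import dataclass

@dataclass
class VoiceRecord:
    source: str            # e.g. "fiverr", "licensed library", "synthetic"
    consent_language: str  # what the actor actually agreed to, or ""

def triage(record: VoiceRecord) -> str:
    text = record.consent_language.lower()
    if not text:
        return "red"       # no documented consent: assume non-consent
    if "voice cloning" in text and "commercial" in text:
        return "green"     # explicit consent covering the actual use
    if "research" in text or "internal" in text:
        return "red"       # consent scoped to a different purpose
    return "yellow"        # ambiguous: flag for licensing or remediation

print(triage(VoiceRecord("fiverr", "research purposes only")))         # red
print(triage(VoiceRecord("direct hire", "commercial voice cloning")))  # green
```

The key design point, echoing the Lehrman facts, is that "research purposes only" language routes to red, not yellow: consent scoped to one purpose is treated as no consent for cloning and commercialization.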

Monitoring Item (Next 90 Days)

  • Watch for regulatory response. The New York Attorney General will likely issue guidance on NYCRL §50-51 and AI voice cloning; expect it within this monitoring window.
  • Track state-level legislation. California and Massachusetts legislatures are moving on synthetic media bills. These could codify or expand the Lehrman logic.
  • Follow appellate activity. Lovo will likely appeal or Lehrman will settle. Settlement language matters: if it includes broad admissions or damage schedules, it becomes a precedent for future plaintiffs.

Decision Point (Next 6 Months)

Make a product architecture choice within the next six months:

Option A (Consent-first):

  • Feasible if you are building a new product or have few deployed clones.
  • Cost: High (operational overhead of consent management).
  • Risk: Low (defensible under NYCRL §50-51).
  • Timeline: 6-12 months to implement.

Option B (Synthetic-only):

  • Feasible if you can regenerate your training data using synthetic voices or licensed talent.
  • Cost: Moderate to high (model retraining).
  • Risk: Low (no identity claims because no real voices).
  • Timeline: 3-6 months.

Option C (Jurisdiction-limited):

  • Feasible as a tactical holding pattern while you litigate or license.
  • Cost: Low (engineering change).
  • Risk: Moderate (customers and regulators in excluded states will object; reputational risk).
  • Timeline: Immediate.

Option D (Indemnity transfer):

  • Feasible if your customer base accepts legal risk transfer.
  • Cost: Moderate (contracting overhead, possible premium for assuming risk).
  • Risk: Moderate to high (you remain exposed if customer indemnity is uncollectible or invalid).
  • Timeline: 2-3 months.

Do not choose Option D without legal review of your customer contracts and insurance coverage.

What Would Change My View

  1. Federal legislation preempting NYCRL §50-51 for AI voice synthesis. Unlikely but possible if Congress acts before states expand. This would eliminate the liability framework Lehrman established.

  2. Fair use as a complete defense to §50-51. If a court rules that fair use (parody, educational, documentary) is a blanket defense to NYCRL §50-51, the exposure narrows significantly. Lehrman left this open.

  3. Retroactive licensing becoming standard. If voice actors become willing to license retroactively (for a fee), the damages picture improves. Right now, most will refuse, making remediation expensive.

  4. Lovo appeal success on state law preemption grounds. If Lovo wins an appeal arguing federal IP law preempts state identity law, the entire Lehrman framework collapses. This is a 15-20% probability event.


COUNSEL NOTES

For Outside Counsel

You will see a surge in AI voice cloning disputes. Prepare for:

  • §50-51 claims in New York federal court and state court.
  • Similar state-law claims in California (§3344), Illinois (recognized common law), Massachusetts.
  • Class certification motions where the client has cloned 10+ voices without consent; certification will likely succeed.
  • Summary judgment defense will be difficult because consent (or lack thereof) is often factual.
  • Damages calculation will center on number of clones × number of commercial uses × statutory damages (often $250-$750 per violation in state-law claims).

Negotiation point: If you represent Lovo or a similar defendant, push hard for:

  • Narrow class definition (not all non-consented voices, only commercially distributed clones).
  • Injunctive relief limited to removal, not business model restructuring.
  • Clear settlement language (avoid admissions that invite regulatory follow-on).

For In-House Counsel

You are the bottleneck. Make sure your product, engineering, and compliance teams understand:

  • Consent is use-specific. Consent to "record my voice" ≠ consent to "train an AI on my voice" ≠ consent to "commercialize clones of my voice."
  • Fiverr's Terms of Service do not override NYCRL §50-51. Platform ToS cannot contract away statutory rights.
  • Strict liability means intent is irrelevant. You cannot argue "we didn't know" or "we acted in good faith."
  • State-law claims survive federal-law defenses. Just because copyright permits something doesn't mean state law will.

Document your remediation steps. If you catch a compliance gap and fix it, document the fix. If litigation later arises, that demonstrates responsible behavior and may inform damages.

For Regulatory Counsel

Expect FTC and state AG activity:

  • FTC: May open an investigation into whether companies making "full commercial rights" representations are engaging in unfair or deceptive practices. The FTC focuses on consumer harm, and customers of Lovo who used cloned voices without realizing the legal exposure could be seen as harmed.

  • NY AG: Will likely issue guidance on NYCRL §50-51 and AI. This may include a safe harbor if companies obtain consent, or a blacklist of common violations.

  • State AGs generally: California AG, Massachusetts AG are interested. Expect coordination.

Proactive step: Consider filing an amicus brief or comments if regulators seek guidance on NYCRL §50-51 and voice cloning. Industry participation in regulatory development is critical.


CONCLUSION

Lehrman v. Lovo reframes voice cloning liability from an intellectual property problem to an identity/privacy problem. Federal IP law was never designed to protect abstract voice characteristics; state identity law was. The court took the path of least institutional resistance and let state law handle it.

This creates real liability for AI voice platforms, immediate pressure to audit training data, and strong incentives to redesign around consent and synthetic voices. It is not a novel or surprising ruling, but it is outcome-determinative for the voice-cloning industry.

The clock is running. You have 6 months to make a product and compliance decision. After that, regulators and plaintiffs will force your hand.


Classification: AI Voice Cloning | NYCRL §50-51 | State Law Enforcement | Class Action Risk
Jurisdiction: New York (primary); California, Massachusetts, Illinois (secondary within 12-18 months)
Date of Decision: July 10, 2025
Forum: United States District Court, Southern District of New York
Judge: J. Paul Oetken
