Decision&Law · AI Legal Intelligence
jurisprudence · professional-responsibility

Medal v. Amazon: Platform Liability for AI-Generated Content

James Okafor
March 28, 2026
14 min read
3,200 words
platform liability · Section 230 · generative AI · Amazon · tort law · Rule 11


Disclaimer

This analysis is for educational purposes only and does not constitute legal advice. The information provided is general in nature and may not apply to your specific situation. Laws and regulations change frequently; verify current requirements with qualified legal counsel in your jurisdiction.

Last Updated: March 28, 2026

Originally published in Spanish on derechoartificial.com. Adapted for the US audience by James Okafor.

Decided

Medal v. Amazon.com Services, LLC

Case No. 2:23-cv-01975-JHC
W.D. Wash.
February 27, 2026
AI-Assisted Legal Research

Key Issue

Platform liability for AI-generated citations; Rule 11 sanctions; primary jurisdiction doctrine

Key Takeaways

  • Federal courts are actively addressing AI-generated fake citations in court filings, establishing precedent for professional responsibility in the age of generative AI.

  • Rule 11 sanctions apply when attorneys fail to verify AI-generated legal citations in primary sources before filing—delegating research to AI without supervision is not a defense.

  • The primary jurisdiction doctrine does not shield defendants from litigation merely because an agency has announced future regulatory reforms.

  • The presumption against retroactivity undercuts arguments that anticipated regulatory changes justify staying proceedings: absent express authorization, a future rule will not apply to already-accrued claims.

  • Courts are developing emerging standards requiring disclosure of AI use in legal research and mandatory verification protocols.

Introduction

On February 27, 2026, Judge John H. Chun of the United States District Court for the Western District of Washington issued a significant ruling in Medal v. Amazon.com Services, LLC, Case No. 2:23-cv-01975-JHC. The case presents two issues of increasing practical significance for US legal professionals.

First, the court addressed the use of unsupervised large language models (LLMs) in preparing court filings and the professional and procedural responsibilities that follow. Second, the court analyzed the limits of the primary jurisdiction doctrine as a mechanism for staying consumer protection litigation pending future regulatory reforms. The court denied Amazon's motion to stay, concluding that neither the Syntek factors nor efficiency considerations warranted deference to the FDA.

Parallel to this ruling, the court warned—without imposing sanctions at this stage—that it would address separately the inclusion in the complaint of a non-existent legal citation generated by an LLM. This warning signals that courts nationwide are developing frameworks for addressing AI-generated content in legal practice.


Background of the Case

The Parties and Claims

Medal represents a class of consumers who purchased dietary supplements through Amazon's platform. The plaintiffs allege that Amazon violated federal food labeling requirements under the Dietary Supplement Health and Education Act (DSHEA) by failing to include required health notices "on each panel or page" of supplement labels as mandated by 21 C.F.R. § 101.93(d).

The AI-Generated Citation

Plaintiffs' counsel cited "21 U.S.C. § 343(r)(6)(D)" as the statutory basis for the "prominent display" requirement. The court found that this subsection does not exist. In a separate filing (Dkt. #125), counsel acknowledged that the non-existent citation resulted from "insufficiently supervised use of a large language model." The court characterized the inclusion of this apparent hallucination as "unacceptable" and announced it would address the matter in a separate order.

This incident follows the precedent established in Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023), where two attorneys were sanctioned under Federal Rule of Civil Procedure 11 for incorporating entirely fabricated case citations generated by ChatGPT into court filings.


Legal Framework: Professional Responsibility and AI

Rule 11 and the Duty of Verification

Federal Rule of Civil Procedure 11(b) requires that an attorney who signs a filing certify that, "to the best of the person's knowledge, information, and belief formed after an inquiry reasonable under the circumstances," the legal contentions "are warranted by existing law or by a nonfrivolous argument for extending, modifying, or reversing existing law."

This duty of reasonable inquiry is not satisfied by delegating legal research to an AI tool without subsequent verification against primary sources. The ABA's Formal Opinion 512 (2024) on generative AI addresses this issue directly. While the opinion does not prohibit LLM use, it establishes that the professional competence required by Model Rule 1.1 obligates attorneys to understand the material risks of any technology they use, including the known tendency of generative AI systems to produce plausible but fabricated legal citations.

Supervision of AI-generated results is not merely a best practice—it is an integral component of the duty of competence. In Medal, the violation was aggravated by the fact that the non-existent subsection was cited repeatedly throughout the operative complaint. The court noted, with some leniency, that the error did not affect its analysis on the merits. This mitigating circumstance is specific to this case and should not be interpreted as a signal that LLM hallucinations in court filings are harmless.

The Primary Jurisdiction Doctrine

Primary jurisdiction is a prudential doctrine that allows federal courts to refer issues to an administrative agency with specialized regulatory authority when doing so would serve the goals of uniform interpretation and expert analysis. Syntek Semiconductor Co. v. Microchip Tech. Inc., 307 F.3d 775 (9th Cir. 2002), identifies four relevant factors:

  1. Whether a previously undecided question of law must be resolved
  2. Whether Congress has committed the issue to the agency's jurisdiction
  3. Whether the issue arises within a comprehensive regulatory scheme
  4. Whether specialized expertise or uniformity of application is required

Astiana v. Hain Celestial Grp., Inc., 783 F.3d 753 (9th Cir. 2015), adds that efficiency is the decisive factor: the doctrine should not apply when it would result in "unnecessary delay in resolving claims."


Court's Reasoning

On the AI-Generated Citation

The court declined to impose sanctions at this stage but expressly reserved the issue for separate resolution. The warning signals that courts are developing frameworks for addressing AI-generated content in legal practice, including potential:

  • Disclosure requirements for AI use in legal research
  • Certification requirements by signing attorneys
  • Economic sanctions for supervision failures
  • Referrals to state bar associations

Several districts have issued standing orders requiring disclosure when AI tools are used in preparing court filings (N.D. Tex., D. Colo., among others).

On Primary Jurisdiction

Factors 1 & 2: The court concluded that the first two Syntek factors counsel against a stay. The central task—comparing the text of 21 C.F.R. § 101.93(d) with the labels as they appeared on Amazon's platform—is an ordinary judicial function. Ninth Circuit courts routinely perform this comparison without agency involvement.

Factor 3: Conceded. Dietary supplement labeling falls within the FDA's comprehensive regulatory framework.

Factor 4: The court rejected Amazon's argument that the imminent reform of § 101.93(d) would create a conflict requiring agency intervention. Under the presumption against retroactivity, Landgraf v. USI Film Prods., 511 U.S. 244 (1994), federal statutes and regulations do not apply retroactively absent express Congressional authorization. The FDA's December 2025 letter announcing its intent to begin rulemaking did not indicate that any new regulation would apply retroactively.

Efficiency: The case has been pending since 2023. Without a precise timeline for FDA rulemaking—and given that APA rulemaking typically extends over years—a stay would impose substantial delay on plaintiffs' already-accrued claims.


Implications for Legal Professionals

For Litigation Attorneys

On AI-Assisted Research: Any legal citation—whether normative or jurisprudential—generated with AI assistance must be independently verified against primary authoritative sources (Westlaw, Lexis, or the official U.S. Code repository) before inclusion in any court filing. LLM output is a starting point for research, not a deliverable ready for filing.
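As an illustration of this verification step, the following Python sketch (hypothetical, not drawn from the court record or any real firm's workflow) extracts statutory and regulatory citations from a draft filing and emits a checklist of items that must each be confirmed in a primary source before the filing is signed:

```python
import re

# Matches citations such as "21 U.S.C. § 343(r)(6)(D)" or "21 C.F.R. § 101.93(d)".
CITATION_RE = re.compile(
    r"\b(\d+)\s+(U\.S\.C\.|C\.F\.R\.)\s+§\s*([\d.]+(?:\(\w+\))*)"
)

def extract_citations(draft_text: str) -> list[str]:
    """Return every statutory/regulatory citation found in a draft filing."""
    return [
        f"{m.group(1)} {m.group(2)} § {m.group(3)}"
        for m in CITATION_RE.finditer(draft_text)
    ]

def verification_checklist(draft_text: str) -> list[str]:
    """Turn each unique citation into a line item for manual primary-source review."""
    return [
        f"VERIFY against primary source: {c}"
        for c in sorted(set(extract_citations(draft_text)))
    ]

draft = (
    "Defendant violated 21 C.F.R. § 101.93(d). "
    "Plaintiffs also rely on 21 U.S.C. § 343(r)(6)(D)."
)
for item in verification_checklist(draft):
    print(item)
```

A tool like this can only surface citations for review; it cannot tell a real provision from a hallucinated one. The human verification it prompts is the step Rule 11 actually requires.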

On Supervision Protocols: Law firms that have adopted AI-assisted research workflows must implement formal quality control protocols requiring validation by a licensed attorney—not a paralegal or unsupervised associate—of the existence and accuracy of all citations before filing.
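One way a firm might record that validation is a pre-filing checklist in which every citation must carry the name of the verifying attorney and the primary source consulted. The sketch below is purely illustrative (the `CitationCheck` and `FilingChecklist` names are invented for this example):

```python
from dataclasses import dataclass, field

@dataclass
class CitationCheck:
    citation: str
    verified_by: str = ""      # licensed attorney who confirmed the citation exists
    primary_source: str = ""   # e.g. "Westlaw", "Lexis", "uscode.house.gov"

    @property
    def cleared(self) -> bool:
        # A citation clears only with both a named verifier and a named source.
        return bool(self.verified_by and self.primary_source)

@dataclass
class FilingChecklist:
    checks: list[CitationCheck] = field(default_factory=list)

    def ready_to_file(self) -> bool:
        """A filing clears only when every citation has been individually verified."""
        return bool(self.checks) and all(c.cleared for c in self.checks)

checklist = FilingChecklist([CitationCheck("21 C.F.R. § 101.93(d)")])
print(checklist.ready_to_file())   # False: nothing verified yet
checklist.checks[0].verified_by = "A. Attorney"
checklist.checks[0].primary_source = "Westlaw"
print(checklist.ready_to_file())   # True
```

The design choice worth noting is that an empty checklist is not "ready to file": silence about citations is treated as a failure of the protocol, not a pass.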

On Rule 11 Exposure: Courts are increasingly willing to impose economic sanctions, disclosure requirements, and in severe cases, bar referrals for AI-related filing errors. Defense counsel should advise clients that improper AI use creates litigation risk that may exceed any procedural advantage gained.

For Corporate Counsel

On Compliance: Until 21 C.F.R. § 101.93(d) is formally modified through notice-and-comment rulemaking, the "each panel or page" requirement remains enforceable. Companies should not interpret agency signaling as a compliance exemption.

On Regulatory Monitoring: The Medal ruling signals that courts will not stay proceedings merely because an agency has announced regulatory reform intentions. Corporate compliance programs must continue to operate under current standards regardless of anticipated changes.


What Medal v. Amazon Means for Practice

For AI users in legal research: The Medal warning is a watershed moment. Courts are no longer treating AI hallucination as a footnote issue—they are developing formal frameworks for accountability.

For defense attorneys: The primary jurisdiction doctrine provides no shield against litigation based on anticipated regulatory changes. Current compliance obligations remain in force until formally modified.

For the profession: The ABA Opinion 512 framework—understanding AI risks as a component of competence—is becoming the baseline standard. Firms should formalize AI supervision protocols before courts impose mandatory requirements.



Legal Citation

Medal v. Amazon.com Services, LLC, No. 2:23-cv-01975-JHC (W.D. Wash. Feb. 27, 2026) (Order Denying Motion to Stay, Dkt. #128)

Case Name: Medal v. Amazon.com Services, LLC
Case Number: No. 2:23-cv-01975-JHC
Court: W.D. Wash.
Date: February 27, 2026
Document: Order Denying Motion to Stay
Docket No.: Dkt. #128

About the Author

James Okafor is a specialist in regulatory risk analysis and automated compliance. A former fintech regulatory consultant, he now writes about how AI systems interpret complex regulatory frameworks. Based between London and Lagos.


This analysis is based on publicly available court filings and does not constitute legal advice.

Related Coverage

  • Mata v. Avianca: AI hallucinations and Rule 11 sanctions
  • Jane Doe v. xAI: Grok, NCII, Section 230, and AI liability