USA v. Farris: Professional Responsibility and AI Generative Tools in Legal Practice
Disclaimer
This analysis is for educational purposes only and does not constitute legal advice. The information provided is general in nature and may not apply to your specific situation. Laws and regulations change frequently; verify current requirements with qualified legal counsel in your jurisdiction.
Last Updated: April 7, 2026
Introduction: When Generative AI Becomes a Liability
United States v. Farris, decided by the Sixth Circuit Court of Appeals on April 3, 2026, represents a watershed moment in American jurisprudence addressing the ethical obligations of attorneys who deploy generative AI without adequate verification protocols. The per curiam opinion, issued by a panel of Judges Clay, Gibbons, and Hermandorfer, bypasses the substantive merits of Farris's appeal entirely, focusing instead on sanctioning counsel Steven N. Howe for using Westlaw's CoCounsel platform to draft appellate briefs without verifying the legal authorities cited.
What distinguishes this case is that it does not involve AI hallucinations in the classical sense—fabricated authorities cited as real. Instead, Howe's briefs distort authentic legal authorities, misrepresenting holdings and manipulating quotations to support propositions those cases did not sustain. This variety of error may be more dangerous than pure fabrication because it is harder to detect and more likely to deceive courts and counsel.
I. The Facts: How the Court Discovered AI-Assisted Drafting
The Criminal Conviction and Appeal
John C. Farris was convicted in the U.S. District Court for the Eastern District of Kentucky on drug trafficking charges. At sentencing, the trial judge imposed a two-level enhancement under U.S.S.G. § 3B1.1(c) for Farris's leadership role in the conspiracy. Farris appealed this enhancement, and the Sixth Circuit appointed Steven N. Howe as appellate counsel under the Criminal Justice Act, 18 U.S.C. § 3006A.
The First Red Flag: "CoCounsel Skill Results"
The appellate panel's scrutiny of Howe's briefs began with an innocuous detail: the principal brief's filename was "CoCounsel Skill Results." CoCounsel is Thomson Reuters' proprietary AI platform integrated into Westlaw. This file designation alone prompted the court to investigate whether the briefs had been generated, rather than written, by human counsel.
Three Fatal Citations
Upon substantive review, the court identified three problematic legal citations:
Citation One: The brief cited U.S.S.G. § 3B1.1 cmt. n.1 for the proposition that "[m]ere presence or knowledge of the offense is not sufficient to make a person a participant." The quotation does not appear anywhere in the Commentary.
Citation Two: The brief cited United States v. Washington, 715 F.3d 975 (6th Cir. 2013), for the statement that "simply facilitating the offense without exercising decision-making authority is insufficient." The citation misrepresents the Washington holding—the Sixth Circuit upheld the enhancement in that case, rather than reversing it.
Citation Three: The brief cited United States v. Anthony, 280 F.3d 694 (6th Cir. 2002), claiming the court vacated the enhancement because "[t]here was no evidence [the defendant] directed or supervised anyone else." Although the court did vacate the enhancement in Anthony, it did so on narrow technical grounds regarding counting methodology. Moreover, the defendant in Anthony had conceded his supervisory role, contradicting the implication in Howe's brief that no supervisory conduct occurred.
The Perverse Nature of These Errors
These are not fictional authorities. Washington and Anthony are real Sixth Circuit decisions with authentic holdings. Howe's citations do not invent law; they distort it. This hybrid error, citing real cases but misrepresenting their content, is arguably more insidious than classical hallucination: the non-existence of a fabricated opinion is easy to confirm, while a misstated holding can be exposed only by reading the real case.
II. The Show-Cause Order and Howe's Admission
The Court's Investigation
On February 23, 2026, the Sixth Circuit issued a show-cause order requiring Howe to:
- Provide verified copies of all cited authorities.
- Explain discrepancies between the briefs and the actual authorities.
- Identify the author(s) of the briefs.
- Disclose whether generative AI was used.
- Describe the citation-checking processes.
- Clarify whether AI had been used in district court filings.
Howe's Candid Response
Remarkably, Howe's response was forthright. He admitted:
- He directed a staff member to upload district court documents to CoCounsel.
- He spent six hours revising the AI-generated draft of each brief.
- He was "not familiar" with CoCounsel's functioning.
- The three inaccurate quotations were a product of the AI, did not appear in any legal source, and misrepresented the holdings of Washington and Anthony.
- He accepted full responsibility for failing to verify the output before filing.
Mitigating Arguments (and Why They Failed)
Howe argued for leniency based on:
- First use: This was his first deployment of CoCounsel for appellate briefing.
- Recent acquisition: The firm acquired CoCounsel in August 2025, after district court proceedings concluded.
- Clean record: Thirty-five years of practice with no prior disciplinary history.
The court acknowledged these factors but rejected them as insufficient. The opinion notes that unfamiliarity with a tool is not an excuse; it is evidence of negligence. A lawyer who incorporates new technology into practice bears a heightened, not diminished, obligation to understand how it functions and where it fails.
III. The Applicable Law: Professional Responsibility Rules and Generative AI
Rule 1.1: Competence and Technological Knowledge
ABA Model Rule 1.1 requires attorneys to provide "competent representation," which demands "the legal knowledge, skill, thoroughness and preparation reasonably necessary for the representation." Comment 8 explicitly directs lawyers to keep abreast of "changes in the law and its practice, including the benefits and risks associated with relevant technology."
The Sixth Circuit interpreted this to impose an affirmative duty to understand the AI systems one deploys. An attorney cannot claim competence while remaining ignorant of the tools used to produce client work.
Rule 3.3: Candor Toward the Tribunal
Rule 3.3 prohibits presenting false evidence or legal authority to a court. The rule is not limited to intentional deception; it demands affirmative candor. When an attorney presents citations to authorities that the attorney has not verified, knowing the generating tool is prone to error, that conduct violates Rule 3.3.
As the court emphasizes: "That Howe's briefs cited real legal authorities—as opposed to 'hallucinations' featuring fictitious cases—does not absolve him." Real cases misrepresented are as much a violation as fictitious cases fabricated.
Rule 5.3: Responsibility for Nonlawyer Assistants
Rule 5.3 requires attorneys to "make reasonable efforts to ensure" that nonlawyer assistants' conduct complies with professional obligations. Howe delegated the handling of AI output to unlicensed administrative staff, and his six hours of personal review, which failed to catch three serious citation errors, did not cure that delegation.
Formal Opinion 512 and the ABA Task Force on AI
In July 2024, the ABA Standing Committee on Ethics and Professional Responsibility issued Formal Opinion 512, addressing generative AI in legal practice. Key recommendations include:
- Understand the tool: Don't deploy AI systems without knowing their limitations and risks.
- Verify output: Never assume AI-generated content is accurate without independent verification.
- Maintain supervision: Delegating AI output to non-lawyers is insufficient; supervision requires lawyer engagement.
- Consider disclosure: Attorneys may need to inform clients of significant AI use.
The 2025 ABA Task Force Report on Law & AI reinforces these principles, noting that technology does not displace professional responsibility; it amplifies it.
IV. Analysis Under the IRAC Framework
Issue
Do attorneys violate professional conduct rules when they deploy generative AI for substantive legal work without implementing adequate verification processes for the citations and legal authorities presented to the court?
Rule
Professional responsibility rules, particularly Rule 1.1 (competence), Rule 3.3 (candor), and Rule 5.3 (supervision), establish affirmative obligations that do not evaporate when technology is involved. These rules require:
- Understanding the limitations of tools used.
- Verifying all citations and propositions presented to courts.
- Maintaining personal or qualified lawyer supervision over outputs.
- Complying with traditional duties of diligence regardless of technological mediation.
Additionally, constitutional law (McCoy v. Court of Appeals of Wisconsin, 486 U.S. 429) and contemporary federal jurisprudence (Fletcher v. Experian Information Solutions, Inc., 168 F.4th 231) establish that citation accuracy is not a procedural formality but a substantive professional obligation.
Application
Violation of Rule 1.1
Howe violated Rule 1.1 by utilizing CoCounsel without understanding its functioning or limitations. The court emphasizes that new technology does not reduce professional obligations; it increases them. An attorney incorporating a powerful generative system into practice has an affirmative duty to educate himself about the system's risks.
Howe's admission that he was "not familiar" with CoCounsel is disqualifying. How can counsel provide competent representation using tools he does not understand?
Violation of Rule 3.3
Howe violated Rule 3.3 by presenting legal authorities to the court without verifying their accuracy. The briefs contained false quotations and misrepresented holdings. The fact that the underlying cases were real does not absolve Howe; if anything, it demonstrates negligence—the authorities existed and should have been verified.
Violation of Rule 5.3
Howe violated Rule 5.3 by delegating AI output supervision to administrative personnel. The Rule requires that supervising attorneys, not paralegals or administrative staff, be responsible for ensuring work product compliance with professional obligations. Six hours of personal review that failed to identify three serious errors does not satisfy this standard.
Conclusion
Howe's conduct constitutes clear violations of Rules 1.1, 3.3, and 5.3, and implicitly violates principles of constitutional law requiring citation accuracy. The violations were not technical or marginal; they resulted in material misrepresentations of law presented to the appellate court.
V. Sanctions Imposed by the Sixth Circuit
Denial of Compensation
Howe will receive no compensation under the Criminal Justice Act for his appellate work. For appointed counsel, this represents not merely an economic loss but also professional reputational damage. The court found the misconduct severe enough to warrant forfeiture of compensation despite Howe's time investment.
Referral to Disciplinary Authorities
The court ordered:
- Notification to the Chief Judge of the Sixth Circuit for potential disciplinary proceedings under Local Rule 46.
- Notification to the Chief Judge of the Eastern District of Kentucky.
- Referral to the Disciplinary Clerk of the Kentucky Bar Association.
This tripartite notification makes formal bar discipline almost inevitable. Howe faces potential suspension or disbarment.
Removal and Replacement
Howe was removed from representing Farris. The court:
- Appointed new appellate counsel under the Criminal Justice Act.
- Struck the briefs Howe filed to prevent reliance on them.
- Reset the briefing schedule to allow new counsel to file substantive briefs.
VI. Doctrinal Implications: Professional Responsibility in the Age of AI
Competence Now Includes Technological Literacy
Traditionally, Rule 1.1 competence meant substantive legal knowledge. The Farris decision extends this to competence regarding tools. A lawyer must now understand not only the law but the technological systems used to practice it. This represents a significant expansion of what "competence" entails.
Professional Responsibility Cannot Be Technologically Mediated Away
When an attorney uses AI, responsibility does not shift to the AI vendor. Westlaw is not liable for Howe's negligence; Howe bears full responsibility. The intermediation of technology does not create a responsible third party; it increases the attorney's obligation to supervise and verify.
The Risk Hierarchy: Real Cases Distorted May Be Worse Than Fictional Cases Invented
One might expect that citing real authorities, even if misrepresented, is preferable to inventing entirely fictional cases. Farris suggests otherwise. Distorting real holdings is more dangerous because courts are less likely to detect the distortion. The case exists; its actual holding seems merely a matter of interpretation, not fabrication. This makes attorney diligence all the more critical.
Supervision Cannot Be Delegated to Non-Lawyers
Large law firms commonly distribute work among partners, associates, and support staff. Farris establishes that when AI output is involved, meaningful supervision requires lawyer engagement. Delegating to administrative personnel does not satisfy Rule 5.3.
VII. Comparative Jurisprudence: Emerging Pattern of AI-Related Misconduct
The Sixth Circuit references Whiting v. City of Athens, 2026 WL 710568, another recent case involving AI-generated legal error. This suggests a pattern: as AI tools proliferate in legal practice, courts are encountering AI-related violations with increasing frequency. The profession is at an inflection point where traditional rules, written before generative AI existed, must be reinterpreted to address novel risks.
Other professions regulated for competence—medicine, engineering, accounting—are confronting parallel issues. The consensus emerging is that technology does not displace professional judgment; it requires heightened scrutiny.
VIII. Critical Analysis: Was Howe Unfairly Punished?
A balanced assessment must acknowledge the counterargument: Howe relied on a tool marketed by a trusted provider (Thomson Reuters/Westlaw) as suitable for legal professionals. One could reasonably expect such a tool to generate reliable work product. The sanctions imposed—forfeiture of compensation, disciplinary referral, removal from the case—may be disproportionate for a first-time transgression.
The court's response, implicit in the opinion, is that trust in the provider does not displace attorney diligence. Westlaw's reputation does not diminish the obligation to verify output; that obligation remains absolute. Furthermore, Howe admitted unfamiliarity with the tool, suggesting he deployed it recklessly, without understanding how it functioned. An attorney cannot claim surprise at errors from a system he chose not to understand.
Whether this balance is just remains open to reasonable disagreement. What is clear is the doctrinal trajectory: courts will not tolerate negligent reliance on AI in contexts where legal accuracy determines outcomes affecting liberty, property, or rights.
IX. Practical Implications and Recommended Protocols
For Individual Attorneys
- Do not use generative AI for substantive legal work without understanding the tool. Generative AI is prone to systematic errors, including hallucinations and distorted citations. If you must use it, implement verification protocols rigorous enough to catch both.
- Verify every citation personally. Not through staff review or spot-checking, but through personal verification of each authority presented to a court.
- Disclose material AI use to clients. Consider whether informed consent is necessary when generative AI plays a significant role in client work.
- Obtain training on the AI systems you deploy. "Unfamiliarity" is negligence, not an excuse.
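The personal-verification step above can be supported, though never replaced, by a simple pre-filing checklist generator. The sketch below is hypothetical: the citation pattern covers only F.2d/F.3d/F.4th reporters, and the helper name is illustrative, not a feature of CoCounsel or any real product.

```python
import re

# Hypothetical pre-filing helper: extract federal reporter citations
# (e.g. "715 F.3d 975") and quoted passages from a draft brief so the
# filing attorney can verify each one against the actual source.
CITATION_RE = re.compile(r"\b\d{1,4}\s+F\.(?:2d|3d|4th)\s+\d{1,4}\b")
QUOTE_RE = re.compile(r"\"([^\"]+)\"")

def verification_checklist(brief_text: str) -> list[str]:
    """Return a checklist of items the filing attorney must confirm."""
    items = []
    for cite in sorted(set(CITATION_RE.findall(brief_text))):
        items.append(f"CITATION: {cite} -- pull the opinion, confirm the holding")
    for quote in QUOTE_RE.findall(brief_text):
        items.append(f'QUOTE: "{quote[:60]}" -- locate verbatim in the source')
    return items

brief = ('The brief cited United States v. Washington, 715 F.3d 975 '
         '(6th Cir. 2013), for the statement that "simply facilitating '
         'the offense is insufficient."')
for item in verification_checklist(brief):
    print(item)
```

A checklist like this only surfaces what must be checked; under Farris, the checking itself remains the lawyer's nondelegable duty.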
For Law Firms
- Develop firm-wide AI policies. Specify which tools can be used for which tasks, what verification is required, and who bears responsibility for output.
- Assign qualified attorney supervision. AI output cannot be supervised by paralegals or administrative staff in contexts involving court filings.
- Implement QA protocols. Use citation-checking software, secondary verification, and systematic review before filing documents with courts.
- Provide recurring training. Keep all attorneys current on AI capabilities and limitations.
For Courts and Bar Associations
- Maintain heightened scrutiny of briefs with AI indicators. File names like "CoCounsel Skill Results" or suspicious citation patterns warrant investigation.
- Consider requiring disclosure. Some jurisdictions may require attorneys to disclose material use of generative AI in court filings.
- Coordinate inter-jurisdictional standards. Develop consistent ethical guidance rather than allowing fragmented approaches.
For AI Tool Providers
- Improve accuracy warnings. Current disclaimers are insufficient; state clearly that legal output requires verification.
- Integrate citation validation. Build features that automatically verify quotations against source documents.
- Develop audit trails. Allow courts and bar associations to inspect how briefs were generated.
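Automated citation validation of the kind recommended above can be approximated by normalizing whitespace and checking each quoted passage against the source text, with a fuzzy fallback for minor typesetting differences. This is a minimal sketch under stated assumptions: the sample commentary text, the helper names, and the 0.95 threshold are illustrative, not any vendor's actual implementation.

```python
import difflib
import re

def normalize(text: str) -> str:
    """Collapse whitespace and lowercase so line breaks and casing
    differences do not cause false mismatches."""
    return re.sub(r"\s+", " ", text).strip().lower()

def quote_appears(quote: str, source: str, threshold: float = 0.95) -> bool:
    """True if the quoted passage appears (near-)verbatim in the source.

    Falls back to a sliding fuzzy match so small OCR or typesetting
    differences are not flagged as fabrication."""
    q, s = normalize(quote), normalize(source)
    if q in s:
        return True
    # Slide a window of the quote's length across the source and
    # accept any window whose similarity clears the threshold.
    w = len(q)
    for i in range(0, max(1, len(s) - w + 1), max(1, w // 4)):
        ratio = difflib.SequenceMatcher(None, q, s[i:i + w]).ratio()
        if ratio >= threshold:
            return True
    return False

commentary = ("A 'participant' is a person who is criminally responsible "
              "for the commission of the offense.")
fabricated = "Mere presence or knowledge of the offense is not sufficient"
genuine = "criminally responsible for the commission of the offense"

print(quote_appears(genuine, commentary))     # a real quotation passes
print(quote_appears(fabricated, commentary))  # a fabricated one is flagged
```

A tool built this way could have flagged all three quotations in Howe's briefs before filing, since none appeared in the cited sources.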
X. Conclusion: Professional Responsibility in Transition
USA v. Farris marks a turning point. As generative AI becomes ubiquitous in legal practice, professional responsibility rules—written for a pre-AI era—must be reinterpreted to address new risks. The court's holding is straightforward: the traditional obligations of competence, candor, and supervision do not evaporate because technology is deployed. If anything, they intensify.
For attorneys, the lesson is stark: understand the tools you use, verify every citation presented to courts, and maintain personal oversight of AI-generated work product. For law firms, the imperative is to develop institutional controls ensuring that technological power is matched by professional diligence. For courts and bar associations, the challenge is to maintain professional standards while the profession itself is undergoing technological transformation.
The question is not whether AI will continue advancing in legal practice. It will. The question is whether the profession can adapt its ethical frameworks quickly enough to prevent the kind of harm Farris suffered: delayed justice in a proceeding where he was already represented by counsel with limited resources. Farris suggests that courts will no longer tolerate negligent reliance on AI as an acceptable shortcut.