Agentic AI and Legal Ethics: The New Professional Perimeter
Disclaimer
This analysis is for educational purposes only and does not constitute legal advice. The information provided is general in nature and may not apply to your specific situation. Laws and regulations change frequently; verify current requirements with qualified legal counsel in your jurisdiction.
Last Updated: April 26, 2026
When the Agent Acts Alone: Agentic AI and the New Perimeter of Legal Ethics
The legal profession has always delegated. To associates, to paralegals, to researchers, to expert witnesses. What it has never done — until now — is delegate to a system that acts, reasons, and executes without human intervention between each step. The emergence of agentic artificial intelligence in legal practice is not a quantitative extension of legal software use: it is a qualitative leap that displaces the axes on which professional deontology rests.
This analysis does not aim to reproduce the catalog of risks already circulating in the emerging literature. It aims at something harder: identifying which of those risks genuinely alter the normative structure of professional liability — and which are simply variations on problems law has been solving for decades. The central thesis is that agentic AI — unlike generative AI — does not merely create new ways to breach existing obligations. In specific scenarios, it makes compliance with some of them structurally impossible without prior reconfiguration of the contractual and technical framework of representation.
The analysis builds on Michael D. Murray's Algorithmic Ethics in an Era of Agentic AI Advocacy (2026), the most comprehensive academic systematization published to date on AI's impact on the Model Rules of Professional Conduct and the Model Code of Judicial Conduct. Although the normative scaffolding is Anglo-American, the structural tensions Murray describes translate directly into the European context, where the AI Act, the GDPR, and bar association codes converge with equal — and arguably greater — regulatory intensity.
Technological Competence as a Dynamic Obligation, Not a Declaration
There is an intellectually comfortable temptation: to treat technological competence as a static knowledge requirement that is demonstrated once and periodically refreshed through a continuing education course. Generative AI already strained that conception. Agentic AI destroys it.
Murray formulates this with surgical precision: a lawyer competent in the use of generative AI tools in 2024 may be declared incompetent in 2026 if they have not incorporated into their practice an operational understanding of the agentic systems that by then constitute the sector standard. Competence is not a threshold — it is a function of time and the state of the art.
What this means in concrete terms is more uncomfortable than it appears. Understanding agentic AI does not mean knowing that it exists and that it can make mistakes. It requires understanding its operational logic: how an agent decomposes an objective into subtasks, what data sources it consults autonomously, what other systems it interacts with, what parameters define the limits of its delegated authority, and — crucially — under what conditions its reasoning chain can diverge without the human operator detecting it. Deploying an agent without that knowledge is not technical negligence: it is a deontological violation per se, regardless of the result the agent produces.
In the European context, this requirement has a direct normative anchor. The AI Act classifies numerous legal applications as high-risk systems, triggering obligations of human oversight, traceability, and transparency that cannot be satisfied without the technical knowledge to which Murray refers. The lawyer who deploys an AI agent in litigation without understanding its architecture not only risks professional malpractice — they may be contributing to non-compliance with regulatory obligations that, by virtue of the AI Act's liability chain, are partially attributable to them as the system's deployer.
The operational distinction Murray draws between "non-privacy-protecting" systems (which learn by default from user data) and "enterprise" or "closed" systems (which contractually guarantee non-use of data for training) is not an auxiliary technical datum. It is a condition of validity for compliance with the duty of confidentiality. A lawyer who enters identified client data into consumer-grade ChatGPT is not taking a calculated risk: they are breaching professional secrecy without knowing it — which is no different, in terms of wrongfulness, from breaching it with full awareness.
The Duty of Candor in the Face of Systematic Hallucination
No problem accounts for more disciplinary sanctions and case law in Murray's analysis than hallucinations: the tendency of large language models to generate content that is linguistically impeccable, contextually plausible, and factually nonexistent. The cases he cites are well known: Mata v. Avianca (2023), the sanctions against K&L Gates and Morgan & Morgan in 2025. But his legal reading goes beyond the disciplinary chronicle.
What is truly relevant is not that the sanctioned lawyers made a serious error. It is that the mechanism producing that error, trusting the model's output without independent verification, is structurally identical to the mechanism producing thousands of smaller errors that never achieve sufficient visibility to generate disciplinary proceedings. Hallucination is not an isolated defect of a specific model at a specific moment: it is a statistical property inherent to current LLMs, and one that, to make matters worse, tends to increase as models become more sophisticated.
The most common defensive argument in these cases — "I didn't know the AI could be wrong in this way" — fails systematically. Not because courts are inflexible, but because the duty of technological competence already includes the duty to know this limitation. Ignorance about hallucinations does not mitigate liability; it constitutes it. The ignorance itself is the violation.
The dimension Murray adds — and which is genuinely new — is that of the agent that acts and produces documents without human intervention at each step. The candor violations documented so far have all derived from an affirmative act: a lawyer who reviews a filing and submits it without verifying citations. Agentic AI introduces the possibility of a violation by pure omission: the agent drafts and submits a procedural document — a motion for extension, for example — without the lawyer having read the final version line by line. If that document contains a fabricated citation, the violation occurs not through a decision to submit something unverified, but through an absence of decision: the lawyer did nothing, and the agent acted.
This is where the analysis becomes legally complex in terms of attribution. Professional deontology has always required lawyers to adopt an active role with respect to their work. Delegation to an autonomous agent does not eliminate that role — it displaces it. The lawyer who deploys an agent for procedural case management has no less obligation to review court documents than before; they have the same obligation, but now applied to the output of a system capable of producing them at a speed and volume radically exceeding their individual review capacity. The solution is not to abandon the agent: it is to design the supervision process — including mandatory stop points, automated checks, and human review thresholds — prior to deployment, and to document that design as part of the firm's professional protocol.
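By way of illustration only, the following sketch shows what one such mandatory stop point might look like in practice: agent-drafted filings are held until automated citation checks pass and a named attorney signs off. This is not Murray's proposal or any vendor's product; every identifier (DraftFiling, submission_gate, the verified-citation set) is a hypothetical placeholder.

```python
from dataclasses import dataclass, field

@dataclass
class DraftFiling:
    """An agent-produced draft together with the authorities it relies on."""
    document_id: str
    text: str
    citations: list[str] = field(default_factory=list)

def citation_blockers(draft: DraftFiling, verified_citations: set[str]) -> list[str]:
    """Automated check: list the reasons this draft must stop at the review gate."""
    reasons = []
    unverified = [c for c in draft.citations if c not in verified_citations]
    if unverified:
        reasons.append(f"{len(unverified)} citation(s) not confirmed against a verified source")
    if not draft.citations:
        reasons.append("draft cites no authority; manual check required")
    return reasons

def submission_gate(draft: DraftFiling, verified_citations: set[str], attorney_approved: bool) -> bool:
    """Hard stop: nothing is filed without passing automated checks AND human sign-off."""
    blockers = citation_blockers(draft, verified_citations)
    if blockers:
        print(f"[HOLD] {draft.document_id}: " + "; ".join(blockers))
        return False
    if not attorney_approved:
        print(f"[HOLD] {draft.document_id}: awaiting attorney sign-off")
        return False
    print(f"[RELEASE] {draft.document_id}: cleared for filing")
    return True

# Example: a motion for extension drafted by the agent is held, not filed,
# until a lawyer has actually reviewed and approved it.
draft = DraftFiling("motion-ext-001", "...", citations=["Mata v. Avianca (S.D.N.Y. 2023)"])
submission_gate(draft, verified_citations={"Mata v. Avianca (S.D.N.Y. 2023)"}, attorney_approved=False)
```

The design choice that matters is the default: the gate holds the document unless both conditions are met, so the agent's speed cannot outrun the lawyer's duty of review, and the hold-or-release decisions themselves become part of the firm's documented protocol.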
Expanded Confidentiality: The Risk of Autonomous Action
The duty of confidentiality with respect to generative AI had a relatively clear structure: the lawyer inputs data into the system, and the risk is that data being used for training or retrieved by third parties. The risk vector is the lawyer's deliberate act of entering data.
Agentic AI displaces that vector in a way Murray describes with precision: the risk no longer comes from what the lawyer types into the prompt. It comes from what the agent does autonomously in its process of fulfilling the assigned objective. An agent tasked with conducting due diligence on a target company may access the firm's document management system, retrieve client files related to prior transactions in the same sector, share that information with an external API for analysis, and produce a report embedding, within it, confidential data from third parties — all without the lawyer having authorized any of those individual steps.
This is not a speculative hypothesis. It is the logical consequence of agent architecture: their utility lies precisely in being able to act autonomously across external systems. If that autonomy is not technically bounded by specific guardrails — access restrictions to particular repositories, prohibitions on sharing data with unauthorized APIs, human review thresholds before external communication — the agent can produce confidentiality breaches the lawyer will never detect, because there was never a moment of conscious decision that could have been intercepted.
The practical implication is demanding: compliance with the duty of confidentiality before agentic AI requires prior technical work — defining agent permissions, accessible systems, actions executable without oversight, and those requiring human approval. That work is neither optional nor delegable to the technology vendor: it is part of the lawyer's deontological obligation as the system's deployer within the representation. Under the AI Act and the GDPR, it also carries an administrative regulatory dimension that layers sanctions on top of disciplinary consequences.
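Purely as an illustration of what that prior technical work can look like, the sketch below expresses an agent permission policy as a small, auditable allow-list with a default-deny rule. It assumes a hypothetical in-house deployment; the repository names, hosts, and action labels are invented for the example and do not correspond to any real product.

```python
# Hypothetical agent permission policy: everything not explicitly allowed is denied.
AGENT_POLICY = {
    "allowed_repositories": {"matter-2026-0142"},          # only the current matter's files
    "blocked_repositories": {"firm-wide-dms", "billing"},  # never the whole document system
    "allowed_external_hosts": {"api.contracted-llm.example"},
    "require_human_approval_for": {"send_email", "file_document", "call_external_api"},
}

def authorize(action: str, target: str, approved_by_human: bool = False) -> bool:
    """Return True only if the requested agent action falls inside the policy."""
    if action == "read_repository":
        return (target in AGENT_POLICY["allowed_repositories"]
                and target not in AGENT_POLICY["blocked_repositories"])
    if action in AGENT_POLICY["require_human_approval_for"]:
        if action == "call_external_api" and target not in AGENT_POLICY["allowed_external_hosts"]:
            return False
        return approved_by_human
    return False  # default deny

# The due diligence agent tries to pull files from prior transactions in the same sector:
assert authorize("read_repository", "firm-wide-dms") is False
# Sending data to the contracted analysis API still requires a human decision first:
assert authorize("call_external_api", "api.contracted-llm.example", approved_by_human=True) is True
```

The point of the exercise is not the code but the artifact it produces: a written, reviewable policy that can be shown to a client, a bar authority, or a regulator as evidence that the deployer defined the agent's perimeter before letting it act.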
Billing as Symptom of a Structural Business Model Crisis
The question of billing might appear peripheral relative to the great deontological principles. Murray demonstrates that it is, in fact, the most faithful mirror of the structural contradiction AI introduces into the legal profession.
The consensus he describes is unanimous: no ethics authority, in any jurisdiction, has permitted a lawyer to bill the client for hours "saved" by AI. If a task previously requiring ten hours is now completed in two, the client pays for two hours. This principle is coherent with the logic of hourly billing, in which the fee remunerates actual time expended. AI simply compresses that time.
Agentic AI, however, introduces a dimension the existing consensus cannot resolve without reconfiguring the business model. If a lawyer spends thirty minutes defining an agent's objective, and the agent then works twenty hours autonomously to produce a complex due diligence report, how is that work billed? Thirty minutes of lawyer time plus agent cost? Twenty hours at associate rates? A fixed fee tied to the transaction value?
The correct answer, in deontological terms, cannot be any of the three without prior agreement with the client. And that prior agreement requires the client to understand what they are acquiring: not a person's time, but the analytical capacity of an autonomous system supervised by a person. When value delivered becomes completely disconnected from human work time, hourly billing loses its justification. Pressure toward fixed fees, value-based pricing, or subscription structures is not a market trend: it is a deontological consequence. Fee agreements that do not anticipate it are inherently opaque to the client and, to that extent, potentially in breach of the duty of communication and the requirement of reasonable fees.
Supervision in Layers: The Agent, the Associate, and the Partner
Murray's analysis of supervision before agentic AI is particularly useful for articulating responsibilities in a hierarchically structured firm. Agentic AI, as a "nonlawyer assistant" under Rule 5.3, is subject to the supervising lawyer's oversight duty. But when that deployment is performed by a junior associate — which is the most common scenario in practice, as associates tend to adopt new tools first — a double layer of liability appears: the partner must supervise both the associate and, indirectly, the agent the associate has deployed.
This scenario introduces two technical concepts that Murray integrates into the deontological analysis compellingly: observability and traceability. Observability is the capacity to see what the agent is doing while it does it — what steps it follows, what data it retrieves, how it structures its reasoning. Traceability — or debugging — is the capacity to reconstruct that process afterward, identifying at which point the agent diverged, failed, or produced an error. Without observability and traceability, the agent is a black box, and supervision becomes fictitious: the lawyer can review the output but cannot evaluate whether the process generating it was correct, whether it accessed unauthorized sources, or whether it incorporated incorrect information at an intermediate step that the final output does not reflect.
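In engineering terms, the minimal substrate for both capabilities is an append-only record of the agent's actions: each step is written as it happens (observability) and can be replayed afterward (traceability). The sketch below is a hypothetical illustration, not any vendor's telemetry API; the file name, run identifiers, and step labels are invented.

```python
import json
import time
from pathlib import Path

TRACE_FILE = Path("agent_trace.jsonl")  # append-only log, one JSON event per line

def log_step(run_id: str, step: str, detail: dict) -> None:
    """Observability: record a single agent action as it happens."""
    event = {"ts": time.time(), "run_id": run_id, "step": step, **detail}
    with TRACE_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")

def reconstruct(run_id: str) -> list[dict]:
    """Traceability: replay every recorded step of a given run, in order."""
    if not TRACE_FILE.exists():
        return []
    events = [json.loads(line) for line in TRACE_FILE.read_text(encoding="utf-8").splitlines()]
    return [e for e in events if e["run_id"] == run_id]

# An agent retrieving a document and querying a database leaves a trail that the
# supervising associate, and the partner above them, can audit after the fact.
log_step("dd-2026-07", "retrieve_document", {"source": "matter-2026-0142", "doc": "SPA_draft_v3.docx"})
log_step("dd-2026-07", "database_query", {"query": "litigation history of target company"})
for event in reconstruct("dd-2026-07"):
    print(event["step"], {k: v for k, v in event.items() if k not in ("ts", "run_id", "step")})
```

Without something equivalent to this trail, supervision is confined to the final output: exactly the black-box scenario described above.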
The practical consequence is demanding: not all agentic systems on the market offer these capabilities at a level sufficient to meet deontological supervision standards. Choosing an agent without observability is not simply a suboptimal technical decision — it may constitute a violation of the supervision duty itself. And that violation is attributable to the firm as an organization, not only to the individual lawyer who chose the tool, which triggers the collective liabilities of managing partners.
The K&L Gates case illustrates this logic clearly: sanctions fell not only on the lawyer who used AI to create the research outline, nor only on the lawyer at the other firm who incorporated it into the brief without verification. They fell on both firms as organizational entities, because neither had implemented the governance protocols the situation required. Absence of policy is not a mitigating factor — it is part of the violation.
Algorithmic Bias as a Judicial Legitimacy Problem
The transition to judicial ethics introduces a variable with no equivalent in lawyer ethics: impartiality is not merely a professional obligation — it is a constitutional guarantee whose violation affects not the professional but the person subject to the decision. When a lawyer uses a biased AI, the harm falls on their client. When a judge uses a biased AI in a conviction, custody, or liberty-deprivation decision, the harm falls on whoever is the object of that decision — someone who may have no way to detect, challenge, or appeal it.
The COMPAS case — the recidivism risk assessment tool that ProPublica's investigation demonstrated falsely flagged Black defendants as high-risk at significantly higher rates than white defendants — is the central analytical landmark of this section. The Wisconsin Supreme Court's decision in State v. Loomis (2016) permitted use of the instrument with written warnings about its limitations, but left unresolved the structurally relevant question: can those warnings be effective if the judge lacks sufficient statistical training to evaluate them, if institutional pressure favors efficiency over skepticism, and if the cognitive biases of automation and anchoring operate below the threshold of conscious decision-making?
Murray notes that AI's introduction into the judicial function transforms the nature of bias challenges. A traditional recusal motion alleges the personal bias of a specific judge. A challenge to an AI-assisted decision alleges the systemic, statistical bias of a tool — which requires expert testimony from data scientists, statisticians, and machine learning engineers currently outside ordinary judicial procedure. The consequence is that the right to a fair trial, in its dimension of effective access to challenge, may be being hollowed out in cases where judicial decisions incorporate recommendations from opaque systems.
The analysis connects directly to the European debate on explainability. GDPR Article 22 prohibits purely automated decisions producing significant legal effects, subject to exceptions. The AI Act, for high-risk systems in the administration of justice, requires effective human oversight and traceability. Both frameworks rest on the same premise underlying Murray's analysis: the legitimacy of a decision affecting fundamental rights cannot be sustained on the opacity of the process generating it — and human supervision that cannot evaluate the system's reasoning is not real supervision.
What the Analysis Cannot Close
There is a question no academic work can currently answer, and which deserves to be asked with intellectual honesty: what is the autonomy threshold beyond which human supervision ceases to be meaningful?
Regulation — European and American — assumes human oversight is a sufficient condition for the legitimacy of AI-assisted decisions. But that assumption has an empirical limit. A lawyer supervising the output of an agent that has processed ten thousand documents, executed two hundred database queries, and made a thousand classification micro-decisions cannot review that entire process in real time. What they review is the final output and, at most, a sample of intermediate reasoning. If the error or bias is embedded in steps not part of that sample, supervision does not catch it.
This does not mean human supervision is irrelevant. It means its efficacy varies with system design, work volume, and the supervisor's technical capabilities. It suggests professional obligations should be modulated according to those variables, rather than operating as a uniform standard applied equally to a document scheduling agent and a complex strategic litigation analysis agent.
Future regulation — both deontological and administrative — will need to address this differentiation. Supervision standards cannot be identical for a system assisting with procedural deadline management and one analyzing transaction risk or recommending litigation strategy. The proportionality principle the AI Act articulates in terms of risk categories must be transposed into professional ethics with the same logic.
Conclusions
- Agentic AI does not create new deontological obligations from nothing: it redefines the operational content of existing obligations (competence, confidentiality, candor, supervision) in ways that render compliance through traditional means insufficient.
- The duty of technological competence is dynamic: it includes operational knowledge of the deployed agent's architecture, data sources, authority limits, and failure mechanisms.
- Confidentiality before agentic AI requires prior technical work (permission definition, guardrails, stop points) that forms part of the lawyer's deontological obligation as the party responsible for deployment.
- Candor violations arising from hallucinations can occur through pure omission when the agent acts without human review at each step, requiring explicit verification protocols prior to deployment.
- Hourly billing is structurally incompatible with full agentic AI use: client transparency and the reasonable-fees requirement demand prior agreements anticipating alternative remuneration tied to value delivered.
- Effective agent supervision requires observability and traceability as minimum technical system capabilities: choosing tools that do not offer them may itself constitute a violation of the supervision duty.
- Algorithmic bias in the judicial function is not a technical problem with a technical solution: it is a democratic legitimacy problem requiring system transparency, judicial education, and adequate procedural tools for challenge.
- The question of the meaningful supervision threshold remains open and must guide future deontological regulation, which cannot operate with uniform standards across the enormous variation in systems and deployment contexts.
Primary Source
This article analyzes and develops the arguments contained in the following academic work, available in open access:
Michael D. Murray, Algorithmic Ethics in an Era of Agentic AI Advocacy: An Analysis of AI's Impact on the Model Rules of Professional Conduct and the Model Code of Judicial Conduct, 16 St. Mary's Journal on Legal Malpractice & Ethics 279 (2026). University of Kentucky, J. David Rosenberg College of Law. Published April 13, 2026.
Available at Digital Commons, St. Mary's University: https://commons.stmarytx.edu/lmej/vol16/iss2/4