Decision&Law AI Legal Intelligence

EU AI Act: What US Lawyers Need to Know

Isla Vinter
March 28, 2026
Tags: EU AI Act, regulation, compliance, cross-border, AI governance, international law


Disclaimer

This analysis is for educational purposes only and does not constitute legal advice. The information provided is general in nature and may not apply to your specific situation. Laws and regulations change frequently; verify current requirements with qualified legal counsel in your jurisdiction.

Last Updated: March 28, 2026

Originally published in Spanish on derechoartificial.com. Adapted for the US audience by Isla Vinter.

Key Takeaways

  • The EU AI Act applies extraterritorially: it covers AI systems placed on the EU market and third-country providers and deployers whose systems' outputs are used in the EU—creating compliance obligations for US companies.

  • The Act classifies AI systems by risk level, with high-risk systems subject to the most stringent requirements including conformity assessments, technical documentation, and human oversight mandates.

  • US companies serving EU clients with AI-powered legal services may fall within the Act's scope and face significant penalties for non-compliance.

  • The Act's prohibition on social scoring and certain real-time biometric identification in public spaces sets limits that may affect US AI companies' EU operations.

  • Understanding the EU framework is essential for anticipating converging US regulatory requirements as Congress and federal agencies develop domestic AI governance standards.

Introduction

The European Union's Artificial Intelligence Act (Regulation (EU) 2024/1689), which entered into force in August 2024, represents the world's most comprehensive regulatory framework for artificial intelligence. For US legal professionals, understanding this regulation is no longer optional—it is essential for advising clients with EU operations, serving EU-based counterparties, or anticipating the direction of US domestic AI governance.

The Act's extraterritorial reach means that companies incorporated or operating in the United States may face direct compliance obligations when their AI systems affect EU residents or are placed on the EU market. This analysis provides a practical guide for US attorneys navigating these obligations.


The Act's Geographic Scope

When the EU AI Act Applies to US Companies

The regulation applies to:

  1. Providers placing AI systems on the EU market or putting them into service in the EU
  2. Deployers of AI systems located in the EU
  3. Providers and deployers in third countries where the AI system's output is used within the EU

This means a US company that:

  • Offers AI-powered legal services to EU-based clients
  • Uses AI tools to process data of EU residents
  • Operates AI systems that affect EU markets or individuals

...may be subject to the Act's requirements.
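The three-pronged scope test above can be sketched as a simple check. This is an illustrative sketch only—the class and field names are hypothetical, and a real scope analysis turns on the facts of each prong:

```python
from dataclasses import dataclass

@dataclass
class AISystemProfile:
    """Hypothetical profile mirroring the three scope prongs above."""
    placed_on_eu_market: bool     # provider places the system on the EU market
    deployer_located_in_eu: bool  # deployer is established in the EU
    output_used_in_eu: bool       # the system's output is used within the EU

def in_scope(profile: AISystemProfile) -> bool:
    """The Act applies if any one of the three prongs is met."""
    return (profile.placed_on_eu_market
            or profile.deployer_located_in_eu
            or profile.output_used_in_eu)

# A US provider with no EU presence whose output reaches EU users:
print(in_scope(AISystemProfile(False, False, True)))  # True
```

Note that the prongs are disjunctive: a company with no EU establishment and no EU market placement can still be caught by the output prong alone.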

Definitions Key to Scope Analysis

AI System (Article 3(1)): A machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

Provider (Article 3(3)): A natural or legal person that develops an AI system (or has one developed) and places it on the market or puts it into service under its own name or trademark, whether for payment or free of charge.

Deployer (Article 3(4)): A natural or legal person that uses an AI system under its authority.

For US law firms serving EU-based clients, the critical question is whether the firm is acting as a provider (developing and deploying its own AI tools) or a deployer (using third-party AI tools). Both roles carry distinct obligations.


Risk Classification

The Act establishes a tiered risk-based approach:

Unacceptable Risk: Prohibited Practices (Article 5)

The following AI practices are banned outright:

  • Social scoring: AI systems that evaluate or classify individuals based on social behavior or personal characteristics, leading to detrimental or unfavorable treatment
  • Real-time remote biometric identification in public spaces for law enforcement purposes (with narrow exceptions)
  • Manipulation techniques: Subliminal techniques beyond a person's consciousness, or the purposeful exploitation of vulnerabilities due to age, disability, or social or economic situation
  • Certain uses of biometric categorization for sensitive characteristics

For US companies operating in the EU, these prohibitions set absolute limits regardless of system effectiveness.

High-Risk AI Systems (Annex III)

The Act imposes stringent requirements on high-risk AI systems. Annex III designates the high-risk use cases, including AI systems used in:

  • Employment and worker management
  • Access to essential services (including financial services, insurance)
  • Education and vocational training
  • Law enforcement
  • Migration, asylum, and border control
  • Administration of justice and democratic processes

Critical infrastructure:

  • AI systems in safety components of critical infrastructure

Classifications relevant to legal services include:

  • AI systems intended to assist courts and judicial authorities in researching and interpreting facts and the law
  • AI systems used in alternative dispute resolution where the outcomes produce legal effects

Limited and Minimal Risk

Lower-risk systems face lighter requirements—transparency obligations for limited risk, voluntary codes of conduct for minimal risk.
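The tiered structure can be sketched as an illustrative mapping. The example use cases and tier labels below paraphrase the summary above—they are not the Act's formal wording, and real classification requires analysis against Article 5 and Annex III:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited (Article 5)"
    HIGH = "high-risk (Annex III)"
    LIMITED = "transparency obligations"
    MINIMAL = "voluntary codes of conduct"

# Illustrative, non-exhaustive mapping of example use cases to tiers
EXAMPLE_TIERS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "resume screening for hiring": RiskTier.HIGH,
    "AI assisting a court with legal research": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

print(EXAMPLE_TIERS["resume screening for hiring"].value)  # high-risk (Annex III)
```

The practical point of the tiers is triage: compliance effort concentrates almost entirely on the top two categories.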


Compliance Obligations for High-Risk Systems

For US companies with high-risk AI systems, the Act mandates:

Before Market Placement (Articles 9-13)

Risk management system: Establish, document, implement, and maintain a risk management system throughout the AI lifecycle.

Data governance: Ensure training, validation, and testing datasets meet quality criteria, address potential biases, and maintain appropriate data governance measures.

Technical documentation: Create documentation demonstrating conformity before market placement—maintained for 10 years.

Transparency: Provide instructions for use in plain language, disclose AI interaction capability, and inform deployers of system limitations.

During Operation (Articles 17, 20, 73)

Quality management system: Implement a quality management system proportionate to the size of the provider's organization and the risk profile of the system.

Incident monitoring: Maintain the system's automatically generated logs and report serious incidents to the relevant national market surveillance authorities.

Human oversight: Ensure human oversight measures are in place to prevent or minimize risks to health, safety, or fundamental rights.

Conformity Assessment (Article 43)

High-risk AI systems require a conformity assessment before market placement. For most Annex III categories, providers may self-assess through internal control; third-party assessment by a notified body is required chiefly for biometric systems and for products already covered by EU harmonisation legislation.


Penalties

The Act establishes significant penalties:

| Violation | Maximum Penalty |
|-----------|-----------------|
| Prohibited practices (Article 5) | €35 million or 7% of global annual turnover, whichever is higher |
| Non-compliance with other requirements | €15 million or 3% of global annual turnover, whichever is higher |
| Supplying incorrect, incomplete, or misleading information | €7.5 million or 1% of global annual turnover, whichever is higher |

For a US company with €10 billion in global revenue, a prohibited-practice violation could reach €700 million.
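The penalty tiers operate as a "greater of" rule: the maximum fine is whichever is higher of the fixed cap and the turnover percentage. A minimal arithmetic sketch (function name hypothetical):

```python
def max_fine(global_turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """Maximum administrative fine: the greater of the fixed cap and the
    percentage of global annual turnover ('whichever is higher')."""
    return max(fixed_cap_eur, pct * global_turnover_eur)

# Prohibited-practice tier, €10 billion global turnover:
print(f"€{max_fine(10e9, 35e6, 0.07):,.0f}")   # €700,000,000

# For a smaller company (€100 million turnover) the fixed cap dominates:
print(f"€{max_fine(100e6, 35e6, 0.07):,.0f}")  # €35,000,000
```

Because the rule takes the higher of the two figures, the turnover percentage is what drives exposure for large enterprises, while the fixed cap sets the floor for smaller ones.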


The Act's Interaction with US Law

Extraterritorial Application

The Act does not apply directly to companies operating exclusively in the US—but it applies whenever:

  • AI systems are placed on the EU market
  • AI system outputs are used within the EU
  • Services are provided to EU residents

This creates a de facto compliance obligation for any US company serving EU clients or processing EU residents' data.

GDPR Intersection

The Act works in tandem with the General Data Protection Regulation. AI systems processing personal data must comply with both frameworks:

  • GDPR lawful bases for processing
  • AI Act requirements for high-risk systems
  • Data protection impact assessments under GDPR may serve as documentation for AI Act conformity

Contractual Implications

EU counterparties increasingly require AI compliance representations in contracts. US companies should anticipate:

  • AI Act compliance certifications in vendor agreements
  • Audit rights for AI system conformity
  • Indemnification for AI Act violations
  • Liability caps conditioned on AI system compliance

Practical Guidance for US Practitioners

For Corporate Counsel Advising US Companies with EU Operations

  1. Map AI applications to Act risk categories—focus compliance resources on high-risk systems.

  2. Review vendor agreements for AI Act compliance representations—ensure vendors bear appropriate liability for conformity failures.

  3. Assess data flows from EU residents—AI processing of such data may trigger Act obligations.

  4. Update incident response protocols to include AI Act mandatory reporting timelines.

  5. Monitor enforcement—EU national authorities are setting enforcement priorities that will signal where scrutiny falls first.

For US Attorneys Serving EU-Based Clients

  1. Understand client compliance obligations—EU clients may require AI Act conformity as a contract condition.

  2. Advise on documentation requirements—AI systems used in client service delivery may require technical documentation.

  3. Review AI vendor contracts for AI Act compliance allocations.

  4. Monitor regulatory developments—Member state implementation varies, creating jurisdictional complexity.

Anticipating US Regulatory Convergence

The EU AI Act provides a template for domestic regulation. US practitioners should anticipate:

  • Federal AI legislation incorporating similar risk-based frameworks
  • Agency-specific AI guidance (FDA, FTC, CFPB) converging with EU standards
  • State-level AI regulations following EU precedents
  • Professional responsibility obligations for AI use in legal practice

Understanding the EU framework now positions practitioners to advise clients effectively as domestic requirements emerge.


Immediate Actions for US Companies

Within 90 days:

  • Inventory all AI systems serving EU clients or processing EU resident data
  • Classify systems by risk level under Act categories
  • Identify gaps in conformity documentation
  • Review vendor agreements for AI Act compliance allocation

Within 6 months:

  • Develop high-risk system conformity documentation
  • Implement human oversight measures for high-risk applications
  • Update incident response protocols for AI Act reporting
  • Train personnel on AI Act compliance requirements

Ongoing:

  • Monitor EU enforcement developments
  • Track Member State implementation variations
  • Assess emerging US AI regulatory requirements


About the Author

Isla Vinter specializes in privacy by design and algorithmic data governance. A former data protection advisor, she explores the implications of generative AI on legal confidentiality and attorney-client privilege.


This analysis is for educational purposes and does not constitute legal advice. Companies with potential EU AI Act obligations should consult qualified counsel in relevant jurisdictions.
