August 9, 2025

What the EU AI Act Means for US Companies: A Compliance Officer’s Guide

The EU AI Act applies to US companies selling AI into Europe. Learn how to classify risk, meet compliance obligations, align with GDPR, and avoid fines ahead of the 2025 and 2026 deadlines.

EU AI Act, AI governance, AI risk management, GDPR compliance, AI GDPR checklist, US companies AI regulation

The EU AI Act isn’t just a European problem. If your AI product reaches customers in the EU—or your clients do business there—you’re in scope, no matter where your headquarters are. For US companies, this means understanding how the Act applies across borders, where it overlaps with existing privacy and product regulations, and what practical steps you can take now to avoid blocked sales or fines later.

Why US companies can’t ignore it

The Act has extraterritorial reach—if you place an AI system on the EU market or put it into service in the EU, you must comply, even if all your development, hosting, and corporate structure are in the US. That’s the same “long arm” design US privacy teams already know from the GDPR.

The consequences of ignoring it are steep:

  • Fines up to €35 million or 7% of global annual turnover for prohibited uses.
  • Regulatory orders to withdraw or recall systems from the EU market.
  • Damage to reputation and client trust—especially if you’re in a supply chain serving regulated sectors.

The key trigger: “Placing on the market” or “putting into service”

These terms are broader than many US teams expect. It’s not just about selling AI directly to EU consumers. If you license a model to a partner who has EU users, or embed AI features in software deployed in Europe, you’ve crossed into regulated territory. A minimal scope check is sketched after the examples below.

Examples that trigger the Act:

  • An API endpoint available to EU customers.
  • An embedded AI feature in a SaaS product sold into the EU.
  • A machine-learning component in physical goods shipped to Europe (e.g., IoT devices, robotics, vehicles).
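
To make the scope question concrete, here is a deliberately minimal sketch in Python of the first filter a compliance team might run: flag a system as EU-facing when any customer or deployment country is an EU member state. The function name and input shape are our own illustration; a real scope analysis also has to consider EEA countries, partner deployments, and legal review.

```python
# The 27 EU member states (ISO 3166-1 alpha-2 codes). Depending on your
# scope analysis, you may also need the EEA states (IS, LI, NO).
EU_MEMBER_STATES = {
    "AT", "BE", "BG", "HR", "CY", "CZ", "DK", "EE", "FI", "FR",
    "DE", "GR", "HU", "IE", "IT", "LV", "LT", "LU", "MT", "NL",
    "PL", "PT", "RO", "SK", "SI", "ES", "SE",
}

def reaches_eu_market(customer_countries: set[str]) -> bool:
    """Flag a system as potentially in scope if any customer or
    deployment country is an EU member state."""
    return bool(customer_countries & EU_MEMBER_STATES)

# A SaaS product with users in the US, UK, and France is in scope via France.
print(reaches_eu_market({"US", "GB", "FR"}))  # True
```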

Roles and responsibilities for US companies

The Act distinguishes between providers (those who develop and place the system on the market) and deployers (those who use it under their authority). You can be both. US companies often find themselves as providers for the systems they ship and deployers when using third-party AI internally.

This matters because providers bear heavier obligations—risk management, technical documentation, conformity assessment—while deployers focus on correct use, monitoring, and human oversight.
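
As a rough illustration of that split (our own shorthand, not a legal taxonomy), you can model it as a mapping from role to headline duties, with a company that holds both roles inheriting the union:

```python
# Shorthand for the headline duties discussed above; not Article text.
OBLIGATIONS_BY_ROLE = {
    "provider": [
        "risk management system",
        "technical documentation (Annex IV)",
        "conformity assessment",
        "post-market monitoring",
    ],
    "deployer": [
        "use per the provider's instructions",
        "human oversight",
        "monitoring and incident reporting",
    ],
}

def obligations_for(roles: list[str]) -> set[str]:
    """A company can hold several roles at once; its duties are the union."""
    return {duty for role in roles for duty in OBLIGATIONS_BY_ROLE.get(role, [])}

# A US company that ships its own system and also uses third-party AI internally:
print(sorted(obligations_for(["provider", "deployer"])))
```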

Risk-based classification applies to you too

The EU AI Act uses four tiers: prohibited, high-risk, limited-risk, and minimal-risk.

  • Prohibited: Certain manipulative, biometric, or social-scoring uses.
  • High-risk: Systems in Annex III categories (e.g., hiring, education, access to essential services) or safety components in regulated products.
  • Limited-risk: Mainly transparency requirements (e.g., chatbot disclosures, deepfake labelling).
  • Minimal-risk: No specific obligations.

A US company building HR screening AI for European clients? That’s high-risk. A marketing chatbot that serves EU visitors? Limited-risk: still regulated, but with a lighter touch.
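
A first-pass triage of your portfolio can be as simple as the hypothetical sketch below. To be clear, this keyword-style matching is our own illustration and only a starting point; actual classification means working through the prohibited-practices list and Annex III, ideally with counsel.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited-risk"
    MINIMAL = "minimal-risk"

# Hypothetical triage tables; real classification means reading the Act,
# not matching keywords.
PROHIBITED_USE_CASES = {"social scoring", "subliminal manipulation"}
HIGH_RISK_USE_CASES = {"hiring", "education", "credit scoring", "essential services"}
LIMITED_RISK_USE_CASES = {"customer chatbot", "deepfake generation"}

def triage(use_case: str) -> RiskTier:
    """First-pass triage of a system; a starting point, not a legal verdict."""
    if use_case in PROHIBITED_USE_CASES:
        return RiskTier.PROHIBITED
    if use_case in HIGH_RISK_USE_CASES:
        return RiskTier.HIGH
    if use_case in LIMITED_RISK_USE_CASES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(triage("hiring"))            # RiskTier.HIGH
print(triage("customer chatbot"))  # RiskTier.LIMITED
```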

Overlaps with US frameworks and GDPR

For US companies already subject to GDPR because they process EU personal data, the AI Act adds a product safety and governance layer on top of privacy law. You’ll need:

  • A lawful basis for data processing (GDPR).
  • AI-specific controls like human oversight, risk management, and technical documentation (AI Act).

For certain sectors, you may already have quality systems (ISO 9001, ISO 13485, automotive ASPICE) that can be extended to meet AI Act QMS requirements.

Practical steps for US compliance teams

  1. Inventory your AI systems and flag those that reach the EU directly or via partners (a minimal record format is sketched after this list).
  2. Classify risk early to know your obligation level.
  3. Map your role (provider/deployer/importer/distributor) for each EU-facing system.
  4. Align your documentation with Annex IV templates—this is your audit-ready proof of compliance.
  5. Establish human oversight with real intervention capability, not just policy text.
  6. Prepare for transparency obligations—especially chatbot notices and deepfake labels.
  7. Integrate compliance into your release process so it’s not a last-minute scramble.
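
Here is a minimal sketch of what one inventory record might look like, tying steps 1 through 6 together. The field names and the gap-check helper are hypothetical suggestions, not an official schema:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One row in an AI inventory; field names are our own suggestion."""
    name: str
    reaches_eu: bool                 # step 1: reaches the EU directly or via partners
    risk_tier: str                   # step 2: prohibited / high / limited / minimal
    role: str                        # step 3: provider / deployer / importer / distributor
    annex_iv_docs: bool = False      # step 4: documentation aligned and audit-ready
    human_oversight: bool = False    # step 5: real intervention capability exists
    transparency_done: bool = False  # step 6: chatbot notices, deepfake labels in place

def open_gaps(record: AISystemRecord) -> list[str]:
    """List which of the steps above still need work for an EU-facing system."""
    if not record.reaches_eu:
        return []
    checks = {
        "Annex IV documentation": record.annex_iv_docs,
        "human oversight": record.human_oversight,
        "transparency obligations": record.transparency_done,
    }
    return [item for item, done in checks.items() if not done]

hr_tool = AISystemRecord("resume-screener", reaches_eu=True,
                         risk_tier="high", role="provider")
print(open_gaps(hr_tool))
# ['Annex IV documentation', 'human oversight', 'transparency obligations']
```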

Why starting early matters

The first provisions banning prohibited practices have applied since February 2, 2025. High-risk obligations follow on August 2, 2026, but the work to meet them, such as bias testing and QMS setup, takes months. Waiting until a European client asks for your “EU AI Act compliance proof” is too late.

How WALLD can help US companies

WALLD automates the discovery of EU-facing AI systems, handles risk classification, scaffolds Annex IV documentation with evidence pulled from your existing tools, and monitors post-market performance. That means your US-based team can stay focused on product delivery while still meeting EU requirements and keeping your sales channels open.

Disclaimer: This guide is for informational purposes only and does not constitute legal advice. For specific guidance, consult qualified legal counsel.

Alex Makuch