EU AI Act Art.2 Territorial Scope: When Does the EU AI Act Apply to Non-EU Developers? — Developer Guide (2026)
Your company is incorporated in the United States, Canada, or Singapore. Your AI product is hosted on servers outside the EU. Your engineering team has never set foot in Europe. And yet your EU customers are asking for an EU AI Act compliance statement.
Are they right to ask?
The answer, in the overwhelming majority of commercially relevant cases, is yes. EU AI Act Article 2 establishes a territorial scope that extends well beyond EU borders through four distinct applicability triggers. The regulation applies not because of where you are, but because of where your AI's output is used. This is the same extraterritorial design used in GDPR Art.3 — and like GDPR, it means that "we're not in the EU" is not a compliance exemption.
This guide covers the complete Art.2 territorial scope analysis: the four triggers, the key definitions that determine whether each trigger applies to you, the exemptions that can take you out of scope, what happens when you are in scope as a non-EU company, Python tooling to systematise the analysis, and a 25-item checklist.
The Four-Trigger Applicability Test
Article 2(1) establishes four independent triggers. Any one of them is sufficient to bring your AI system or model under the EU AI Act. You do not need all four — one is enough.
Trigger 1 — Providers Placing AI Systems on the EU Market [Art.2(1)(a)]
Text: The Regulation applies to providers that place AI systems on the market or put them into service in the Union.
What this means: If you are a provider (an entity that develops or has developed an AI system or GPAI model, or has an AI system or GPAI model developed, with a view to placing it on the market or putting it into service under its own name or trademark) and your AI system is placed on the EU market, you are in scope.
"Placing on the market" is defined in Art.3(9): making an AI system available for the first time on the Union market. For software, this occurs when the system is first made available for distribution or use by deployers or end-users in the EU — including via API, SaaS subscription, app store listing, or direct download.
Key point: The trigger is satisfied by making available — not by active marketing. If EU customers can access and use your AI system without restriction, and you have not geo-blocked EU access, you have likely placed it on the EU market regardless of whether you targeted EU customers.
Practical implications for SaaS developers:
- A US-based AI API accessible globally → on the EU market
- A closed-beta product with invitation-only EU access → likely on the EU market
- A product with EU IP geo-block + T&C EU exclusion → not on EU market (but verify)
- An enterprise product sold only to US-headquartered companies → check if they operate in EU
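The four bullets above reduce to a small predicate. A minimal sketch, not legal advice — the function name and inputs are illustrative scaffolding, not terms from the Act:

```python
def likely_placed_on_eu_market(eu_geo_block_enforced: bool,
                               terms_exclude_eu: bool) -> bool:
    """Rough Art.2(1)(a) heuristic for a globally reachable SaaS/API.

    'Making available' suffices -- no active EU marketing is needed.
    Only a verified geo-block combined with a contractual EU exclusion
    plausibly keeps the system off the EU market.
    """
    return not (eu_geo_block_enforced and terms_exclude_eu)

# Globally accessible API, no restrictions -> on the EU market
assert likely_placed_on_eu_market(False, False)
# Geo-block alone, without a T&C exclusion -> likely still on the market
assert likely_placed_on_eu_market(True, False)
# Verified IP geo-block + contractual EU exclusion -> arguably out
assert not likely_placed_on_eu_market(True, True)
```

Treat a `False` result as a starting point for evidence gathering (access logs, geo-block tests), not as a conclusion.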
Trigger 2 — Providers Putting AI Systems Into Service in the EU [Art.2(1)(a)]
Text: Same provision as Trigger 1 — Art.2(1)(a) covers both "placing on the market" and "putting into service."
"Putting into service" is defined in Art.3(11): the supply of an AI system for first use directly to the deployer or for the deployer's own use on the Union market for its intended purpose. This is distinct from placing on the market and applies primarily to bespoke or custom AI systems built for a specific deployer in the EU — enterprise contracts, custom model development, or white-label AI products delivered directly to an EU customer for their deployment.
Practical implications:
- Building a custom AI model for an EU hospital under a service agreement → in scope
- White-labelling an AI product for an EU retailer's internal use → in scope
- Providing AI consulting services (no AI system delivered) → not directly in scope
Trigger 3 — Deployers Located in the EU [Art.2(1)(b)]
Text: The Regulation applies to deployers of AI systems that are established or located in the Union.
"Deployer" is defined in Art.3(4): a natural or legal person, public authority, agency, or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity.
What this means: If an EU-located company uses your AI system, they become a deployer with their own EU AI Act obligations. This trigger does not typically create obligations for you as a non-EU provider — it creates obligations for your EU customer. However:
- If your EU customer's obligations require documentation from you (Art.13 transparency information for high-risk AI, Art.53 documentation for GPAI models), you must provide it.
- If your product's design prevents your EU deployer from meeting their obligations (e.g., no human oversight interface), the deployer may be non-compliant and choose not to use your product.
Practical implication: Trigger 3 primarily affects EU deployers, but creates indirect pressure on non-EU providers to support deployer compliance.
Trigger 4 — Extraterritorial Output Trigger [Art.2(1)(c)]
Text: The Regulation applies to providers and deployers of AI systems that are established in a third country, where the output produced by the AI system is used in the Union.
This is the key extraterritorial trigger. Unlike Triggers 1-3, which hinge on where AI is made available or who uses it, Trigger 4 applies based on where the AI system's output is consumed — regardless of where the provider or deployer is based, where the AI system runs, or who is the operator.
"Output used in the Union" — this phrase is intentionally broad. EU institutions have indicated that "used in the Union" encompasses:
- Content displayed to EU users (web applications, apps)
- Decisions affecting EU natural or legal persons (credit scoring, content moderation)
- Data processed on behalf of EU entities (B2B data processing)
- Recommendations or classifications affecting EU market participants
- AI-generated text, images, code consumed by EU residents
The GDPR parallel: Art.2(1)(c) follows the same design logic as GDPR Art.3(2) — the "monitoring behaviour" and "offering goods or services" extraterritorial triggers that brought every major US tech company under GDPR regardless of EU establishment. The EU AI Act's Art.2(1)(c) is the AI equivalent.
Practical implications:
- A US AI company with no EU employees but EU paying customers → Trigger 4 applies
- A Singapore AI startup whose API is used by EU fintech firms → Trigger 4 applies
- A Canadian AI research lab whose open-weight models are downloaded and used by EU companies → likely Trigger 4 applies
- An Indian AI company providing AI-powered back-office services to EU multinationals → Trigger 4 applies
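A first-pass screen for Trigger 4 is to intersect the countries where your output is consumed with the EU-27. A sketch under stated assumptions — the country set (ISO 3166-1 alpha-2 codes) and function are my own scaffolding, not statutory text:

```python
EU_MEMBER_STATES = {
    "AT", "BE", "BG", "HR", "CY", "CZ", "DK", "EE", "FI", "FR", "DE",
    "GR", "HU", "IE", "IT", "LV", "LT", "LU", "MT", "NL", "PL", "PT",
    "RO", "SK", "SI", "ES", "SE",
}

def trigger4_applies(output_consumer_countries: set[str]) -> bool:
    """Art.2(1)(c) screen: any EU consumption of the system's output
    brings a third-country provider or deployer into scope."""
    return bool(EU_MEMBER_STATES & output_consumer_countries)

trigger4_applies({"US", "SG"})        # False -- no EU consumption
trigger4_applies({"US", "DE", "SG"})  # True -- German consumers
```

The hard part in practice is populating `output_consumer_countries` honestly — it must cover indirect consumption (your customers' customers), not just billing addresses.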
Trigger 4 in Detail: What "Output Used in the Union" Means for Developers
The extraterritorial trigger deserves its own analysis because it is the most consequential for non-EU AI companies and the least understood.
The Two-Step Analysis
Step 1: Does your AI system produce output? This is nearly always yes. Any AI system that generates, classifies, recommends, predicts, or decides produces output. Trigger 4 applies unless you can demonstrate the output never reaches the EU.
Step 2: Is any of that output used in the Union? "Used in the Union" does not require the output to be the final product seen by EU users. It applies to:
| Scenario | "Used in Union"? |
|---|---|
| SaaS product with EU user base | Yes — users consume output directly |
| B2B API used by EU companies | Yes — EU company consumes output in their product |
| AI processing pipeline where EU company receives results | Yes — EU entity uses the processed output |
| AI system running on EU-hosted infrastructure | Yes — output generated and consumed within EU |
| AI system where output is anonymised before EU delivery | Depends — if the anonymisation itself is AI output used in EU, yes |
| AI system where all customers are US entities with no EU operations | Potentially no — but verify customer operations |
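The two-step analysis and the table above can be expressed as a single function. A sketch with illustrative parameter names (not terms of art from the Regulation):

```python
def output_used_in_union(produces_output: bool,
                         eu_users_consume_output: bool,
                         eu_entities_receive_results: bool,
                         runs_on_eu_infrastructure: bool) -> bool:
    # Step 1: systems that generate, classify, recommend, predict,
    # or decide all produce output -- this is nearly always True.
    if not produces_output:
        return False
    # Step 2: any EU consumption path satisfies "used in the Union",
    # mirroring the scenario table above.
    return (eu_users_consume_output
            or eu_entities_receive_results
            or runs_on_eu_infrastructure)
```

Note that the function is deliberately one-sided: it can tell you Trigger 4 likely applies, but a `False` result still needs documented evidence (customer analysis, output routing logs) to stand up.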
Geographic Routing Does Not Override Trigger 4
A common misconception: "Our servers are in the US, so EU law doesn't apply." Art.2(1)(c) explicitly covers providers established in third countries where output is used in the EU. Server location is irrelevant to the scope trigger. If EU users or EU businesses consume your AI output, you are in scope.
Supply Chain Trigger
If your AI model is used by another company (downstream integration), and that company's product is used in the EU, you may still have obligations as a GPAI provider toward downstream integrators (Art.53(1)(b) and Annex XII) — even if you have no direct relationship with EU end-users. The Art.2 scope analysis must therefore include: "Who uses my output, and do their customers include EU persons?"
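That supply-chain question is a reachability check over the downstream graph. A sketch, assuming a simple nested-dict representation of the chain (the `in_eu`/`downstream` keys are my own convention):

```python
def output_reaches_eu(entity: dict) -> bool:
    """True if this entity or any transitive downstream consumer of the
    model's output is in the EU. Assumed entity shape:
    {"in_eu": bool, "downstream": [entity, ...]}."""
    if entity.get("in_eu", False):
        return True
    return any(output_reaches_eu(d) for d in entity.get("downstream", []))

# US provider -> US integrator -> EU end customer: Trigger 4 territory
chain = {"in_eu": False, "downstream": [
    {"in_eu": False, "downstream": [{"in_eu": True, "downstream": []}]},
]}
output_reaches_eu(chain)  # True
```

In a real analysis the graph nodes would come from contract reviews and customer questionnaires rather than a hand-built dict.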
Key Definitions That Determine Scope
Whether each Art.2(1) trigger applies depends on how you fit the regulatory definitions.
"Provider" [Art.3(3)]
A natural or legal person, public authority, agency, or other body that develops an AI system or GPAI model, or that has an AI system or GPAI model developed, and places that system or model on the market or puts the system into service under its own name or trademark, whether for payment or free of charge.
Note: The developer and the provider can be different entities. If you commission an AI system from a third-party developer but release it under your own brand, you are the provider and bear provider obligations.
"Deployer" [Art.3(4)]
A natural or legal person, public authority, agency, or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity.
Note: Deployers are the entities that use AI systems in their own products or processes — distinct from end-users who interact with the deployer's product. An EU bank using an AI credit scoring API is a deployer. The bank's customer who submits a loan application is an end-user, not a deployer.
"Placing on the Market" [Art.3(9)]
The first making available of an AI system or a general-purpose AI model on the Union market.
Critical nuance: "First making available" means the moment EU access becomes possible, not the moment you first get an EU paying customer. If your API is publicly accessible from EU IP addresses, you have placed it on the EU market from the moment of public launch.
"Putting into Service" [Art.3(11)]
The supply of an AI system for first use directly to the deployer or for own use on the Union market for its intended purpose.
Art.2 Exemptions: What Takes You Out of Scope
Article 2 includes several exemptions that can remove an AI system from the Regulation's scope. None of these exemptions are blanket waivers — each has conditions.
Military and National Security Exemption [Art.2(3)]
AI systems used exclusively for military, defence, or national security purposes are exempt, but only where such purposes fall within the exclusive competence of a Member State under EU Treaties (i.e., not EU-level competence). This exemption is narrow and does not apply to dual-use AI systems with commercial applications alongside defence use.
Scope: Commercial AI companies almost never qualify for this exemption.
Scientific Research and Development [Art.2(6)]
AI systems developed and put into service solely for the purpose of scientific research and development are exempt. This covers academic research institutions building experimental AI for research purposes.
Conditions:
- The AI system must be used solely for research — no commercial deployment
- Outputs must remain within the research environment and not be released to external users as a product or service
- The exemption covers the research activity, not pre-commercial testing leading to a product
Scope: A university AI lab building an experimental model for peer-reviewed research → exempt. A startup building a product with "research" in its marketing copy → not exempt.
Personal Non-Professional Use [Art.2(10) + Art.3(4)]
AI systems used in the course of a personal non-professional activity are exempt from deployer obligations. However, this exemption applies to the deployer classification, not to the provider. The provider of the underlying AI system is not exempted because end-users use it personally.
Scope: Consumer app providers are still in scope as providers even if users use the app personally.
Open-Source Model Exemption [Art.2(12)]
Art.2(12) exempts AI systems released under free and open-source licences from the Regulation — unless they are placed on the market or put into service as high-risk AI systems, or fall under Art.5 or Art.50. For GPAI models, the parallel carve-out is Art.53(2): open-weight models whose parameters and architecture are made publicly available are exempt from the Art.53(1)(a)-(b) documentation obligations unless the model:
- Has systemic risk (Art.51 threshold — 10^25 FLOPs or AI Office designation)
- Is monetised — placed on the market against payment or under a commercial licence
- Is modified and re-released under the modifier's name (the modifier then becomes a provider in its own right)
Critical limitation: The exemption covers documentation obligations only. The copyright policy and public training data summary (Art.53(1)(c)-(d)) still apply, as do the Art.5 prohibited practices and Art.50 transparency obligations. An open-source model used for prohibited purposes (e.g., social scoring, real-time biometric surveillance in public spaces) is not exempted.
Scope: Open-source GPAI providers like those releasing Llama variants under open licences may qualify for partial exemption from Art.53(1)(a)-(b) — but must still comply with Art.5, Art.50, and Art.53(1)(c)-(d), and lose the exemption if they commercialise the model.
Pre-Commercial Testing and Prototyping [Art.2(8)]
AI systems in a pre-placement research, testing, or development phase are not yet "placed on the market" and therefore not yet in scope (Art.2(8)). Two limits apply: the exemption ends at market placement, and testing in real-world conditions is expressly not covered by the exclusion.
Scope: Internal development and limited closed-beta with non-commercial test users → not yet on market. Open beta with any user able to sign up → on market.
Non-EU Company: What Being "In Scope" Requires
If any Art.2(1) trigger applies to you as a non-EU provider, you face the same obligations as EU providers — with one addition.
Authorized Representative Obligation [Art.22, Art.54]
Non-EU providers must designate an authorized representative established in the EU before market placement: Art.22 covers providers of high-risk AI systems, Art.54 covers providers of GPAI models.
The authorized representative:
- Keeps the technical documentation and EU Declaration of Conformity at the disposal of authorities, and verifies EU database registration (for high-risk AI)
- Acts as the point of contact for EU market surveillance authorities
- Is jointly and severally liable with the provider in some Member States
- Must be a natural or legal person established in the EU (not just an EU-registered branch of the same entity)
See the detailed Art.54 guide: EU AI Act Art.54: Authorized Representative for Non-EU GPAI Providers
Provider Obligations Apply in Full
Being a non-EU provider does not reduce the substantive obligations. If you are in scope:
| AI System Type | Key Obligations |
|---|---|
| Non-high-risk AI | Art.50 transparency obligations if interacting with humans; Art.5 prohibited practices |
| High-risk AI (Annex III) | Art.9-15 requirements plus Art.16-22 provider obligations: risk management, data governance, technical documentation, logging, transparency, human oversight, accuracy/robustness, conformity assessment (Art.43), EU database registration (Art.49), CE marking (Art.48) |
| GPAI model | Art.53: Annex XI documentation, downstream information (Annex XII), copyright policy, training data summary. Art.55 if systemic risk: model evaluation, adversarial testing, serious incident reporting, cybersecurity |
Market Surveillance Reaches Non-EU Providers
Market surveillance authorities (MSAs) in Member States have powers under Art.74-76 to investigate AI systems in their market regardless of provider location. For high-risk AI, the EU database (Art.71, populated via Art.49 registration) enables MSAs to identify and contact providers. For GPAI, the AI Office conducts oversight directly.
Non-EU providers who fail to designate an authorized representative and are subsequently found to be non-compliant face:
- Fines up to €15 million or 3% of global annual turnover (Art.99(4)) — rising to €35 million or 7% for Art.5 violations (Art.99(3))
- Market access restrictions for non-compliant AI systems
Digital Omnibus Amendments to Art.2
The EU AI Act Digital Omnibus (COM(2025) 164), introduced in February 2025, proposes amendments to Art.2 that affect territorial scope:
SME Threshold Adjustment [proposed Art.2(13)]
The Digital Omnibus proposes a simplified compliance pathway for micro-enterprises and small enterprises (as defined in Commission Recommendation 2003/361/EC — fewer than 50 employees, balance sheet under €10 million). Under the proposed amendment, SMEs using high-risk AI systems not listed in Annex III (Sections 1-3) may use a simplified self-assessment rather than third-party conformity assessment.
Status: Proposal under trilogue negotiation as of April 2026. Not yet in force.
Clarification of "Deployer" Location [proposed Art.2(1)(b)]
The Digital Omnibus clarifies that the "located in the Union" language for deployers (Trigger 3) applies to deployers established or habitually residing in the Union — aligning with GDPR Art.3(1) establishment language. This closes a gap where deployers with EU operations but non-EU legal establishment argued they were out of scope.
Status: Proposal under negotiation.
Biometric System Scope Expansion [proposed Art.2(1)(d)]
The Digital Omnibus proposes adding a fifth trigger covering providers and deployers of real-time remote biometric identification systems used in publicly accessible spaces in the EU, regardless of purpose. This would expand scope beyond the current Art.5(1)(h) prohibition framework to bring all RTBID providers into the general compliance framework.
Status: Proposal under negotiation.
Scope Decision Framework: Is Your AI Product in Scope?
Work through this decision tree for each AI system you develop or operate:
1. Is this AI system used exclusively for military/national security?
→ Yes: Exempt under Art.2(3). Document exemption rationale.
→ No: Continue.
2. Is this AI system used solely for EU-internal scientific research
with no external deployment?
→ Yes: Exempt under Art.2(6). Verify "solely research" condition.
→ No: Continue.
3. Is the AI system a GPAI model released under open-source licence
with no commercial terms and no systemic risk?
→ Yes: Partial exemption under Art.2(12) — still check Art.5.
→ No: Continue.
4. Are any of these true?
(a) You place/make the AI system available to EU users or companies
(b) You supply the AI system to an EU deployer
(c) Your AI system's output is consumed by EU persons or organisations
→ Any Yes: You are IN SCOPE. Proceed to obligation analysis.
→ All No: Document your scope exclusion reasoning with evidence
(geo-blocking verification, customer list analysis, output routing logs).
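Steps 1-4 above map directly onto a function. A sketch that mirrors the tree — the return strings are my own labels, not terms from the Act:

```python
def art2_scope_decision(military_only: bool,
                        research_only: bool,
                        open_source_gpai_non_commercial: bool,
                        available_to_eu: bool,
                        supplied_to_eu_deployer: bool,
                        output_consumed_in_eu: bool) -> str:
    if military_only:                       # Step 1
        return "exempt: Art.2(3)"
    if research_only:                       # Step 2
        return "exempt: Art.2(6)"
    if open_source_gpai_non_commercial:     # Step 3
        return "partial exemption: Art.2(12) (still check Art.5)"
    if (available_to_eu or supplied_to_eu_deployer
            or output_consumed_in_eu):      # Step 4
        return "IN SCOPE"
    return "out of scope: document evidence"

art2_scope_decision(False, False, False, True, False, False)  # 'IN SCOPE'
```

Like the tree it mirrors, the function checks exemptions before triggers; a fully exempt system never reaches step 4.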
Obligation Tiers Based on AI System Classification
Once in scope, your obligations depend on how your AI system is classified.
Classification Step 1: Is It Prohibited? [Art.5]
Art.5 applies from 2 February 2025 — the first enforcement date. Prohibited AI practices include:
- Social scoring of natural persons, by public or private actors, causing detrimental treatment in unrelated contexts
- Real-time remote biometric identification in public spaces (with narrow exceptions)
- AI that exploits vulnerabilities of protected groups
- Subliminal manipulation techniques that cause harm
- Emotion inference systems in workplace/educational settings (with exceptions)
- Predictive policing based solely on profiling
- Mass untargeted scraping of facial images from internet/CCTV
- Biometric categorisation by protected characteristics (race, political views, etc.)
Geographic reach of Art.5: Applies to all AI systems within Art.2 scope — including extraterritorial providers under Trigger 4. If EU residents are affected by your AI system's output, the Art.5 prohibited practices apply.
Classification Step 2: Is It a GPAI Model? [Art.51-56]
GPAI obligations apply from 2 August 2025. A GPAI model is defined in Art.3(63): an AI model trained on large amounts of data, displaying significant generality, capable of performing a wide range of distinct tasks, and which can be integrated into downstream systems.
GPAI applicability triggers:
- You train a foundation model (LLM, multimodal, embedding model) → GPAI provider
- You fine-tune a GPAI model and release it under your own name → GPAI provider
- You provide API access to a GPAI model → GPAI provider
- Systemic risk? 10^25 FLOPs training compute or AI Office designation → Art.55 obligations
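The systemic-risk branch is a single threshold comparison plus a designation flag. A minimal sketch — the constant mirrors the Art.51(2) presumption; the function name is illustrative:

```python
SYSTEMIC_RISK_FLOPS = 1e25  # Art.51(2) presumption threshold

def gpai_has_systemic_risk(training_flops: float,
                           ai_office_designated: bool = False) -> bool:
    """Presumed systemic risk at >= 10^25 cumulative training FLOPs,
    or by AI Office designation regardless of compute."""
    return ai_office_designated or training_flops >= SYSTEMIC_RISK_FLOPS

gpai_has_systemic_risk(5e24)   # False -- below threshold
gpai_has_systemic_risk(2e25)   # True
```

Keep the threshold as a named constant: the Commission can adjust it by delegated act, so it should not be hard-coded at call sites.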
Classification Step 3: Is It High-Risk? [Art.6 + Annex III]
High-risk AI obligations apply from 2 August 2026 (main deadline). An AI system is high-risk if it falls within the categories listed in Annex III and is not within an Art.6(3) exception:
| Annex III Category | Examples |
|---|---|
| 1. Biometric systems | Remote identification, emotion recognition (non-excluded), biometric categorisation |
| 2. Critical infrastructure | AI managing energy, water, transport, financial infrastructure |
| 3. Education | AI for admissions, assessment, dropout prediction |
| 4. Employment | CV screening, performance management, promotion decisions |
| 5. Essential private services | Credit scoring, insurance, mortgage risk assessment |
| 6. Law enforcement | Risk assessment tools, evidence reliability evaluation |
| 7. Migration/asylum | Document authenticity verification, risk assessment, visa processing |
| 8. Administration of justice | AI for dispute resolution, evidence evaluation |
Digital Omnibus note: The proposed Omnibus simplifies Art.6(2) by narrowing which biometric systems are high-risk, and introduces a clearer exception for AI systems that are narrow-purpose tools within larger human-managed workflows.
Python: TerritorialScopeAnalyzer
from dataclasses import dataclass
from enum import Enum
from typing import Optional
class ScopeTrigger(Enum):
PLACED_ON_EU_MARKET = "art2_1_a_placed_on_market"
PUT_INTO_SERVICE_EU = "art2_1_a_put_into_service"
EU_DEPLOYER = "art2_1_b_eu_deployer"
OUTPUT_USED_IN_EU = "art2_1_c_output_used_in_eu"
NOT_IN_SCOPE = "not_in_scope"
class ExemptionBasis(Enum):
MILITARY_NATIONAL_SECURITY = "art2_3_military_national_security"
SCIENTIFIC_RESEARCH = "art2_6_scientific_research"
PERSONAL_NON_PROFESSIONAL = "art2_6_personal_use"
OPEN_SOURCE_GPAI = "art2_12_open_source_gpai"
PRE_COMMERCIAL_TESTING = "art2_8_pre_commercial"
class AIClassification(Enum):
    PROHIBITED = "art5_prohibited"
    GPAI_SYSTEMIC_RISK = "art55_gpai_systemic_risk"
    GPAI_GENERAL = "art53_gpai_general"
    HIGH_RISK = "art6_annex_iii_high_risk"
    LIMITED_RISK = "art50_limited_risk"
    MINIMAL_RISK = "minimal_risk_no_specific_obligation"
@dataclass
class CompanyProfile:
name: str
eu_establishment: bool
countries_of_operation: list[str]
has_eu_customers: bool
has_eu_deployer_customers: bool # B2B customers who are EU companies
output_consumed_by_eu_entities: bool
is_open_source_gpai: bool = False
is_military_only: bool = False
is_research_only: bool = False
is_pre_commercial: bool = False
@dataclass
class AISystemProfile:
name: str
training_flops: Optional[float] = None # For GPAI systemic risk check
annex_iii_category: Optional[str] = None # If high-risk
performs_prohibited_practice: bool = False
generates_output_interacting_with_humans: bool = False
is_gpai_model: bool = False
@dataclass
class ScopeAnalysisResult:
in_scope: bool
triggers_met: list[ScopeTrigger]
exemptions_applicable: list[ExemptionBasis]
ai_classification: AIClassification
authorized_rep_required: bool
key_obligations: list[str]
notes: list[str]
def analyze_territorial_scope(
company: CompanyProfile,
ai_system: AISystemProfile
) -> ScopeAnalysisResult:
"""
Analyze EU AI Act Art.2 territorial scope for a given company and AI system.
Returns a structured result indicating whether the AI system is in scope,
which triggers are met, applicable exemptions, and key obligations.
"""
triggers = []
exemptions = []
notes = []
# Check exemptions first
if company.is_military_only:
exemptions.append(ExemptionBasis.MILITARY_NATIONAL_SECURITY)
if company.is_research_only:
exemptions.append(ExemptionBasis.SCIENTIFIC_RESEARCH)
if company.is_pre_commercial:
exemptions.append(ExemptionBasis.PRE_COMMERCIAL_TESTING)
    if company.is_open_source_gpai and ai_system.is_gpai_model:
        exemptions.append(ExemptionBasis.OPEN_SOURCE_GPAI)
        notes.append(
            "Open-source GPAI: partial exemption from Art.53(1)(a)-(b) only. "
            "Art.5, Art.50, and Art.53(1)(c)-(d) still apply."
        )
# If fully exempt, return early
fully_exempt = (
ExemptionBasis.MILITARY_NATIONAL_SECURITY in exemptions or
ExemptionBasis.SCIENTIFIC_RESEARCH in exemptions or
ExemptionBasis.PRE_COMMERCIAL_TESTING in exemptions
)
if fully_exempt:
return ScopeAnalysisResult(
in_scope=False,
triggers_met=[ScopeTrigger.NOT_IN_SCOPE],
exemptions_applicable=exemptions,
ai_classification=AIClassification.MINIMAL_RISK,
authorized_rep_required=False,
key_obligations=[],
notes=notes + ["Exemption applies — document basis and maintain records."]
)
# Check Art.2(1) triggers
if company.eu_establishment:
triggers.append(ScopeTrigger.PLACED_ON_EU_MARKET)
if company.has_eu_customers:
triggers.append(ScopeTrigger.PLACED_ON_EU_MARKET)
if company.has_eu_deployer_customers:
triggers.append(ScopeTrigger.EU_DEPLOYER)
if company.output_consumed_by_eu_entities:
triggers.append(ScopeTrigger.OUTPUT_USED_IN_EU)
    # Remove duplicates while preserving insertion order (a plain
    # list(set(...)) would make the reported trigger order nondeterministic)
    triggers = list(dict.fromkeys(triggers))
    in_scope = len(triggers) > 0
# Determine classification
if ai_system.performs_prohibited_practice:
classification = AIClassification.PROHIBITED
elif ai_system.is_gpai_model:
systemic_risk_threshold = 1e25 # 10^25 FLOPs
if ai_system.training_flops and ai_system.training_flops >= systemic_risk_threshold:
classification = AIClassification.GPAI_SYSTEMIC_RISK
else:
classification = AIClassification.GPAI_GENERAL
elif ai_system.annex_iii_category:
classification = AIClassification.HIGH_RISK
elif ai_system.generates_output_interacting_with_humans:
classification = AIClassification.LIMITED_RISK
else:
classification = AIClassification.MINIMAL_RISK
# Determine authorized representative requirement
authorized_rep_required = (
in_scope and
not company.eu_establishment and
classification not in [AIClassification.MINIMAL_RISK, AIClassification.PROHIBITED]
)
# Determine key obligations
obligations = []
if in_scope:
if classification == AIClassification.PROHIBITED:
obligations.append("STOP: AI system falls within Art.5 prohibited practices. Cease deployment.")
        elif classification == AIClassification.GPAI_SYSTEMIC_RISK:
            obligations.extend([
                "Art.53: Annex XI documentation, downstream information, copyright policy, training data summary",
                "Art.55: Model evaluation, adversarial testing, serious incident reporting, cybersecurity",
                "Art.56: GPAI Code of Practice adherence (conformity presumption)",
                "Art.54: Authorized representative in EU (if not EU-established)",
            ])
        elif classification == AIClassification.GPAI_GENERAL:
            obligations.extend([
                "Art.53(1)(a): Annex XI technical documentation",
                "Art.53(1)(b): Information for downstream providers (Annex XII)",
                "Art.53(1)(c): Copyright compliance policy",
                "Art.53(1)(d): Public training data summary",
                "Art.54: Authorized representative in EU (if not EU-established)",
            ])
elif classification == AIClassification.HIGH_RISK:
obligations.extend([
"Art.9: Risk management system (living document, continuous)",
"Art.10: Data governance and data quality measures",
"Art.11 + Annex IV: Technical documentation",
"Art.12: Automatic logging of events",
"Art.13: Transparency and information to deployers",
"Art.14: Human oversight measures",
"Art.15: Accuracy, robustness, cybersecurity",
"Art.16-17: Provider obligations + quality management system",
                "Art.18: Documentation keeping (10 years)",
                "Art.72: Post-market monitoring plan",
                "Art.73: Serious incident reporting",
                "Art.43: Conformity assessment",
                "Art.47: EU Declaration of Conformity",
                "Art.48: CE marking",
                "Art.49: EU database registration",
"Art.54: Authorized representative in EU (if not EU-established)",
])
elif classification == AIClassification.LIMITED_RISK:
obligations.extend([
"Art.50(1): Disclose AI interaction to users in real time (chatbot/avatar)",
"Art.50(2): Label AI-generated content (deepfakes, synthetic media)",
"Art.50(4): Mark AI-generated text on matters of public interest",
])
if authorized_rep_required:
notes.append(
"AUTHORIZED REPRESENTATIVE REQUIRED: Non-EU provider must designate an EU-established "
"representative before placing AI on EU market. See Art.54."
)
return ScopeAnalysisResult(
in_scope=in_scope,
triggers_met=triggers if in_scope else [ScopeTrigger.NOT_IN_SCOPE],
exemptions_applicable=exemptions,
ai_classification=classification,
authorized_rep_required=authorized_rep_required,
key_obligations=obligations,
notes=notes
)
# Example: US-based AI API startup
company = CompanyProfile(
name="ExampleAI Inc. (San Francisco)",
eu_establishment=False,
countries_of_operation=["US"],
has_eu_customers=True, # EU companies use their API
has_eu_deployer_customers=True,
output_consumed_by_eu_entities=True,
is_open_source_gpai=False,
is_military_only=False,
is_research_only=False,
is_pre_commercial=False
)
ai_system = AISystemProfile(
name="ExampleAI Text Generation API",
is_gpai_model=True,
training_flops=5e24, # Below systemic risk threshold
generates_output_interacting_with_humans=True,
)
result = analyze_territorial_scope(company, ai_system)
print(f"In scope: {result.in_scope}")
print(f"Triggers: {[t.value for t in result.triggers_met]}")
print(f"Classification: {result.ai_classification.value}")
print(f"Authorized rep required: {result.authorized_rep_required}")
print("Key obligations:")
for o in result.key_obligations:
print(f" - {o}")
Output:
In scope: True
Triggers: ['art2_1_a_placed_on_market', 'art2_1_b_eu_deployer', 'art2_1_c_output_used_in_eu']
Classification: art53_gpai_general
Authorized rep required: True
Key obligations:
 - Art.53(1)(a): Annex XI technical documentation
 - Art.53(1)(b): Information for downstream providers (Annex XII)
 - Art.53(1)(c): Copyright compliance policy
 - Art.53(1)(d): Public training data summary
 - Art.54: Authorized representative in EU (if not EU-established)
EU Hosting and the Territorial Scope Puzzle
One question arises repeatedly: Does hosting your AI system on EU infrastructure affect your Art.2 scope analysis?
The answer: server location does not determine whether you are in scope — Art.2(1)(c) explicitly covers third-country providers. But EU hosting has significant implications once you are in scope:
- Technical documentation storage: Art.11 and Art.53 documentation must be accessible to market surveillance authorities. EU-hosted documentation eliminates CLOUD Act compellability risk.
- Authorized representative: The Art.22/Art.54 authorized representative holds the documentation. If documentation is EU-hosted, the representative has direct access without cross-border data transfer complications.
- GPAI energy reporting: Annex XI technical documentation includes the model's known or estimated energy consumption. EU data centres can certify PUE and renewable energy fractions under EU standards.
- Art.10 data governance: For high-risk AI, Art.10(5) requires that personal data used in training meet GDPR standards. EU-based training infrastructure simplifies GDPR Art.44 (cross-border transfer) compliance.
Practical implication: Hosting in the EU does not take you out of Art.2 scope if your AI is already in scope — but it significantly simplifies compliance for documentation, data governance, and enforcement readiness.
25-Item Territorial Scope Compliance Checklist
Scope Determination (Items 1-8)
- 1. Applied the four-trigger Art.2(1) test to each AI system in your portfolio
- 2. Documented evidence for each trigger determination (customer lists, access logs, output routing)
- 3. Verified whether any exemptions (Art.2(3), (6), (8), (12)) apply and documented the basis
- 4. Identified all AI systems that qualify as GPAI models under Art.3(63)
- 5. Identified all AI systems that qualify as high-risk under Art.6 + Annex III
- 6. Confirmed whether any AI systems perform Art.5 prohibited practices
- 7. Assessed supply chain: do downstream users of your AI have EU operations?
- 8. Assessed whether Digital Omnibus amendments (when in force) change any determination
Non-EU Provider Requirements (Items 9-14)
- 9. Identified whether authorized representative designation is required under Art.54
- 10. Selected and engaged an EU-established authorized representative (if required)
- 11. Provided authorized representative with technical documentation and Declaration of Conformity
- 12. Established written mandate with authorized representative covering Art.54 obligations
- 13. Updated AI system documentation to name the authorized representative
- 14. Verified authorized representative is listed in EU AI database (for high-risk AI, Art.22)
Prohibited Practices Review (Items 15-17)
- 15. Reviewed each AI system against all Art.5(1) prohibited practices (a)-(l) including Digital Omnibus additions
- 16. Documented that no AI system performs real-time remote biometric identification in public spaces without an Art.5(1)(h) exception
- 17. Documented that no AI system performs social scoring, emotion inference (workplace/education), or predictive policing based on profiling
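Items 15-17 amount to intersecting each system's declared capabilities with the prohibited categories. A minimal sketch; the category set below is an illustrative subset of Art.5(1), not the full (a)-(l) list:

```python
# Illustrative subset of Art.5(1) prohibited-practice categories.
# A real review must cover the full (a)-(l) list, including
# Digital Omnibus additions.
PROHIBITED_CATEGORIES = {
    "social_scoring",
    "realtime_remote_biometric_id_public",
    "emotion_inference_workplace_education",
    "predictive_policing_profiling",
}

def art5_flags(declared_capabilities: set[str]) -> set[str]:
    # Return the declared capabilities that match a prohibited category;
    # an empty result supports the item-16/17 documentation, a non-empty
    # result demands immediate legal review.
    return declared_capabilities & PROHIBITED_CATEGORIES
```

A non-empty result for any system is a hard stop regardless of risk tier: Art.5 violations carry the highest Art.99 fine band.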
Obligation Implementation (Items 18-22)
- 18. GPAI providers: Annex XI documentation drafted and maintained
- 19. GPAI providers: Machine-readable model card published and accessible to downstream providers
- 20. GPAI providers: Copyright compliance policy established and verifiable
- 21. High-risk AI providers: Risk management system (Art.9) operational and documented
- 22. High-risk AI providers: Conformity assessment initiated with target completion before August 2026 deadline
Ongoing Compliance (Items 23-25)
- 23. Scope review process established: re-run Art.2 analysis when launching in new markets, adding EU-facing features, or changing output distribution
- 24. Monitoring process for Art.2 amendments via Digital Omnibus (when enacted)
- 25. EU jurisdiction documentation (technical docs, FRIA, risk register) stored in EU-sovereign infrastructure to eliminate CLOUD Act compellability exposure
Common Scope Analysis Mistakes
Mistake 1: "We don't market to EU, so we're not in scope"
Marketing intent is irrelevant to Art.2 scope. If EU companies or residents can access and use your AI without restriction, you have placed it on the EU market. Many B2B AI companies discover they are in scope only when an EU enterprise customer requests an AI Act compliance questionnaire.
Fix: Run an active customer/user analysis quarterly. Identify EU entities in your customer base. If present, apply Art.2 triggers.
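The quarterly customer analysis can be a one-liner over billing records. A minimal sketch, assuming hypothetical customer dicts with a `country` ISO code field:

```python
# The 27 EU member states (ISO 3166-1 alpha-2 codes).
EU_COUNTRIES = {
    "AT", "BE", "BG", "HR", "CY", "CZ", "DK", "EE", "FI", "FR",
    "DE", "GR", "HU", "IE", "IT", "LV", "LT", "LU", "MT", "NL",
    "PL", "PT", "RO", "SK", "SI", "ES", "SE",
}

def eu_customers(customers: list[dict]) -> list[dict]:
    # Flag customers whose billing country is an EU member state;
    # any hit means the Art.2 trigger analysis must be applied.
    return [c for c in customers if c.get("country") in EU_COUNTRIES]
```

If the result is non-empty, marketing intent no longer matters: the AI has in practice been made available on the EU market.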
Mistake 2: "Our servers are outside the EU, so EU law doesn't apply"
Art.2(1)(c) explicitly covers third-country providers whose output is used in the EU. Server location is categorically irrelevant to Art.2 scope. This mirrors the GDPR Art.3(2) extraterritorial trigger, which extended GDPR's reach worldwide.
Fix: Replace "where are our servers?" with "where is our AI's output consumed?"
Mistake 3: "We're too small to matter"
Art.99 fines apply based on global annual turnover. A €2M ARR startup using AI for a prohibited practice faces fines of up to €35 million or 7% of global turnover, whichever is higher (Art.99(3)), a ceiling that can exceed the company's entire revenue. The EU AI Act does not include a de minimis turnover threshold for Art.5 violations.
Fix: Conduct Art.5 prohibited practices review regardless of company size. The exemptions for small companies are structural (Art.60 sandbox access, Art.96 Commission guidance) — not an exemption from Art.5.
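The "whichever is higher" structure is what removes any small-company discount. A minimal sketch of the arithmetic, using the Art.5 fine band described above:

```python
def max_art5_fine_exposure(global_turnover_eur: float) -> float:
    # Art.5 violations: up to EUR 35M or 7% of total worldwide annual
    # turnover, whichever is higher. For small companies the fixed
    # EUR 35M ceiling dominates.
    return max(35_000_000.0, 0.07 * global_turnover_eur)

# A EUR 2M ARR startup still faces the full EUR 35M ceiling:
# max_art5_fine_exposure(2_000_000) -> 35_000_000.0
# Only above EUR 500M turnover does the 7% branch take over:
# max_art5_fine_exposure(1_000_000_000) -> 70_000_000.0
```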
Mistake 4: "We have an open-source model, so we're exempt"
Art.2(12) provides partial exemption for open-source GPAI models — specifically from Art.52-56 documentation and CoP obligations — subject to conditions. It does not exempt open-source models from Art.5 prohibited practices. An open-source model used for social scoring or biometric mass surveillance is not exempt.
Fix: Apply Art.5 review to all AI systems regardless of licence type. Apply the Art.2(12) exemption only to Art.52-56 documentation obligations, and verify the model meets all Art.2(12) conditions (genuinely free and open-source licence without monetisation, no systemic-risk classification, not modified and re-released commercially).
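The fix above reduces to a conjunctive gate: every condition must hold, and the result never touches the Art.5 review. A minimal sketch, with parameter names chosen for illustration rather than taken from the Regulation's text:

```python
def art2_12_exemption_applies(
    free_open_source_licence: bool,
    monetised: bool,
    systemic_risk: bool,
    re_released_commercially: bool,
) -> bool:
    # All conditions must hold for the partial (Art.52-56 documentation)
    # exemption; failing any one condition defeats the whole exemption.
    # Note: Art.5 prohibited-practices review applies either way.
    return (
        free_open_source_licence
        and not monetised
        and not systemic_risk
        and not re_released_commercially
    )
```

Treat a `True` result as exempting documentation obligations only; it says nothing about Art.5, high-risk classification, or transparency duties.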
Related Articles
- EU AI Act Art.54: Authorized Representative for Non-EU GPAI Providers
- EU AI Act Art.5: Prohibited AI Practices — Complete Developer Guide
- EU AI Act Art.6: High-Risk AI Classification and Annex III Categories
- EU AI Act Art.52: GPAI Model General Obligations
- EU AI Act Art.103: Entry into Force and Application Dates — Compliance Timeline
- EU AI Act GPAI CoP Chapter 1: Transparency and Capability Evaluation