EU AI Act Art.1: Subject Matter and Scope — What the Regulation Covers and Who It Applies To (2026)
Most developers approaching the EU AI Act start in the wrong place. They go straight to Annex III to check whether their system is high-risk, or they try to determine whether they are a "provider" or a "deployer" under Art.3. These questions matter — but they are downstream of a more fundamental question: what is this regulation actually trying to do?
Article 1 answers that question. It is one of the shortest articles in the regulation and the most frequently overlooked. That is a mistake. Art.1 is the interpretive anchor for the entire EU AI Act. Every gray-area decision — whether a borderline AI system qualifies as high-risk, whether an exemption applies, whether voluntary compliance is commercially advisable — runs through the objectives Art.1 states.
What Article 1 Actually Says
Article 1(1):
The purpose of this Regulation is to improve the functioning of the internal market and promote the uptake of human-centric and trustworthy artificial intelligence (AI), while ensuring a high level of protection of health, safety, fundamental rights enshrined in the Charter, including democracy, the rule of law and environmental protection, against the harmful effects of AI systems in the Union and supporting innovation.
Article 1(2):
This Regulation lays down: (a) harmonised rules for the placing on the market, the putting into service, and the use of AI systems in the Union; (b) prohibitions of certain AI practices; (c) specific requirements for high-risk AI systems and obligations for operators of such systems; (d) harmonised transparency rules for certain AI systems; (e) harmonised rules for the placing on the market of general-purpose AI (GPAI) models; (f) rules on market monitoring, market surveillance, governance and enforcement; (g) measures to support innovation, with a particular focus on SMEs, including start-ups.
Five things in those two paragraphs that developers should internalize:
1. Internal Market Harmonisation Is the Primary Objective
Art.1 is explicit: the regulation's primary legal basis is internal market harmonisation (TFEU Art.114). This is not a fundamental rights instrument in the primary sense — it is a single-market instrument that also protects fundamental rights. The distinction matters for how courts and regulators will interpret ambiguous provisions.
When a regulator decides whether to classify a borderline AI system as high-risk, the question is not only "does this system pose risks?" but also "does treating it as high-risk create a disproportionate barrier to the internal market?" Art.1 keeps both sides of that balance visible.
2. The Risk-Based Approach Is an Architectural Statement
The regulation's structure — prohibited practices (Art.5), high-risk systems (Arts.6-49 and Annex III), GPAI models (Arts.51-56), limited-risk transparency (Art.50), minimal-risk voluntary codes (Art.95) — flows directly from the Art.1 statement that the regulation should "promote the uptake" of AI while "ensuring a high level of protection."
This is not a coincidence. The risk-based architecture is how the regulation attempts to balance these two objectives. Understanding Art.1 means understanding why the risk tiers exist and what function they serve, which in turn helps you classify your system correctly and argue for the right classification when there is genuine ambiguity.
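The tier structure is mechanical enough to capture as a lookup, which helps when one codebase ships several AI features at different tiers. A minimal sketch in Python; the article references follow the list above, and the obligation summaries are illustrative shorthand, not the regulation's wording:

from enum import Enum

class RiskTier(Enum):
    """EU AI Act risk tiers, from most to least restrictive (illustrative summaries)."""
    PROHIBITED = ("Art.5", "banned outright")
    HIGH_RISK = ("Arts.6-49 + Annex III", "full conformity regime")
    GPAI = ("Arts.51-56", "model-level documentation and transparency duties")
    LIMITED_RISK = ("Art.50", "disclosure and transparency obligations")
    MINIMAL_RISK = ("Art.95", "voluntary codes of conduct")

    def __init__(self, articles: str, obligations: str):
        self.articles = articles
        self.obligations = obligations

for tier in RiskTier:
    print(f"{tier.name}: {tier.articles} -> {tier.obligations}")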
3. "Human-Centric and Trustworthy AI" Is an Interpretive Criterion
The phrase "human-centric and trustworthy artificial intelligence" appears in Art.1 as a stated goal. It reappears throughout the regulation's recitals and is the headline of the EU's AI strategy documents. For developers, this phrase is not marketing language — it is a substantive interpretive criterion.
When the language of a specific provision is ambiguous, regulators and courts will interpret it in the direction that best serves "human-centric and trustworthy AI." If your compliance documentation frames your AI system's design choices in those terms — explaining how your human oversight mechanisms, transparency obligations, and risk management architecture serve human-centric AI principles — you are speaking the regulation's own language.
4. The Charter of Fundamental Rights Is Directly Referenced
Art.1 explicitly ties the regulation to the EU Charter of Fundamental Rights: health, safety, and "fundamental rights enshrined in the Charter." This is legally significant. It means that provisions of the EU AI Act must be interpreted consistently with Charter rights, including:
- Art.7 Charter: Right to private and family life → relevant to AI systems processing personal data
- Art.8 Charter: Protection of personal data → intersection with GDPR
- Art.21 Charter: Non-discrimination → directly relevant to high-risk AI in employment, credit, education
- Art.41 Charter: Right to good administration → formally addressed to Union institutions, but reflecting a general principle of EU law relevant to automated decision-making by public bodies
If your AI system implicates Charter rights, that is not just a GDPR or fundamental rights question — it is also an EU AI Act question, because Art.1 makes the Charter directly relevant to how every provision of the regulation should be interpreted.
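One way to make the Charter link operational is a machine-readable map from Charter rights to the system features that implicate them, which can then feed the compliance file (checklist item 12 below records exactly this analysis). A minimal sketch; the feature names are hypothetical:

# Map Charter articles to the AI-system features that implicate them.
# Feature names are hypothetical; populate from your own architecture review.
CHARTER_IMPACT_MAP: dict[str, list[str]] = {
    "Charter Art.7 (private and family life)": ["behavioural_profiling"],
    "Charter Art.8 (personal data protection)": ["training_on_user_data", "inference_logging"],
    "Charter Art.21 (non-discrimination)": ["candidate_ranking_model"],
    "Charter Art.41 (good administration)": [],  # only if output feeds public-sector decisions
}

def implicated_rights(feature: str) -> list[str]:
    """Return the Charter rights a given system feature touches."""
    return [right for right, features in CHARTER_IMPACT_MAP.items() if feature in features]

print(implicated_rights("candidate_ranking_model"))
# ['Charter Art.21 (non-discrimination)']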
5. Art.1(2)(a) Names the Core Territorial Triggers
"the placing on the market, the putting into service, and the use of AI systems in the Union"
Three separate triggers, any one of which brings a system within scope. Art.1(2)(a) names them; Art.2(1) turns them into the operative scope rules:
| Trigger | What It Means | Who It Catches |
|---|---|---|
| Placed on the market | First commercial availability in EU | Non-EU providers selling into EU |
| Put into service | First use in the EU for the system's intended purpose, including a provider's own use | Providers deploying directly; importers and distributors acting for non-EU providers |
| Used in the Union | Active use in the EU, including where only the system's output is used there (Art.2(1)(c)) | Non-EU providers and deployers whose system's output reaches EU persons |
The "used in the Union" trigger is the broadest. A US-based AI company that never sells into the EU market but whose API is called by EU-based developers or whose system processes requests from EU users can be caught by Art.1(2). This is the EU AI Act's extraterritorial reach — narrower than the GDPR's approach but real.
Key Exclusions That Art.1 Implies (and Art.2 Specifies)
Art.1 sets the scope and Art.2 carves out exceptions. The major exclusions developers encounter are:
Military and National Security (Art.2(3))
AI systems developed or used exclusively for military, defence, or national security purposes are fully excluded. This exclusion applies to the AI system itself — not the organisation. A defence contractor who builds AI systems for both military and commercial customers cannot claim the exclusion for the commercial systems.
Law Enforcement Cooperation with Third Countries (Art.2(4))
AI systems used by public authorities in third countries, or by international organisations, in the framework of international law enforcement and judicial cooperation agreements with the Union or Member States fall under a limited exclusion, subject to adequate safeguards. This is narrow and primarily relevant to cross-border law enforcement cooperation.
Scientific Research and Development (Art.2(6))
AI systems developed and tested exclusively for scientific research and development purposes are excluded — but only until they are placed on the market or put into service. A research prototype that never leaves the lab is excluded. The moment a research institute deploys an AI system to support patient triage decisions in a hospital, even as a pilot, the exclusion ends and the full regulation applies.
This has significant implications for university and corporate research teams who build AI prototypes. The transition from research to deployment is a compliance trigger, not a future problem to address when the system is "mature."
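The transition point can be expressed as a concrete test: the exclusion holds only while no output leaves the lab. A minimal sketch, assuming the three illustrative signals below; it also covers the "pilot" scenario discussed under Gray Area 2 further down:

def research_exclusion_applies(
    outputs_affect_real_decisions: bool,  # e.g. a pilot feeding hospital triage
    placed_on_market: bool,               # any commercial availability
    put_into_service: bool,               # deployed for its intended purpose
) -> bool:
    """Art.2(6) research exclusion: holds only while the system stays in the lab."""
    return not (outputs_affect_real_decisions or placed_on_market or put_into_service)

# A "pilot" whose outputs inform real patient triage is already outside the exclusion:
print(research_exclusion_applies(True, False, False))  # False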
Personal and Non-Professional Activities (Art.2(10))
The regulation's deployer obligations do not apply to natural persons using AI systems in the course of a purely personal non-professional activity. The key word is "purely." An AI system used by a sole trader to screen job applicants, even informally, is not a purely personal activity. The exclusion covers home-automation systems, personal finance tools, and personal AI assistants, not business activities conducted by individuals.
The Art.1 → Art.2 → Art.3 Foundation Chain
Before you can answer "is my system high-risk?" or "am I a provider or a deployer?", you must answer three foundation questions in sequence:
Art.1: Is this regulation relevant to what I'm building? (What is it for?)
↓
Art.2: Does this regulation apply to my specific situation? (Who does it cover?)
↓
Art.3(1): Does my software qualify as an "AI system" under the regulation? (What is an AI system?)
↓
Art.3(4)–(12): What is my role? (Provider, deployer, importer, distributor?)
↓
Art.5 / Annex III: What risk tier applies to my system?
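The sequence behaves like a short-circuiting pipeline: each step either stops the analysis or hands off to the next. A minimal sketch, with each step's predicate stubbed out as a placeholder to replace with a real assessment:

from typing import Callable

# Each step returns (passed, note). The lambda stubs are placeholders for real assessments.
FoundationStep = tuple[str, Callable[[], tuple[bool, str]]]

FOUNDATION_CHAIN: list[FoundationStep] = [
    ("Art.1 purpose relevant",        lambda: (True, "regulation addresses this activity")),
    ("Art.2 scope applies",           lambda: (True, "no exclusion; territorial trigger met")),
    ("Art.3(1) AI system definition", lambda: (True, "machine-based, inference-generating system")),
    ("Art.3(4)-(12) role",            lambda: (True, "provider")),
    ("Art.5 / Annex III risk tier",   lambda: (True, "Annex III point 4: employment")),
]

def run_foundation_chain(chain: list[FoundationStep]) -> None:
    """Walk the chain in order; stop at the first step that fails."""
    for name, check in chain:
        passed, note = check()
        print(f"{'PASS' if passed else 'STOP'}  {name}: {note}")
        if not passed:
            return

run_foundation_chain(FOUNDATION_CHAIN)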
Developers who skip steps one and two — going straight to Annex III — make systematic classification errors. The most common is applying the regulation to software that is technically outside its scope (traditional rule-based systems that do not meet the Art.3(1) "AI system" definition), while simultaneously failing to apply it to borderline systems that are inside the scope but not obviously so.
How Art.1 Objectives Guide Gray-Area Decisions
Art.1 is not just foundational theory. It has practical implications for the three most common gray-area questions developers face:
Gray Area 1: Is My System High-Risk Under Annex III?
Annex III lists high-risk AI system categories, but the descriptions contain interpretive flexibility. Art.1's objectives — particularly "ensuring a high level of protection of health, safety, fundamental rights" — push toward inclusive interpretation when the system could plausibly cause those kinds of harm.
A practical example: a recruitment AI tool that ranks candidates is listed in Annex III (point 4, employment). But what about a tool that merely filters CVs based on keyword matching to generate a shortlist for human review? A literal reading might exclude it. Art.1's fundamental rights objective (non-discrimination, right to work) pushes toward inclusion. Regulators applying Art.1's interpretive framework are likely to classify keyword-filtering recruitment tools that affect hiring decisions as within Annex III, not outside it.
Gray Area 2: Does the Research Exclusion Apply to My Pilot?
Many enterprise AI teams run "pilots" that are operationally indistinguishable from production deployments — the output affects real decisions, real users, and real outcomes. Art.1's objective of protecting against "the harmful effects of AI systems in the Union" is concerned with real harms, not with formal labels. A pilot that generates real outputs affecting real people is not a research prototype. Art.1 removes any comfort that informal labelling creates.
Gray Area 3: Is Voluntary Art.95 Compliance Worth It for My System?
Art.1 frames the regulation as promoting the "uptake of human-centric and trustworthy AI." Art.95 voluntary codes of conduct are the mechanism by which non-high-risk AI providers signal trustworthiness. Art.1's objective is directly served by Art.95 participation. This matters for procurement and insurance contexts: when contracting authorities or underwriters evaluate an AI vendor's compliance posture, they are implicitly applying Art.1's standard, and Art.95 code adherence is the most direct way to demonstrate alignment with it.
CLOUD Act Dimension: Art.1's Sovereignty Objective
Art.1 mentions "Union values" and the need to ensure AI operates "in accordance with Union values." Recitals 1–5 of the EU AI Act expand on this, framing the regulation as an expression of EU technological sovereignty and the EU's right to set standards for AI operating within its market.
This sovereignty framing is directly relevant to CLOUD Act conflict analysis. When an AI system's monitoring data, training data, or compliance documentation is stored on US cloud infrastructure (AWS, Azure, GCP), the US Clarifying Lawful Overseas Use of Data (CLOUD) Act creates a potential for US government access to that data through legal process directed at the cloud provider — regardless of where the data is physically located and regardless of EU data protection law.
For AI systems subject to the EU AI Act, the sovereignty objective in Art.1 creates an implicit preference for infrastructure arrangements that keep compliance-relevant data under EU jurisdiction. This is not yet a mandatory requirement — but it is an interpretive signal that regulators applying Art.1's Union-values framing may treat jurisdictional clarity as a compliance quality factor in enforcement decisions.
Practical CLOUD Act risk for Art.1-scoped AI systems:
| Infrastructure | CLOUD Act Exposure | Art.1 Alignment |
|---|---|---|
| US hyperscaler (AWS/Azure/GCP) | US government can compel disclosure of compliance data | Partial — depends on contractual safeguards |
| EU-headquartered cloud with no US parent | No direct CLOUD Act exposure (absent another US jurisdictional nexus) | Full alignment with Union values objective |
| On-premises in EU member state | No CLOUD Act exposure | Full alignment |
Python: Art.1 Foundation Chain Classifier
from dataclasses import dataclass, field
from enum import Enum


class TerritorialTrigger(Enum):
    """Art.1(2)(a) triggers, operationalised in Art.2(1)."""
    PLACED_ON_MARKET = "placed_on_market"
    PUT_INTO_SERVICE = "put_into_service"
    USED_IN_UNION = "used_in_union"
    NONE = "none"


class ScopeExclusion(Enum):
    """Art.2 exclusions most relevant to developers."""
    MILITARY_DEFENCE = "military_defence"
    NATIONAL_SECURITY = "national_security"
    SCIENTIFIC_RESEARCH_ONLY = "scientific_research_only"
    PERSONAL_NON_PROFESSIONAL = "personal_non_professional"
    LAW_ENFORCEMENT_THIRD_COUNTRY = "law_enforcement_third_country"
    NONE = "none"


@dataclass
class Art1ScopeAssessment:
    system_name: str
    territorial_triggers: list[TerritorialTrigger] = field(default_factory=list)
    exclusion: ScopeExclusion = ScopeExclusion.NONE
    research_to_deployment_transition: bool = False
    cloud_infrastructure: str = "unknown"  # "eu_sovereign", "us_hyperscaler", "on_premises"

    def is_in_scope(self) -> bool:
        # The Art.2(6) research exclusion lapses once the system is deployed;
        # every other exclusion takes the system out of scope entirely.
        if self.exclusion == ScopeExclusion.SCIENTIFIC_RESEARCH_ONLY:
            if not self.research_to_deployment_transition:
                return False
        elif self.exclusion != ScopeExclusion.NONE:
            return False
        # In scope only if at least one real territorial trigger is met.
        return any(t != TerritorialTrigger.NONE for t in self.territorial_triggers)

    def cloud_act_risk(self) -> str:
        if self.cloud_infrastructure == "us_hyperscaler":
            return "HIGH: compliance data accessible via CLOUD Act; consider EU-sovereign infrastructure"
        if self.cloud_infrastructure == "eu_sovereign":
            return "LOW: single EU legal jurisdiction; aligns with Art.1 Union values objective"
        if self.cloud_infrastructure == "on_premises":
            return "NONE: physical control in EU jurisdiction"
        return "UNKNOWN: assess infrastructure jurisdiction before compliance planning"

    def scope_summary(self) -> dict:
        return {
            "system": self.system_name,
            "in_scope": self.is_in_scope(),
            "triggers": [t.value for t in self.territorial_triggers],
            "exclusion": self.exclusion.value,
            "cloud_act_risk": self.cloud_act_risk(),
            "next_step": (
                "Proceed to Art.2 territorial scope analysis, then Art.3(1) AI system definition"
                if self.is_in_scope()
                else f"Excluded under {self.exclusion.value}: document exclusion basis"
            ),
        }


# Example: enterprise SaaS AI tool sold to and used by EU customers
assessment = Art1ScopeAssessment(
    system_name="RecruitmentRankingAPI",
    territorial_triggers=[TerritorialTrigger.USED_IN_UNION, TerritorialTrigger.PLACED_ON_MARKET],
    exclusion=ScopeExclusion.NONE,
    cloud_infrastructure="us_hyperscaler",
)
print(assessment.scope_summary())
# {'system': 'RecruitmentRankingAPI', 'in_scope': True, 'triggers': ['used_in_union', 'placed_on_market'],
#  'exclusion': 'none', 'cloud_act_risk': 'HIGH: ...', 'next_step': 'Proceed to Art.2 ...'}
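A second worked example exercises the research-exclusion transition: once research_to_deployment_transition is set, the same dataclass reports the system as in scope. The hospital pilot is hypothetical, continuing the sketch above:

# Example: research prototype that crossed into a live hospital pilot
pilot = Art1ScopeAssessment(
    system_name="TriagePilotModel",
    territorial_triggers=[TerritorialTrigger.PUT_INTO_SERVICE],
    exclusion=ScopeExclusion.SCIENTIFIC_RESEARCH_ONLY,
    research_to_deployment_transition=True,  # outputs now affect real patients
    cloud_infrastructure="on_premises",
)
print(pilot.scope_summary()["in_scope"])  # True: the Art.2(6) exclusion has ended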
20-Item Art.1 Foundation Checklist
Scope determination (Items 1–8)
- 1. Confirmed which Art.1(2) trigger applies: placed on market, put into service, or used in Union
- 2. Assessed whether the "used in Union" trigger applies to API calls or outputs affecting EU users
- 3. Documented the exclusion analysis: military, national security, research, or personal use
- 4. For research exclusions: confirmed the research-to-deployment transition point and compliance trigger
- 5. Verified that "personal" use exclusion is genuinely personal and non-professional, not sole-trader or freelance business use
- 6. Confirmed system meets Art.3(1) AI system definition before proceeding with compliance planning
- 7. Mapped the Art.1 → Art.2 → Art.3 foundation chain for each AI product or service
- 8. Identified all territorial triggers across the full product portfolio, not just primary product
Interpretive alignment (Items 9–14)
- 9. Reviewed Art.1 objectives (health, safety, fundamental rights) as interpretive guide for Annex III classification
- 10. For borderline high-risk classification: applied Art.1 fundamental rights lens to assess which direction uncertainty resolves
- 11. Documented how system design choices serve Art.1's "human-centric and trustworthy AI" objective
- 12. Identified which EU Charter rights are implicated by the AI system and documented intersection with AI Act obligations
- 13. Confirmed that any "pilot" or "beta" deployment affecting real users and real decisions is treated as full deployment for Art.1 purposes
- 14. Assessed whether Art.95 voluntary code adherence serves Art.1's uptake promotion objective and commercial interests
Infrastructure and sovereignty (Items 15–20)
- 15. Assessed cloud infrastructure jurisdiction: EU-sovereign, US hyperscaler, or on-premises
- 16. Documented CLOUD Act exposure for compliance-relevant data (training data, monitoring logs, technical documentation)
- 17. Evaluated contractual mechanisms (EU SCCs, data processing agreements) for US hyperscaler CLOUD Act mitigation
- 18. Identified whether the infrastructure arrangement affects regulator access to compliance data (e.g. AI Office documentation and evaluation powers, Arts.91–92, for GPAI models)
- 19. Confirmed that Art.1's Union-values framing is incorporated into compliance documentation narrative
- 20. Planned Art.1 foundation chain documentation as the opening section of the technical compliance file
EU AI Act Article 1 is where compliance planning starts — not Annex III. The objectives Art.1 states determine how every ambiguous provision in the 112 articles that follow will be interpreted. Getting Art.1 right means every downstream decision is built on a correct foundation.
EU-sovereign infrastructure eliminates the CLOUD Act conflict that creates jurisdictional uncertainty for compliance programmes built on US cloud. Deploy on sota.io
See Also
- EU AI Act Art.2: Territorial Scope — Who the Regulation Applies To
- EU AI Act Art.3(1): 'AI System' Definition and the April 2026 Commission Guidelines
- EU AI Act Art.3(4)–(12): Provider, Deployer, Importer — Role Classification Guide
- EU AI Act Art.95: Codes of Conduct for Voluntary AI Compliance — Developer Guide