EU AI Act Art.103: Entry into Force and Application Dates — Compliance Timeline Developer Guide (2026)
Most EU regulations enter into force and apply simultaneously. The EU AI Act does not. Article 103 established a layered schedule: the Regulation entered into force in August 2024, but different provisions became applicable at different times — and not all obligations apply to all AI systems on the same date.
For developers and compliance teams, this tiered structure has direct consequences: the date a requirement becomes "applicable" is the date enforcement of that requirement can begin, not merely the date companies are expected to start thinking about it. As of April 2026, the general application deadline is four months away. Understanding what is already in force, what becomes enforceable in August 2026, and what Annex I product developers have until August 2027 to complete — is not academic. It is operational.
What Article 103 Actually Establishes
Article 103 contains two core elements:
1. Entry into Force: The Regulation entered into force on the twentieth day following its publication in the Official Journal of the European Union. The AI Act (Regulation (EU) 2024/1689) was published on 12 July 2024. Entry into force: 1 August 2024.
2. Application Schedule: Entry into force and applicability are different legal concepts. Entry into force means the Regulation exists as binding law. Applicability means the provisions create enforceable obligations. Article 103 establishes that the full Regulation becomes applicable on 2 August 2026, with two earlier application dates for specific chapters and a deferred date (2 August 2027) for Art.6(1) Annex I product-embedded AI systems.
The schedule in full:
| Date | Trigger | Provisions Applicable |
|---|---|---|
| 2024-08-01 | Entry into Force | Regulation exists as EU law. No chapter-level obligations yet enforceable. |
| 2025-02-02 | +6 months | Chapter I (General Provisions, Art.1-4) + Chapter II (Prohibited AI Practices, Art.5) |
| 2025-08-02 | +12 months | Chapter III Section 4 (notified bodies) + Chapter V (GPAI, Art.51-56) + Chapter VII (Governance) + penalty provisions (except Art.101) |
| 2026-08-02 | +24 months | Full general application: all remaining provisions including Chapter III (high-risk AI) |
| 2027-08-02 | +36 months | Art.6(1) and corresponding obligations for Annex I product-embedded AI systems |
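Resolved programmatically, the schedule above reduces to a date lookup. A minimal sketch — dates and labels are taken from the table; `applicable_tiers` is an illustrative name, not an official tool:

```python
from datetime import date

# Application tiers from the schedule above (date -> provisions that begin to apply)
MILESTONES = [
    (date(2025, 2, 2), "Chapters I-II: definitions + prohibited practices"),
    (date(2025, 8, 2), "Chapter V (GPAI) + Chapter VII (governance) + penalties"),
    (date(2026, 8, 2), "Full general application incl. Chapter III (high-risk)"),
    (date(2027, 8, 2), "Art.6(1) obligations for Annex I product-embedded AI"),
]

def applicable_tiers(on: date) -> list[str]:
    """Return every tier whose application date has been reached by `on`."""
    return [label for d, label in MILESTONES if d <= on]

# As of April 2026, the first two tiers are live
print(applicable_tiers(date(2026, 4, 1)))
```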
The Six Compliance Milestones in Depth
Milestone 1: 2024-08-01 — Entry into Force
Entry into force means the Regulation has been formally adopted, is binding EU law, and cannot be ignored as a draft. It does not mean enforcement begins. From 1 August 2024:
- The Regulation is a valid EU legal instrument
- Member States must designate national competent authorities (notifying authorities and market surveillance authorities) by 2 August 2025
- The AI Office is being established within the Commission
- Companies should treat this as the start of their compliance planning period, not the start of their compliance obligation
What developers should have done from entry into force (EIF): Gap analysis against the full Regulation; a classification audit of AI systems being built or planned; identification of which application tier applies to each product. Companies that waited until 2025 or 2026 are now working under time pressure that proactive EIF-period analysis would have prevented.
Milestone 2: 2025-02-02 — Prohibited Practices and General Definitions Apply
Already applicable — since 2 February 2025, 14 months ago.
Chapter I establishes definitions that apply throughout the Regulation. Chapter II — Article 5 — establishes the eight categories of prohibited AI practices. From 2 February 2025, deploying any of these practices constitutes a violation of the Regulation; national authorities can investigate it and, since the penalty provisions became applicable in August 2025, fine it under Art.99(3).
The eight prohibited practices from Art.5:
- Subliminal manipulation — AI that influences behavior through techniques below conscious perception
- Exploitation of vulnerabilities — AI that exploits age, disability, or socioeconomic vulnerability to distort decision-making
- Social scoring — AI systems that evaluate or classify people based on social behaviour or personal characteristics and apply detrimental or unfavourable treatment (the final text is not limited to public authorities)
- Real-time biometric identification in public spaces — Remote biometric identification systems used by law enforcement in real-time (with narrow exceptions)
- Emotion recognition in workplace/education — AI that infers emotions of workers or students (with limited exception)
- Biometric categorization by sensitive characteristics — AI that categorizes individuals by race, political opinion, religion, or sexual orientation based on biometrics
- Predictive policing — AI that predicts criminal behavior based on personality traits or profiling
- Untargeted facial image scraping — Building facial recognition databases by scraping the internet or CCTV footage
For developers: If any current product capabilities map to Art.5 prohibited practices, the organization has been in violation since February 2025. Art.99(3) provides for fines up to €35M or 7% of global annual turnover, whichever is higher, for prohibited practice violations — enforceable now.
Chapter I definitions (Art.3) are also binding from this date, meaning definitions of "AI system," "provider," "deployer," "intended purpose," "reasonably foreseeable misuse," and others now have their EU AI Act meanings for classification purposes.
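For teams running the Art.5 audit described above, a capability screen can make the first pass mechanical. A hedged sketch — the category keys are shorthand labels for the eight practices, not legal definitions, and a match only flags a capability for legal review:

```python
# Illustrative Art.5 screening sketch: flag declared product capabilities that
# map to a prohibited-practice category. Keys are shorthand labels from the
# list above, not legal terms — a hit means "escalate", not "guilty".
PROHIBITED_CATEGORIES = {
    "subliminal_manipulation",
    "vulnerability_exploitation",
    "social_scoring",
    "realtime_remote_biometric_id",
    "workplace_emotion_recognition",
    "sensitive_biometric_categorisation",
    "predictive_policing_profiling",
    "untargeted_facial_scraping",
}

def screen_capabilities(declared: set[str]) -> set[str]:
    """Return the declared capabilities that fall in a prohibited category."""
    return declared & PROHIBITED_CATEGORIES

flagged = screen_capabilities({"credit_scoring", "workplace_emotion_recognition"})
print(flagged)  # {'workplace_emotion_recognition'}
```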
Milestone 3: 2025-08-02 — GPAI and Governance Apply
Already applicable — since 2 August 2025, 8 months ago.
Three major sets of provisions became applicable on 2 August 2025:
Chapter V — General-Purpose AI Models (Art.51-56):
GPAI providers — companies that develop and place on the market foundation models — have been under binding obligation for eight months. Core GPAI obligations in force:
- Art.53 — GPAI provider obligations: Technical documentation (Annex XI/XII), provision of information to downstream providers, copyright compliance policy, training data summary publication
- Art.55 — Systemic risk obligations: Adversarial testing, incident reporting to AI Office, cybersecurity measures (applies only to GPAI models with systemic risk designation)
- Art.56 — Codes of practice: GPAI providers are expected to adhere to codes of practice developed under AI Office guidance
Chapter VII — Governance:
The European AI Office within the Commission, the European AI Board, the Advisory Forum, and the Scientific Panel of independent experts are operational. Member States have designated (or are obligated to designate) national competent authorities. This governance infrastructure is the enforcement machinery — it has been running since August 2025.
Penalty provisions applicable from August 2025:
The Chapter XII penalty provisions (Art.99-101) became applicable from August 2025 — with the exception of Art.101, the fines framework for GPAI providers, which applies from 2 August 2026. This means that since August 2025, Member States have had enforcement authority over prohibited practice violations that began in February 2025, while Commission fining powers over GPAI providers arrive with general application. The administrative fine framework is live.
For developers building on GPAI models: The Art.53 information provision obligations apply to the GPAI model provider, not to you as a downstream developer. However, if you are building a GPAI-enabled product that you categorize as "not high-risk" and the GPAI model you use has not provided the information required by Art.53(1)(b), you may face classification challenges. Verify your GPAI model provider is compliant before your own 2026-08-02 deadline.
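That upstream verification can be tracked as a simple checklist object. A sketch under stated assumptions — the class and field names are illustrative, not terms from the Regulation:

```python
from dataclasses import dataclass

# Hedged sketch: record whether an upstream GPAI model provider has supplied
# the Art.53 artefacts a downstream integrator should verify before its own
# Aug 2026 deadline. Field names are illustrative.
@dataclass
class UpstreamGPAICheck:
    model_name: str
    has_downstream_information_pack: bool  # Art.53(1)(b)
    has_copyright_policy: bool             # Art.53(1)(c)
    has_training_data_summary: bool        # Art.53(1)(d)

    def gaps(self) -> list[str]:
        """List the Art.53 artefacts the upstream provider has not supplied."""
        missing = []
        if not self.has_downstream_information_pack:
            missing.append("Art.53(1)(b) downstream information missing")
        if not self.has_copyright_policy:
            missing.append("Art.53(1)(c) copyright policy missing")
        if not self.has_training_data_summary:
            missing.append("Art.53(1)(d) training data summary missing")
        return missing

check = UpstreamGPAICheck("example-model", True, True, False)
print(check.gaps())  # ['Art.53(1)(d) training data summary missing']
```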
Milestone 4: 2026-08-02 — General Application (Four Months Away)
This is the deadline that matters for most AI developers. From 2 August 2026, all remaining provisions of the Regulation apply — including:
Chapter III — High-Risk AI Systems:
The full requirements of Art.8-15 apply to all high-risk AI systems:
- Art.9: Risk management system (documented, tested, continuously updated)
- Art.10: Training data governance (quality criteria, bias monitoring)
- Art.11: Technical documentation (Annex IV format)
- Art.12: Record-keeping and logging
- Art.13: Transparency and instructions for use
- Art.14: Human oversight measures
- Art.15: Accuracy, robustness, cybersecurity
Provider obligations (Art.16-20):
- Quality management system (Art.17)
- EU database registration (Art.49)
- Conformity assessment (Art.43)
- CE marking (Art.48)
Deployer obligations (Art.26-27):
- Human oversight implementation
- Post-market monitoring
- Incident reporting to the market surveillance authority (MSA)
What "applicable" means on 2026-08-02: Market surveillance authorities in all EU Member States become empowered to investigate high-risk AI systems for Chapter III compliance. Art.74 market surveillance powers activate. Art.88-94 investigation procedures can be initiated. Art.99 fines for Chapter III violations can be imposed.
The current compliance window: Companies that have been preparing for two years are in a strong position. Companies that treated the two-year period as "not our problem yet" have four months to complete what should have been a two-year programme. The practical reality is that conformity assessment, technical documentation, and quality management system implementation are not four-month projects for complex high-risk AI systems.
| Obligation | Typical Implementation Timeline | Status for Late Starters |
|---|---|---|
| Risk management system | 6-12 months | Behind |
| Technical documentation (Annex IV) | 3-6 months | Achievable if started now |
| Quality management system | 6-18 months | Behind |
| Conformity assessment | 3-12 months (depending on third-party requirement) | Marginal |
| EU database registration | 1-2 months (after docs complete) | Achievable |
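The status column in the table above follows from simple date arithmetic. A sketch using the typical durations above — months are approximated as 30 days, and the thresholds are planning heuristics, not regulatory rules:

```python
from datetime import date, timedelta

GENERAL_APPLICATION = date(2026, 8, 2)

# Typical implementation durations in months, from the table above
# (planning heuristics, not regulatory requirements)
TYPICAL_MONTHS = {
    "risk_management_system": (6, 12),
    "technical_documentation": (3, 6),
    "quality_management_system": (6, 18),
    "conformity_assessment": (3, 12),
    "eu_database_registration": (1, 2),
}

def feasible_by_deadline(obligation: str, start: date) -> str:
    """Classify whether an obligation started on `start` can finish in time."""
    lo, hi = TYPICAL_MONTHS[obligation]
    best = start + timedelta(days=lo * 30)   # optimistic finish
    worst = start + timedelta(days=hi * 30)  # pessimistic finish
    if worst <= GENERAL_APPLICATION:
        return "achievable"
    if best <= GENERAL_APPLICATION:
        return "marginal"
    return "behind"

print(feasible_by_deadline("risk_management_system", date(2026, 4, 1)))  # behind
```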
Milestone 5: 2027-08-02 — Annex I Product-Embedded AI Systems
Art.6(1) and its corresponding obligations in the Regulation apply from 2 August 2027 for high-risk AI systems that are safety components of products covered by Annex I harmonised legislation.
Annex I product categories (Section A, non-exhaustive):
- Machinery (Machinery Regulation 2023/1230)
- Medical devices (MDR 2017/745, IVDR 2017/746)
- Toys (Toy Safety Directive)
- Recreational craft
- Lifts
- Equipment for explosive atmospheres
- Radio equipment
- Pressure equipment
- Cableway installations
Why Annex I products get 12 extra months:
AI systems embedded in physical products covered by product safety legislation face dual compliance: they must meet both the product-specific conformity requirements and the AI Act requirements. Notified Bodies under product safety legislation are not automatically qualified to conduct AI Act conformity assessments — the certification infrastructure requires development. The 36-month timeline recognises this institutional complexity.
Developer implication: If you are building AI capabilities that are safety components of Annex I products, your planning timeline extends to August 2027 — but not your preparation timeline. Notified Body engagement for AI Act + product safety combined assessments should be initiated now, since Notified Body capacity is finite and appointment lead times are significant.
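The back-scheduling implied here is worth making concrete. A sketch — month lengths are approximated as 30 days, and the 3-month buffer is an assumed planning margin, not a regulatory figure:

```python
from datetime import date, timedelta

ANNEX_I_DEADLINE = date(2027, 8, 2)

def latest_engagement_start(lead_time_months: int, buffer_months: int = 3) -> date:
    """Back-schedule Notified Body engagement from the Annex I deadline.

    Illustrative planning arithmetic: subtract the expected assessment lead
    time plus a safety buffer (months approximated as 30 days).
    """
    total_days = (lead_time_months + buffer_months) * 30
    return ANNEX_I_DEADLINE - timedelta(days=total_days)

# A 12-month medical-device lead time plus a 3-month buffer puts the latest
# sensible start in spring 2026 — i.e. effectively now.
print(latest_engagement_start(12))  # 2026-05-09
```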
The Transitional Provisions Interaction
Article 111 of the Regulation establishes transitional provisions for AI systems already placed on the market before the applicable dates:
For high-risk AI systems generally: High-risk AI systems placed on the market or put into service before 2 August 2026 fall within the Regulation's scope only if, after that date, they are subject to significant changes in their designs (Art.111(2)). This is not a blanket exemption: providers and deployers of high-risk AI systems intended for use by public authorities must comply by 2 August 2030 regardless.
For GPAI models and large-scale IT systems: Providers of GPAI models placed on the market before 2 August 2025 must comply by 2 August 2027 (Art.111(3)). AI systems that are components of the large-scale IT systems listed in Annex X and placed on the market before 2 August 2027 must be brought into compliance by 31 December 2030 (Art.111(1)).
Significant design changes end the legacy treatment: If an already-deployed high-risk AI system undergoes a significant change in its design — which can include changes to intended purpose, training data, or performance characteristics — the legacy carve-out no longer applies. The modified system is treated as newly in scope and must comply with the full Regulation, and under Art.25 a substantial modification can also make the modifier the system's provider.
For developers maintaining existing AI systems: Every product update should be evaluated against this standard. A routine bug fix does not bring a legacy system into scope; a redesign that changes the AI system's intended purpose or significantly alters its performance characteristics may.
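That update-evaluation habit can be encoded as a release-gate triage. A hedged sketch — the change-type labels and their mapping are illustrative, and any flagged change still needs legal review against the Regulation's definitions:

```python
# Illustrative release-gate triage: flag change types that may defeat the
# Art.111 legacy treatment or count as a substantial modification. The
# labels and mapping are assumptions for this sketch, not legal categories.
CLOCK_RESETTING_CHANGES = {
    "intended_purpose_change",
    "training_data_overhaul",
    "performance_characteristic_shift",
}
ROUTINE_CHANGES = {"bug_fix", "security_patch", "ui_copy_update"}

def triage_update(change_types: set[str]) -> str:
    """Classify a release by whether its changes warrant legal escalation."""
    if change_types & CLOCK_RESETTING_CHANGES:
        return "ESCALATE: possible substantial modification — legal review"
    if change_types <= ROUTINE_CHANGES:
        return "ROUTINE: transition status likely unaffected"
    return "UNKNOWN: classify change types before release"

print(triage_update({"bug_fix"}))  # ROUTINE: transition status likely unaffected
```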
CLOUD Act Intersection with Application Date Compliance
The Art.103 timeline creates a specific CLOUD Act risk pattern for compliance documentation:
Documentation created during the gap period (EIF to first application date): Companies that created compliance documentation on US cloud infrastructure before 2025-02-02 may face CLOUD Act compellability for that documentation in US enforcement contexts, even though EU enforcement authority did not yet apply to that documentation. Pre-compliance planning documents are not privileged from CLOUD Act production unless they are attorney-client privileged communications.
Documentation created for the 2026-08-02 deadline: If your technical documentation, conformity assessment reports, risk management records, and quality management system evidence is stored on US cloud infrastructure, US law enforcement can access it under CLOUD Act independently of the EU MSA investigation timeline. In a concurrent US-EU AI enforcement proceeding, your compliance documentation may reach US authorities through CLOUD Act before the EU investigation has formally opened.
The timeline-specific recommendation: Create and store your Art.9-15 compliance documentation on EU-sovereign infrastructure from the outset. The closer you get to 2026-08-02, the more valuable your compliance documents become — and the higher the risk of unintended disclosure through CLOUD Act channels if stored on US-controlled infrastructure.
Python Tooling for Application Date Tracking
```python
from dataclasses import dataclass, field
from datetime import date

# EU AI Act application milestones
EU_AI_ACT_MILESTONES = {
    "entry_into_force": date(2024, 8, 1),
    "prohibited_practices": date(2025, 2, 2),  # Chapter I + II
    "gpai_governance": date(2025, 8, 2),       # Chapter V + VII + penalties
    "general_application": date(2026, 8, 2),   # Full regulation
    "annex_i_products": date(2027, 8, 2),      # Art.6(1) Annex I products
}


@dataclass
class AISystemComplianceTimeline:
    """
    Maps an AI system to its applicable Art.103 compliance deadlines.
    Determines which milestones are already passed and which are upcoming.
    """
    system_name: str
    is_gpai_model: bool
    is_annex_iii_high_risk: bool
    is_annex_i_product_embedded: bool
    already_on_market_before_2026: bool = False
    substantially_modified_since_market: bool = False

    def applicable_deadlines(self) -> dict:
        today = date.today()
        deadlines = {}

        # Prohibited practices: applies to all AI systems
        prohibited = EU_AI_ACT_MILESTONES["prohibited_practices"]
        deadlines["prohibited_practices"] = {
            "date": prohibited,
            "applies": True,
            "already_passed": prohibited < today,
            "status": "PAST - Art.5 violations enforceable since Feb 2025"
            if prohibited < today
            else "UPCOMING",
        }

        # GPAI obligations
        if self.is_gpai_model:
            gpai = EU_AI_ACT_MILESTONES["gpai_governance"]
            deadlines["gpai_obligations"] = {
                "date": gpai,
                "applies": True,
                "already_passed": gpai < today,
                "status": "PAST - Art.53/55 obligations enforceable since Aug 2025"
                if gpai < today
                else "UPCOMING",
            }

        # General application for high-risk AI
        if self.is_annex_iii_high_risk:
            # Art.111(2): systems already on the market before 2 Aug 2026 are in
            # scope only if significantly changed in design after that date;
            # public-authority-use systems must comply by 2 Aug 2030
            if self.already_on_market_before_2026 and not self.substantially_modified_since_market:
                effective_deadline = date(2030, 8, 2)
                note = ("Art.111(2) legacy system — in scope only on significant "
                        "design change; public-authority use must comply by Aug 2030")
            else:
                effective_deadline = EU_AI_ACT_MILESTONES["general_application"]
                note = "Full Chapter III compliance required from Aug 2026"
            deadlines["high_risk_chapter_iii"] = {
                "date": effective_deadline,
                "applies": True,
                "days_remaining": (effective_deadline - today).days if effective_deadline > today else 0,
                "already_passed": effective_deadline < today,
                "note": note,
            }

        # Annex I product-embedded AI
        if self.is_annex_i_product_embedded:
            annex_i = EU_AI_ACT_MILESTONES["annex_i_products"]
            deadlines["annex_i_product_deadline"] = {
                "date": annex_i,
                "applies": True,
                "days_remaining": (annex_i - today).days if annex_i > today else 0,
                "note": "Art.6(1) extended deadline — dual compliance: AI Act + Annex I product safety",
            }

        return deadlines

    def days_to_general_deadline(self) -> int:
        delta = EU_AI_ACT_MILESTONES["general_application"] - date.today()
        return max(0, delta.days)

    def urgency_level(self) -> str:
        days = self.days_to_general_deadline()
        if self.is_annex_iii_high_risk and days <= 120:
            return "CRITICAL — general application in < 4 months"
        elif self.is_annex_iii_high_risk and days <= 270:
            return "HIGH — 6-9 months to general application"
        elif self.is_annex_iii_high_risk:
            return "MEDIUM — begin compliance programme now"
        elif self.is_gpai_model:
            return "HIGH — GPAI obligations already applicable"
        else:
            return "STANDARD — monitor Art.5 for all AI systems"


@dataclass
class Art103CompliancePlanner:
    """
    Organisation-level Art.103 compliance calendar.
    Aggregates deadlines across multiple AI systems.
    """
    organisation_name: str
    ai_systems: list[AISystemComplianceTimeline] = field(default_factory=list)

    def critical_near_term_actions(self) -> list[str]:
        """Returns prioritised actions for the next 90 days."""
        actions = []
        days_to_general = (EU_AI_ACT_MILESTONES["general_application"] - date.today()).days

        high_risk_systems = [s for s in self.ai_systems if s.is_annex_iii_high_risk]
        gpai_systems = [s for s in self.ai_systems if s.is_gpai_model]

        if gpai_systems:
            actions.append(
                f"OVERDUE: {len(gpai_systems)} GPAI system(s) — Art.53/55 obligations "
                f"already applicable since Aug 2025. Audit immediately."
            )
        if high_risk_systems and days_to_general <= 120:
            actions.append(
                f"CRITICAL: {len(high_risk_systems)} high-risk system(s) — {days_to_general} days "
                f"to Aug 2026 deadline. Conformity assessment must begin immediately."
            )

        # Check Art.111(2) legacy eligibility
        legacy_eligible = [
            s for s in high_risk_systems
            if s.already_on_market_before_2026 and not s.substantially_modified_since_market
        ]
        if legacy_eligible:
            actions.append(
                f"PLANNING: {len(legacy_eligible)} system(s) may fall under the Art.111(2) "
                f"legacy carve-out. Document market-placement status before Aug 2026."
            )
        return actions


# Usage example
system = AISystemComplianceTimeline(
    system_name="Customer Risk Scoring Engine",
    is_gpai_model=False,
    is_annex_iii_high_risk=True,  # Annex III category 5(b): creditworthiness
    is_annex_i_product_embedded=False,
    already_on_market_before_2026=False,
)

print(f"Days to general deadline: {system.days_to_general_deadline()}")
print(f"Urgency: {system.urgency_level()}")
for name, deadline in system.applicable_deadlines().items():
    print(f"\n{name}: {deadline}")
```
The Art.103 Compliance Timeline: Where Developers Are Now
As of April 2026, here is the practical status for each application milestone:
2025-02-02 (Prohibited Practices) — 14 months passed: Any Art.5 violation in a deployed product is an active compliance breach. MSAs can open investigations. Fines can be imposed. If an audit has not been conducted to verify no prohibited practices are deployed, that audit is overdue.
2025-08-02 (GPAI) — 8 months passed: GPAI providers should have technical documentation, training data summaries, and copyright compliance policies complete. The AI Office has enforcement authority. Codes of practice are being developed; participation in or monitoring of these codes is now operationally relevant.
2026-08-02 (General Application) — 4 months remaining: Four months is insufficient to build a compliant high-risk AI system from scratch. It is sufficient to complete documentation and conformity assessment for systems where technical development is already complete. The immediate priority for Annex III high-risk systems is: (1) complete technical documentation (Annex IV format), (2) complete conformity assessment (Art.43), (3) register in EU database (Art.49), (4) CE mark if required.
2027-08-02 (Annex I Products) — 16 months remaining: For product-embedded AI systems, 16 months is adequate for combined AI Act + product safety conformity assessment if Notified Body engagement begins immediately. Lead times for Notified Body assessment in some sectors (medical devices especially) can exceed 12 months.
The 30-Item Art.103 Application Dates Compliance Checklist
Entry into Force (2024-08-01) — Baseline:
1. ☐ AI Act (Regulation (EU) 2024/1689) official text reviewed and accessible to compliance team
2. ☐ AI system inventory created — all AI systems in development or deployed identified
3. ☐ Classification analysis begun — which systems are high-risk (Annex III), general-purpose, Annex I-embedded
4. ☐ Compliance owner designated for each AI system or product line
5. ☐ Compliance programme timeline set based on Art.103 milestones
Prohibited Practices Deadline (2025-02-02) — Should Be Complete:
6. ☐ Art.5 prohibited practices audit completed for all deployed AI systems
7. ☐ No subliminal manipulation capabilities deployed
8. ☐ No vulnerability exploitation capabilities deployed
9. ☐ No social scoring AI deployed
10. ☐ Biometric identification systems reviewed for Art.5 prohibition scope
11. ☐ Emotion recognition in workplace/education reviewed
12. ☐ Art.3 definitions applied to classify AI systems — "AI system" defined, "provider" vs "deployer" clear
GPAI Deadline (2025-08-02) — Should Be Complete (if GPAI provider):
13. ☐ GPAI model identified as a general-purpose AI model under the Art.3(63) definition
14. ☐ Technical documentation (Annex XI/XII) prepared and maintained
15. ☐ Information provision to downstream providers implemented (Art.53(1)(b))
16. ☐ Copyright compliance policy implemented (Art.53(1)(c))
17. ☐ Training data summary published (Art.53(1)(d)) — public URL accessible
18. ☐ Codes of practice monitored / adherence pathway confirmed
19. ☐ Systemic risk designation assessment — is model ≥ 10^25 FLOPs threshold?
20. ☐ If systemic risk: adversarial testing plan, incident reporting pipeline, cybersecurity measures
General Application Deadline (2026-08-02) — Priority Now:
21. ☐ Risk management system (Art.9) documented and tested
22. ☐ Training data governance (Art.10) quality criteria and bias monitoring implemented
23. ☐ Technical documentation (Art.11, Annex IV format) complete
24. ☐ Logging and record-keeping (Art.12) implemented — automated log generation
25. ☐ Transparency documentation (Art.13) — instructions for use complete
26. ☐ Human oversight measures (Art.14) designed, tested, and documented
27. ☐ Conformity assessment (Art.43) completed — internal or third-party
28. ☐ EU database registration (Art.49) submitted
29. ☐ Quality management system (Art.17) operational
30. ☐ For Annex I products: Notified Body engaged, dual compliance (AI Act + product safety) assessment underway toward 2027-08-02 deadline
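Progress against the 30-item checklist above can be tracked per milestone group. A minimal sketch — the group names and done-count input format are illustrative, while the totals mirror the items above:

```python
# Illustrative checklist tracker: item counts per group mirror the 30-item
# checklist above (5 baseline, 7 prohibited-practices, 8 GPAI, 10 general).
CHECKLIST_GROUPS = {
    "entry_into_force_baseline": 5,
    "prohibited_practices": 7,
    "gpai": 8,
    "general_application": 10,
}

def readiness(completed: dict[str, int]) -> dict[str, str]:
    """Report done/total per milestone group, capping at the group total."""
    out = {}
    for group, total in CHECKLIST_GROUPS.items():
        done = min(completed.get(group, 0), total)
        out[group] = f"{done}/{total}"
    return out

print(readiness({"entry_into_force_baseline": 5, "prohibited_practices": 6}))
```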
Four months to the general application deadline is not a comfort margin — it is the interval between completion and enforcement. Every obligation under Chapter III of the EU AI Act that has not been completed yet has a countdown timer. Art.103 tells you when that timer expires.
See Also
- EU AI Act Art.104: Exercise of the Delegation — Developer Guide — Art.104 governs the delegated act mechanism that can shift compliance obligations within the Art.103 timeline without amending it; the 3-month scrutiny window is shorter than most Art.103 deadlines
- EU AI Act Art.98: Delegated Acts Powers — Developer Guide — Art.98 lists which articles confer delegated powers; those powers can reshape the compliance obligations that Art.103 application dates govern
- EU AI Act Art.99: Penalties and Fines — Developer Guide — Art.99 enforcement powers activate on the Art.103 application dates; understanding when fines become possible is the operational purpose of Art.103