EU AI Act Art.113: Application Dates — When Does the AI Act Apply? Complete Timeline Developer Guide (2026)
Article 113 answers the first question every EU AI Act reader asks — "when does what apply?" It establishes four layered application dates, each activating different obligations for different AI system categories. This guide decodes the full Art.113 cascade and shows how to build your compliance roadmap from it.
The One Article Every Developer Must Know
If you read only one article in the EU AI Act, read Article 113.
While the regulation officially entered into force on August 1, 2024 (under the entry-into-force clause of Art.113 itself), the application of its obligations follows a staggered four-phase schedule. Art.113 defines that schedule. Every compliance deadline, every "by when do we need this?" question in EU AI Act planning ultimately traces back to Art.113.
The structure is a cascade:
| Application Date | Months After EIF | What Becomes Applicable |
|---|---|---|
| February 2, 2025 | 6 months | Prohibited practices (Art.5) + AI literacy (Art.4) |
| August 2, 2025 | 12 months | GPAI models (Art.51–56) + Governance structures |
| August 2, 2026 | 24 months | Full application — default date for most obligations |
| August 2, 2027 | 36 months | Art.6(1) — Annex I product safety legislation AI |
Understanding this cascade is the foundation of every EU AI Act compliance roadmap.
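The cascade above can be captured as a small lookup table. A minimal sketch (the phase keys are this guide's shorthand labels, not terms from the regulation):

```python
from datetime import date

# Art.113 application dates, keyed by the phase labels used in this guide
ART113_PHASES = {
    "prohibited_practices": date(2025, 2, 2),   # +6 months after entry into force
    "gpai_models":          date(2025, 8, 2),   # +12 months
    "full_application":     date(2026, 8, 2),   # +24 months (default date)
    "annex_i_product_ai":   date(2027, 8, 2),   # +36 months
}

ENTRY_INTO_FORCE = date(2024, 8, 1)

def months_after_eif(phase: str) -> int:
    """Month offset of a phase's application date from entry into force."""
    d = ART113_PHASES[phase]
    return (d.year - ENTRY_INTO_FORCE.year) * 12 + (d.month - ENTRY_INTO_FORCE.month)

print(months_after_eif("full_application"))  # 24
```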
Phase 1: February 2, 2025 — Prohibited Practices and AI Literacy
What Applied on February 2, 2025
Six months after the regulation's entry into force, the first wave of obligations became binding:
Art.5 — Prohibited AI Practices: All eight categories of prohibited AI practices became unlawful on February 2, 2025. These prohibitions apply immediately and without transition — there is no grace period, no Art.111 shelter, no delayed application for legacy systems. If your AI system uses:
- Subliminal manipulation techniques
- Exploitation of vulnerabilities (age, disability, social/economic situation)
- Social scoring (by public or private actors)
- Real-time remote biometric identification in public spaces (with narrow exceptions)
- Emotion recognition in workplace or education contexts
- Biometric categorization by sensitive attributes
- Predictive policing based on profiling
- Untargeted facial recognition scraping
…it was required to be shut down or fundamentally redesigned by February 2, 2025.
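A first-pass internal audit of these eight categories can be reduced to a flag screen. A sketch only; the category keys below are this guide's shorthand, not statutory definitions, and any hit still needs legal review:

```python
# Art.5 prohibited-practice flags — shorthand audit keys, not statutory language.
ART5_CATEGORIES = [
    "subliminal_manipulation",
    "vulnerability_exploitation",
    "social_scoring",
    "realtime_remote_biometric_id_public",
    "emotion_recognition_work_or_education",
    "biometric_categorization_sensitive",
    "predictive_policing_profiling",
    "untargeted_facial_scraping",
]

def screen_art5(system_flags: set[str]) -> list[str]:
    """Return the Art.5 categories a system's feature flags touch."""
    return [c for c in ART5_CATEGORIES if c in system_flags]

hits = screen_art5({"emotion_recognition_work_or_education", "chat_interface"})
print(hits)  # ['emotion_recognition_work_or_education']
```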
Art.4 — AI Literacy: Providers and deployers must ensure their personnel have sufficient AI literacy by February 2, 2025. This is an ongoing obligation — not a one-time certification — covering technical understanding, operational context, and awareness of AI system capabilities and limitations.
Art.112 — Evaluation and Review: The Commission's rolling obligations to evaluate and report on the implementation and impact of the regulation (Art.112) also began running in this early phase.
What Did NOT Apply in Phase 1
Everything else. The high-risk AI requirements (Art.6–49), the GPAI obligations (Art.51–56), the transparency requirements (Art.50), the conformity assessments, the registration database — none of these applied on February 2, 2025.
Common Mistake: Teams that saw the February 2025 date assumed the entire Act was in force. It was not. Only the Art.5 prohibitions and the Art.4 AI literacy requirement applied.
Phase 2: August 2, 2025 — GPAI Models and Governance Infrastructure
What Applied on August 2, 2025
Twelve months after entry into force, the GPAI (General Purpose AI) framework became operational:
Chapter V — GPAI Models (Art.51–56): Providers of general-purpose AI models — including large language models, multimodal models, and foundation models — became subject to binding obligations covering:
- Model evaluation and adversarial testing
- Technical documentation (Art.53)
- Copyright compliance and training data transparency
- Downstream provider information (Art.53(1)(b))
- Systemic risk assessment for models with systemic risk designation (> 10²⁵ FLOPs training compute, Art.51)
- Incident reporting for serious incidents (Art.55(1)(c) for systemic-risk models)
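The Art.51 systemic-risk presumption is a pure compute threshold, so it is straightforward to encode. A sketch, using the statutory 10²⁵ FLOP figure:

```python
SYSTEMIC_RISK_FLOPS = 1e25  # Art.51: cumulative training compute threshold

def presumed_systemic_risk(training_flops: float) -> bool:
    """Art.51 presumption: GPAI models trained with more than 1e25 FLOPs."""
    return training_flops > SYSTEMIC_RISK_FLOPS

print(presumed_systemic_risk(3.8e25))  # True — frontier-scale training run
print(presumed_systemic_risk(2.1e24))  # False — below the presumption threshold
```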
GPAI Code of Practice: The AI Office's GPAI Code of Practice became the primary voluntary compliance instrument for GPAI providers. Adherence to the Code can be relied on to demonstrate compliance with the Chapter V obligations (Art.53(4), Art.55(2)).
Governance Structures (Chapter VII):
- AI Office operational (Art.64)
- European Artificial Intelligence Board (Art.65)
- Scientific panel of independent experts (Art.68)
- National competent authorities designated (Art.70)
Notified Body Regime: Chapter III Section 4 (Art.28–39) became applicable, allowing conformity assessment bodies to be notified ahead of the Art.43 conformity assessment procedures that apply from August 2026.
What Did NOT Apply in Phase 2
High-risk AI obligations (Art.6–49 for Annex I/III systems) remained inapplicable until August 2026. A GPAI model provider subject to Chapter V did not need to meet Art.9–15 requirements (risk management, data governance, technical documentation for high-risk AI, logging, transparency, human oversight, accuracy/robustness) unless their model was itself part of a high-risk AI system — which triggers a different analysis.
Phase 3: August 2, 2026 — Full Application (The Default Date)
The Core Date for Most AI Systems
August 2, 2026 is the date most compliance teams mean when they say "the EU AI Act." This is the default application date under Art.113(3) — all provisions of the regulation that are not covered by earlier or later dates apply from this point.
What Became Fully Applicable:
High-Risk AI Obligations (Art.6–49):
- Art.9: Risk management system requirement
- Art.10: Data and data governance
- Art.11: Technical documentation (Annex IV format)
- Art.12: Logging and record-keeping
- Art.13: Transparency and provision of information
- Art.14: Human oversight measures
- Art.15: Accuracy, robustness, cybersecurity
- Art.16: Provider obligations (QMS, registration, post-market monitoring)
- Art.43: Conformity assessment procedures
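A lightweight gap analysis can track these article-level obligations per system. A sketch; the status values are an assumed internal convention, and the sample statuses are illustrative:

```python
# High-risk obligation tracker — article numbers from the list above;
# status values ("done", "in_progress", "not_started") are an internal convention.
HIGH_RISK_OBLIGATIONS = {
    "Art.9 risk management": "done",
    "Art.10 data governance": "in_progress",
    "Art.11 technical documentation": "in_progress",
    "Art.12 logging": "done",
    "Art.13 transparency": "not_started",
    "Art.14 human oversight": "not_started",
    "Art.15 accuracy/robustness/cybersecurity": "in_progress",
    "Art.43 conformity assessment": "not_started",
}

def open_gaps(tracker: dict[str, str]) -> list[str]:
    """Obligations not yet marked done."""
    return [k for k, v in tracker.items() if v != "done"]

print(len(open_gaps(HIGH_RISK_OBLIGATIONS)))  # 6
```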
High-Risk AI Classification (Annex III): All eight categories in Annex III became fully operative as high-risk classifiers:
- Biometric identification and categorization
- Critical infrastructure management
- Education and vocational training
- Employment, worker management, self-employment
- Access to essential private/public services
- Law enforcement
- Migration, asylum, border control
- Administration of justice and democratic processes
Market Surveillance and Enforcement:
- National competent authorities with market surveillance powers (Chapter IX — Art.74–94)
- Penalty regime fully operative: up to €35M / 7% global turnover for prohibited practices; up to €15M / 3% for other violations; up to €7.5M / 1% for supplying incorrect information (Art.99; in each case, whichever amount is higher)
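The Art.99 caps are expressed as a fixed amount or a share of worldwide turnover, whichever is higher, so the arithmetic is a one-liner. A sketch:

```python
def penalty_cap(fixed_eur: int, pct: int, global_turnover_eur: int) -> int:
    """Art.99 cap: the higher of the fixed amount and pct% of worldwide turnover."""
    return max(fixed_eur, global_turnover_eur * pct // 100)

# Prohibited-practice tier for a company with €2bn global turnover:
print(penalty_cap(35_000_000, 7, 2_000_000_000))  # 140000000 → €140M cap
```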
Transparency Obligations for Certain AI Systems (Art.50): Systems generating synthetic content, emotion recognition, deep fakes, and certain chatbots must meet transparency requirements regardless of high-risk classification.
The Art.111 Interaction
Systems already placed on market or put into service before August 2, 2026 may qualify for the Art.111 transitional provisions: legacy high-risk AI systems come into scope only once they undergo a significant change in design (Art.111(2)), high-risk systems intended for use by public authorities must comply by August 2, 2030, and GPAI models already on the market before August 2, 2025 must comply by August 2, 2027 (Art.111(3)). The critical trigger: substantial modification ends the transitional treatment regardless of when the system was originally deployed.
Phase 4: August 2, 2027 — Annex I Product Safety Legislation
The Sector-Specific Extension
Art.113(4) applies to a specific subset of high-risk AI: systems covered by Art.6(1) — AI systems that are safety components of products, or are products themselves, falling under the Union harmonization legislation listed in Annex I.
Annex I Legislation Categories (selection):
- Machinery Regulation (EU) 2023/1230
- Medical Devices Regulation (MDR) (EU) 2017/745
- In Vitro Diagnostic Medical Devices Regulation (IVDR) (EU) 2017/746
- Radio Equipment Directive 2014/53/EU
- Toy Safety Directive 2009/48/EC
- Personal Protective Equipment Regulation (EU) 2016/425
- Civil aviation legislation (Regulation (EU) 2018/1139, EASA)
- Marine Equipment Directive 2014/90/EU
- Lifts Directive 2014/33/EU
- Pressure Equipment Directive 2014/68/EU
For AI systems integrated into Annex I-regulated products, full AI Act compliance for the product-integrated AI is required from August 2, 2027 — giving these sectors an additional 12 months beyond the default August 2026 date.
Why the Extension? Annex I products already undergo complex conformity assessment under their sectoral legislation. Adding AI Act compliance on top requires coordination between notified bodies, updated standards from CEN/CENELEC, and alignment between the AI Act's requirements and existing CE marking procedures. The 36-month window acknowledges this complexity.
Digital Omnibus Impact on Art.113 Timelines
In 2025, the European Commission proposed the Digital Omnibus package, which includes modifications to the AI Act's application dates. As of this writing, the Digital Omnibus proposal is advancing through the legislative process but has not yet been adopted as final law.
Proposed Changes (Not Yet Final Law):
| System Category | Original Art.113 Date | Digital Omnibus Proposed Extension |
|---|---|---|
| Annex III High-Risk AI | August 2, 2026 | December 31, 2027 |
| Annex I Product Safety AI | August 2, 2027 | August 2, 2028 |
| GPAI Models | August 2, 2025 | Unchanged |
| Prohibited Practices | February 2, 2025 | Unchanged |
Developer Implication: If Digital Omnibus is adopted in its current form, Annex III high-risk AI systems would gain roughly 17 additional months of transition time. However, planning purely on the basis of proposed extensions is a compliance risk — the proposal could be modified or delayed. The prudent approach: plan for the original Art.113 dates and treat any Digital Omnibus extension as a risk buffer.
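The "risk buffer" framing can be quantified directly by comparing the original Art.113 date with the proposed extension. A sketch; the proposed date is, again, not yet law:

```python
from datetime import date

ORIGINAL_ANNEX_III = date(2026, 8, 2)    # Art.113 default date
PROPOSED_ANNEX_III = date(2027, 12, 31)  # Digital Omnibus proposal (NOT YET LAW)

# Buffer you would gain IF the proposal is adopted unchanged
buffer_days = (PROPOSED_ANNEX_III - ORIGINAL_ANNEX_III).days
print(buffer_days)  # 516
```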
Determining Which Date Applies to Your System
Use this decision tree to determine which Art.113 application date governs your AI system:
Step 1 — Is your system a prohibited practice (Art.5)? → YES: February 2, 2025 already passed. You must have ceased operation or redesigned. → NO: Continue.
Step 2 — Is your system a GPAI model (Art.3(63) — general purpose AI model)? → YES: Chapter V obligations applied from August 2, 2025. → NO: Continue.
Step 3 — Is your system high-risk under Art.6(2) and Annex III? → YES: Default application date August 2, 2026. Art.111 may provide transitional relief if the system was placed on market before this date. → NO: Continue.
Step 4 — Is your system high-risk under Art.6(1) — Annex I product/component? → YES: Extended application date August 2, 2027. → NO: Continue.
Step 5 — Does Art.50 transparency apply (chatbot, synthetic content, emotion recognition, deep fake)? → YES: August 2, 2026 application date. → NO: Your system is likely a limited-risk or minimal-risk AI system with no mandatory EU AI Act compliance obligations beyond Art.4 AI literacy (already applied February 2025).
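The five steps above collapse into a single classification function. A sketch in which each boolean flag stands in for the full legal analysis that step actually requires:

```python
from datetime import date
from typing import Optional

def art113_application_date(*, prohibited: bool = False, gpai: bool = False,
                            annex_iii: bool = False, annex_i: bool = False,
                            art50_transparency: bool = False) -> Optional[date]:
    """Walk the Art.113 decision tree; flags stand in for the legal analysis."""
    if prohibited:
        return date(2025, 2, 2)   # Step 1: Art.5 — already applicable
    if gpai:
        return date(2025, 8, 2)   # Step 2: Chapter V
    if annex_iii:
        return date(2026, 8, 2)   # Step 3: default date
    if annex_i:
        return date(2027, 8, 2)   # Step 4: extended date
    if art50_transparency:
        return date(2026, 8, 2)   # Step 5: Art.50 transparency
    return None                   # minimal risk: no date-bound duties beyond Art.4

print(art113_application_date(annex_iii=True))  # 2026-08-02
```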
Application Date Monitoring: What You Should Track
For a development team building EU AI Act compliance infrastructure, these are the Art.113 milestones to track:
Already Passed:
- ✅ February 2, 2025: Art.5 Prohibited Practices — ACTIVE
- ✅ February 2, 2025: Art.4 AI Literacy — ACTIVE
- ✅ August 2, 2025: Chapter V GPAI Models — ACTIVE
- ✅ August 2, 2025: AI Office fully operational — ACTIVE
Approaching (as of 2026-04-14):
- ⏳ August 2, 2026: Full Application — 110 days remaining
- ⏳ August 2, 2027: Annex I Product Safety AI — 475 days remaining
Compliance Implications for Current Development: Any Annex III high-risk AI system currently in development that has not yet been placed on market will need to meet the full Art.6–49 requirements before deployment — there is no grace period for systems first placed on market after August 2, 2026.
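The day counts above are plain date arithmetic, easy to re-derive from the stated reference date:

```python
from datetime import date

TODAY = date(2026, 4, 14)  # reference date used in the milestone list above

print((date(2026, 8, 2) - TODAY).days)  # 110 — full application
print((date(2027, 8, 2) - TODAY).days)  # 475 — Annex I product safety AI
```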
Python Tooling for Art.113 Compliance Scheduling
```python
from datetime import date
from enum import Enum
from typing import Optional


class AISystemCategory(Enum):
    PROHIBITED_PRACTICE = "prohibited_practice"
    GPAI_MODEL = "gpai_model"
    HIGH_RISK_ANNEX_III = "high_risk_annex_iii"
    HIGH_RISK_ANNEX_I = "high_risk_annex_i"
    TRANSPARENCY_ONLY = "transparency_only"
    LIMITED_RISK = "limited_risk"


class Art113ApplicationDateSchedule:
    """Computes EU AI Act application dates per Art.113."""

    # Entry into force (Art.113, first paragraph)
    ENTRY_INTO_FORCE = date(2024, 8, 1)

    # Art.113 application dates
    PHASE_1_DATE = date(2025, 2, 2)  # 6 months: prohibited practices + AI literacy
    PHASE_2_DATE = date(2025, 8, 2)  # 12 months: GPAI + governance
    PHASE_3_DATE = date(2026, 8, 2)  # 24 months: full application (default)
    PHASE_4_DATE = date(2027, 8, 2)  # 36 months: Annex I product safety AI

    # Digital Omnibus proposed extensions (NOT YET LAW)
    DIGITAL_OMNIBUS_ANNEX_III = date(2027, 12, 31)  # proposed: Annex III extension
    DIGITAL_OMNIBUS_ANNEX_I = date(2028, 8, 2)      # proposed: Annex I extension

    # Art.111 transitional deadlines
    ART111_GPAI_DEADLINE = date(2027, 8, 2)              # Art.111(3): legacy GPAI models
    ART111_PUBLIC_AUTHORITY_DEADLINE = date(2030, 8, 2)  # Art.111(2): public-authority use

    def __init__(self, system_category: AISystemCategory,
                 placed_on_market: Optional[date] = None,
                 consider_digital_omnibus: bool = False):
        self.system_category = system_category
        self.placed_on_market = placed_on_market
        self.consider_digital_omnibus = consider_digital_omnibus
        self.today = date.today()

    def get_application_date(self) -> date:
        """Returns the primary Art.113 application date for this system category."""
        if self.system_category == AISystemCategory.PROHIBITED_PRACTICE:
            return self.PHASE_1_DATE
        if self.system_category == AISystemCategory.GPAI_MODEL:
            return self.PHASE_2_DATE
        if self.system_category == AISystemCategory.HIGH_RISK_ANNEX_III:
            if self.consider_digital_omnibus:
                return self.DIGITAL_OMNIBUS_ANNEX_III
            return self.PHASE_3_DATE
        if self.system_category == AISystemCategory.HIGH_RISK_ANNEX_I:
            if self.consider_digital_omnibus:
                return self.DIGITAL_OMNIBUS_ANNEX_I
            return self.PHASE_4_DATE
        return self.PHASE_3_DATE

    def days_remaining(self) -> int:
        """Days until the application date, floored at zero once it has passed."""
        return max(0, (self.get_application_date() - self.today).days)

    def is_applicable_now(self) -> bool:
        """Is the regulation currently applicable to this system?"""
        return self.today >= self.get_application_date()

    def check_art111_eligibility(self) -> dict:
        """
        Check if the Art.111 transitional provisions could apply.
        Returns an eligibility assessment and any compliance deadline.
        """
        if self.placed_on_market is None:
            return {
                "eligible": "UNKNOWN",
                "reason": "Placed-on-market date not provided",
                "grace_period_end": None,
            }
        application_date = self.get_application_date()
        if self.placed_on_market >= application_date:
            return {
                "eligible": False,
                "reason": "System placed on market on or after application date — "
                          "Art.111 transitional relief does not apply",
                "grace_period_end": None,
            }
        if self.system_category == AISystemCategory.GPAI_MODEL:
            # Art.111(3): GPAI models on the market before Aug 2, 2025
            # must be brought into compliance by Aug 2, 2027
            return {
                "eligible": True,
                "reason": f"GPAI model placed on market before {application_date} — "
                          f"Art.111(3) compliance deadline {self.ART111_GPAI_DEADLINE}",
                "grace_period_end": self.ART111_GPAI_DEADLINE,
            }
        if self.system_category in (AISystemCategory.HIGH_RISK_ANNEX_III,
                                    AISystemCategory.HIGH_RISK_ANNEX_I):
            # Art.111(2): legacy high-risk systems stay out of scope until a
            # significant design change; public-authority systems by Aug 2, 2030
            return {
                "eligible": True,
                "reason": f"High-risk system placed on market before {application_date} — "
                          f"out of scope under Art.111(2) until significantly modified "
                          f"(public-authority use: comply by {self.ART111_PUBLIC_AUTHORITY_DEADLINE})",
                "grace_period_end": None,
                "warning": "A SIGNIFICANT DESIGN CHANGE brings the system fully "
                           "into scope regardless of this analysis",
            }
        return {
            "eligible": False,
            "reason": f"Category {self.system_category.value} has no Art.111 transitional treatment",
            "grace_period_end": None,
        }

    def generate_compliance_roadmap(self) -> list:
        """Generate ordered compliance milestones."""
        def status(phase_date: date) -> str:
            if self.today > phase_date:
                return "PASSED"
            return f"{(phase_date - self.today).days} days remaining"

        return [
            {"date": self.PHASE_1_DATE,
             "label": "Phase 1: Prohibited Practices + AI Literacy",
             "status": status(self.PHASE_1_DATE),
             "applies_to": "All AI systems — Art.5 prohibitions + Art.4 AI literacy"},
            {"date": self.PHASE_2_DATE,
             "label": "Phase 2: GPAI Models + Governance",
             "status": status(self.PHASE_2_DATE),
             "applies_to": "GPAI model providers — Chapter V (Art.51–56)"},
            {"date": self.PHASE_3_DATE,
             "label": "Phase 3: Full Application (Default)",
             "status": status(self.PHASE_3_DATE),
             "applies_to": "Annex III high-risk AI + transparency AI (Art.50) — most developers"},
            {"date": self.PHASE_4_DATE,
             "label": "Phase 4: Annex I Product Safety AI",
             "status": status(self.PHASE_4_DATE),
             "applies_to": "Art.6(1) AI in Annex I regulated products (MDR, Machinery, etc.)"},
        ]


# Usage examples
def assess_system_compliance_status(
        system_name: str,
        category: AISystemCategory,
        placed_on_market: Optional[date] = None) -> None:
    schedule = Art113ApplicationDateSchedule(category, placed_on_market)
    app_date = schedule.get_application_date()
    days_left = schedule.days_remaining()
    applicable = schedule.is_applicable_now()
    print(f"\n=== {system_name} ===")
    print(f"Category: {category.value}")
    print(f"Application Date: {app_date}")
    print(f"Status: {'APPLICABLE NOW' if applicable else f'{days_left} days remaining'}")
    art111 = schedule.check_art111_eligibility()
    print(f"Art.111 Eligibility: {art111['eligible']} — {art111['reason']}")
    print("\nCompliance Roadmap:")
    for milestone in schedule.generate_compliance_roadmap():
        print(f"  {milestone['date']}: {milestone['label']} [{milestone['status']}]")


# Example: high-risk AI in recruitment (Annex III point 4)
assess_system_compliance_status(
    "Resume Screening AI",
    AISystemCategory.HIGH_RISK_ANNEX_III,
    placed_on_market=date(2025, 6, 1),
)

# Example: large language model provider
assess_system_compliance_status(
    "Foundation LLM",
    AISystemCategory.GPAI_MODEL,
)

# Example: medical device AI
assess_system_compliance_status(
    "Diagnostic Imaging AI (MDR Class IIb)",
    AISystemCategory.HIGH_RISK_ANNEX_I,
    placed_on_market=date(2024, 3, 15),
)
```
Art.113 in the Context of the Full EU AI Act Structure
Art.113 does not stand alone — it references provisions across the regulation. Understanding the cross-references helps compliance teams identify which obligations cluster around each application date:
February 2025 cluster (Art.113(1)):
- Art.4 (AI literacy)
- Art.5 (prohibited practices)
- Art.84 (evaluation report timeline)
August 2025 cluster (Art.113(2)):
- Chapter III Section 4 (Notified bodies — Art.28–39)
- Chapter V (GPAI models — Art.51–56)
- Chapter VII (Governance — AI Office, AI Board, national authorities — Art.64–70)
- Chapter XII (Penalties — Art.99–100), except Art.101 (fines for GPAI providers)
- Art.78 (confidentiality)
August 2026 cluster (Art.113(3) — default): Everything not covered by earlier or later specific dates, including:
- Chapter III Sections 1–3 and 5 (high-risk classification, requirements, obligations, and conformity assessment — Art.6–27 and Art.40–49)
- Chapter IV (Transparency for certain AI systems — Art.50)
- Chapter VIII (EU database — Art.71)
- Chapter IX (Post-market monitoring, information sharing, and market surveillance — Art.72–94)
- Chapter X (Codes of conduct and guidelines — Art.95–96)
- Art.101 (fines for providers of GPAI models)
August 2027 cluster (Art.113(4)):
- Art.6(1) high-risk AI as defined with reference to Annex I
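For tooling, the clusters can be flattened into a provision-to-date lookup. A sketch using this guide's groupings; verify any individual provision against the regulation text before relying on it:

```python
from datetime import date

# Provision → Art.113 application date, per the clusters above (guide's grouping)
CLUSTER_DATES = {
    "Art.4": date(2025, 2, 2),             # AI literacy
    "Art.5": date(2025, 2, 2),             # prohibited practices
    "Chapter V (GPAI)": date(2025, 8, 2),  # GPAI models
    "Art.6(1)": date(2027, 8, 2),          # Annex I product safety AI
}

def application_date(provision: str) -> date:
    """Default date applies to anything not in an earlier or later cluster."""
    return CLUSTER_DATES.get(provision, date(2026, 8, 2))

print(application_date("Art.6(1)"))  # 2027-08-02
```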
30-Item Art.113 Application Date Readiness Checklist
Application Date Classification (6 items)
- 1. Identified whether any component of your AI system falls under Art.5 prohibited practices (Phase 1 — already active)
- 2. Determined whether your system qualifies as a GPAI model under Art.3(63) (Phase 2 — already active)
- 3. Identified Annex III categories applicable to your system (Phase 3 — August 2026)
- 4. Identified Annex I legislation applicable to your system (Phase 4 — August 2027)
- 5. Applied the Art.113 decision tree to determine primary application date for each system component
- 6. Documented Digital Omnibus proposal status and whether proposed extensions affect your planning assumptions
Phase 1 Compliance (February 2025) (5 items)
- 7. Verified no system components use prohibited subliminal manipulation techniques
- 8. Confirmed no real-time remote biometric identification in public spaces (or lawful exception documented)
- 9. Implemented Art.4 AI literacy training for all personnel deploying or operating AI systems
- 10. Confirmed no social scoring system operated for public authorities
- 11. Documented emotion recognition and biometric categorization audit results
Phase 2 Compliance (August 2025) (5 items)
- 12. Determined GPAI applicability — does system constitute a general-purpose AI model under Art.3(63)?
- 13. If GPAI: Technical documentation per Art.53 prepared and maintained
- 14. If GPAI with systemic risk: Adversarial testing protocols established
- 15. If GPAI: Downstream provider information obligations met (Art.53(1)(b))
- 16. If GPAI with systemic risk: Commission notification under Art.52 completed once the Art.51 threshold is met
Phase 3 Readiness (August 2026) (8 items)
- 17. Annex III classification analysis completed and documented
- 18. If Annex III high-risk: Risk management system (Art.9) designed and implemented
- 19. If Annex III high-risk: Data governance procedures (Art.10) documented
- 20. If Annex III high-risk: Technical documentation (Art.11, Annex IV format) complete
- 21. If Annex III high-risk: Logging and audit trail (Art.12) implemented
- 22. If Annex III high-risk: Conformity assessment pathway (Art.43) identified and initiated
- 23. If Annex III high-risk: EU database registration (Art.71) completed or scheduled
- 24. Art.50 transparency obligations assessed for chatbots, synthetic content, emotion recognition
Art.111 Transitional Analysis (4 items)
- 25. Identified systems placed on market before August 2026 that may qualify for Art.111 transitional relief
- 26. Documented placed-on-market dates for all potentially eligible legacy systems
- 27. Established significant-modification monitoring protocol to protect transitional status
- 28. Mapped Art.111 compliance deadlines (legacy GPAI models → August 2027; high-risk AI used by public authorities → August 2030)
Phase 4 Readiness (August 2027) (2 items)
- 29. Identified all AI components integrated into Annex I-regulated products and their Art.113(4) timeline
- 30. Initiated coordination between AI Act conformity assessment and sectoral (MDR/Machinery) notified body procedures
Key Takeaways
- Art.113 establishes four dates, not one. The "EU AI Act deadline" is a simplification. Prohibited practices applied February 2025, GPAI August 2025, full high-risk AI August 2026, and Annex I product safety AI August 2027.
- August 2, 2026 is the default deadline for most developers. If your AI system is a high-risk Annex III application — recruitment tool, biometric system, critical infrastructure AI, law enforcement decision support — this is your primary compliance date.
- Phase 1 (February 2025) is already active and non-negotiable. There is no transitional provision for prohibited practices. These applied from the start.
- GPAI providers have been in scope since August 2025. If you provide a large foundation model, or become its provider through substantial fine-tuning, GPAI obligations are already binding.
- Digital Omnibus may extend Annex III dates to December 2027 — but only as a proposal so far. Plan for August 2026 and treat any extension as contingency buffer.
- Art.111 transitional relief depends on the placed-on-market date relative to the Art.113 dates. The interaction between Art.113 and Art.111 determines whether your existing system needs immediate compliance or benefits from transitional treatment.
Related: EU AI Act Art.5 — Prohibited AI Practices | EU AI Act Art.111 — Transitional Provisions | EU AI Act Art.112 — Evaluation and Review