EU AI Act Art.108: Transitional Provisions — Legacy AI Systems Compliance Timeline Developer Guide (2026)
Not every AI system in production today was built with the EU AI Act in mind. Many high-risk AI systems — diagnostic algorithms, credit scoring models, recruitment tools, critical infrastructure monitoring systems — were developed, validated, and deployed before the EU AI Act entered into force in August 2024. These systems face a fundamental question: do they need to comply with the AI Act immediately when its provisions become applicable, or do they get additional time?
Article 108 answers that question. It establishes transitional provisions that allow high-risk AI systems already lawfully placed on the market or put into service before the AI Act's application dates to continue operating without full AI Act compliance for a defined grace period. But Art.108 comes with a critical condition that every developer, operator, and compliance team must understand: if a system undergoes a substantial modification during the grace period, the transitional protection ends immediately and full AI Act compliance becomes mandatory.
The practical stakes: For organisations with legacy AI systems in high-risk categories, Art.108 can mean the difference between a compliance deadline of 2026 and 2028. But misjudging what constitutes a substantial modification — or failing to document the system's pre-application-date status — can eliminate that two-year window without warning.
What Art.108 Covers
Article 108 provides transitional provisions for three distinct categories of AI systems:
Category 1 — High-risk AI systems not embedded in Annex I (harmonization legislation) products: Systems that fall under Annex III of the AI Act (specific high-risk application areas) but are not embedded in or used as part of products covered by EU harmonization legislation. These include AI systems used in:
- Education and vocational training assessment
- Employment and worker management
- Access to and enjoyment of essential services
- Law enforcement applications
- Migration, asylum, and border control
- Administration of justice
- Democratic processes
For these systems, Art.108 provides that systems already placed on the market or put into service before 2 August 2026 (the main application date for high-risk AI) have until 2 August 2028 to come into full AI Act compliance — a two-year grace period from the main application date.
Category 2 — High-risk AI systems embedded in Annex I products: AI systems that are safety components of products covered by EU harmonization legislation (MDR, Machinery Regulation, RED, Vehicle Type-Approval, etc.). These were already subject to sector-specific compliance frameworks before the AI Act. Their transitional timeline is longer, reflecting the complexity of adapting existing conformity assessment frameworks. Art.108, in combination with Art.107 and the Digital Omnibus adjustments, provides extended timelines for these systems.
Category 3 — General-purpose AI models (GPAI): Art.108 also addresses GPAI models already released before 2 August 2025 (the GPAI application date). Providers of such models had until 2 August 2026 to bring their models into compliance with the GPAI obligations under Art.53 and Art.55. This transitional provision is particularly relevant for providers of foundation models and large language models that were already available when the regulation took effect.
The Timeline Architecture
Understanding Art.108 requires mapping multiple overlapping timelines:
EU AI Act Application Dates (as originally enacted):
- 2 February 2025: Prohibited practices (Art.5) — no transitional provision, immediate application
- 2 August 2025: GPAI model obligations (Chapter V), governance, and penalties for GPAI
- 2 August 2026: Main application date for high-risk AI systems (Annex III), providers/deployers' obligations, market surveillance
- 2 August 2027: Annex I products with existing conformity assessment obligations
- 2 August 2030: Annex I products covered by EU legislation listed in Annex I, point 4 (certain machinery)
Art.108 Grace Periods (from these dates):
- Annex III high-risk AI systems placed on market before 2 August 2026: grace until 2 August 2028
- Annex I product high-risk AI placed on market before applicable Annex I date: grace until the relevant Annex I date + 2 years
- GPAI models released before 2 August 2025: grace until 2 August 2026
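The grace-period arithmetic above reduces to "application date plus N years, provided the system was placed on market before the application date." A minimal sketch, using the dates listed above as assumed values (re-verify against the consolidated text after the Digital Omnibus adjustments; the category keys and function names are illustrative):

```python
from datetime import date

# Application dates as listed above (assumed values — re-verify against
# the current consolidated text after Digital Omnibus adjustments).
APPLICATION_DATES = {
    "annex_iii": date(2026, 8, 2),
    "annex_i": date(2027, 8, 2),
    "gpai": date(2025, 8, 2),
}


def grace_period_end(category: str, years: int = 2) -> date:
    """Grace period end = application date + N years (2 for Annex III/I, 1 for GPAI)."""
    app = APPLICATION_DATES[category]
    return app.replace(year=app.year + years)


def benefits_from_grace(category: str, placed_on_market: date) -> bool:
    """A system benefits from Art.108 only if placed on market BEFORE the application date."""
    return placed_on_market < APPLICATION_DATES[category]
```

Note the strict inequality: a system placed on market on the application date itself does not benefit from the transitional provision.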
Digital Omnibus Interaction: The EU AI Act Digital Omnibus package (covering the Digital Omnibus Act and related instruments) adjusted some of these dates. Art.108 should now be read alongside the extended Annex III application dates. Some Annex III categories that were due to apply in August 2026 have been extended to December 2027. The grace period logic in Art.108 still applies relative to these extended dates — a system placed on market before the extended applicable date benefits from the corresponding grace period. See our coverage of the Digital Omnibus deadline extension for current applicable dates.
The Substantial Modification Rule
The most operationally critical element of Art.108 is the substantial modification exception. This is where the grace period can be lost, often unexpectedly.
The rule: A high-risk AI system that undergoes a substantial modification after the AI Act's application date — even if it was previously covered by a grace period — is treated as a new system from that point forward. It loses its transitional protection and must immediately comply with all applicable AI Act requirements.
Why this matters: Development teams maintaining legacy AI systems may inadvertently trigger the substantial modification threshold through normal improvement cycles. A model retrained on expanded data, an algorithm redesigned to address fairness concerns, a system expanded to cover new use cases — any of these could qualify as a substantial modification, eliminating years of transitional runway without a conscious compliance decision.
What Qualifies as Substantial Modification
The EU AI Act defines substantial modification in Art.3(23): a change to an AI system after its placing on the market or putting into service which is not foreseen or planned in the provider's initial conformity assessment, and which either affects the system's compliance with the requirements of Chapter III, Section 2 (the core technical requirements, Art.9-15) or results in a modification of the intended purpose for which the AI system has been assessed.
Changes that typically constitute substantial modification:
- Changes to intended purpose: Any expansion or change to the categories of users the system serves, the domains in which it operates, or the decisions it is designed to support or make. A credit scoring model expanded from consumer lending to mortgage approval has changed intended purpose. A recruitment screening tool repurposed for performance management has changed intended purpose.
- Architecture changes that affect safety or performance characteristics: Changes to the underlying model architecture — switching from a logistic regression to a neural network, from a transformer to an ensemble, from rule-based logic to learned representations — that materially alter how the system generates outputs. Not every architectural change qualifies; the test is whether the change affects the system's compliance profile under the Art.9-15 requirements.
- Training data changes that affect output characteristics: Retraining on significantly expanded datasets, retraining on data from new demographic groups not previously represented, or retraining that materially changes the system's output distribution. The focus is on changes that affect the risk profile or bias characteristics of the system.
- New high-risk functionality: Adding capabilities that were not present in the originally assessed system, particularly if those capabilities create new risk vectors. Adding natural language explanation of decisions to a previously unexplainable system, or adding autonomous action capability to a previously advisory system.
- Integration with new data sources or systems: Connecting the AI system to new data pipelines that affect its inputs in ways that change its risk profile or accuracy characteristics.
Changes that typically do not constitute substantial modification:
- Performance tuning without architectural change: Hyperparameter adjustments, quantization for inference efficiency, pruning for latency reduction — changes that do not materially alter the system's decision-making logic or output characteristics.
- Security and vulnerability patches: Bug fixes, security patches, and corrections of technical vulnerabilities in the system infrastructure — provided these do not change the AI model's reasoning or outputs.
- Incremental dataset additions: Adding new training examples from the same distribution as the original training data, without expanding the intended population or changing the output distribution in ways that affect accuracy, fairness, or reliability characteristics.
- UI/UX changes: Changes to how outputs are presented to users, without changing the underlying AI system's outputs. Adding a confidence score display, changing the presentation of recommendations, or modifying the interface through which users interact with AI-generated results.
- Infrastructure migrations: Moving the system from one cloud provider to another, from on-premises to cloud, or between hosting environments — provided the AI system itself is unchanged.
The Substantial Modification Assessment
Because the line between ordinary updates and substantial modification can be contested, the only safe way to rely on Art.108 is for providers and deployers to assess each proposed change against the threshold before implementing it. That assessment needs to be documented.
The assessment framework:
Step 1 — Intended purpose check: Does the proposed change alter the intended purpose for which the system was assessed? Use the Art.3(12) definition: intended purpose includes the specific objectives the system is designed to achieve, the persons or classes of persons it is designed for, and the contexts or domains in which it is to be used. If any of these change, substantial modification is likely.
Step 2 — Art.9 risk management impact: Does the change introduce new risks or alter the risk profile documented in the Art.9 risk management system? If the change requires the risk management documentation to be materially updated — not just updated to reflect a new minor risk — this indicates substantial modification.
Step 3 — Art.10 training data impact: Does the change affect the data governance documentation in a way that indicates a new system, rather than a continuation of the existing one? Changes to the intended population, the data sources, or the distribution of training data that materially alter the system's characteristics are markers of substantial modification.
Step 4 — Art.13 transparency impact: Does the change alter what users need to know about the system's capabilities, limitations, or decision-making logic in material ways? If the transparency documentation needs to be rewritten rather than amended, this suggests substantial modification.
Step 5 — Art.14 human oversight impact: Does the change affect the mechanisms through which humans can understand, monitor, or override the system's outputs? Removing or reducing human oversight capabilities is likely a substantial modification.
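The five steps above can be collapsed into a simple boolean screen for triage purposes. This is an illustrative sketch (the field names, function name, and triage thresholds are my own, not from the regulation); a "substantial" or "likely substantial" outcome should always go to formal legal assessment:

```python
from dataclasses import dataclass


@dataclass
class ChangeScreen:
    """One flag per step of the assessment framework above (illustrative names)."""
    alters_intended_purpose: bool   # Step 1 — any Art.3(12) element changes
    alters_risk_profile: bool       # Step 2 — Art.9 documentation materially updated
    alters_data_governance: bool    # Step 3 — Art.10 population/sources/distribution
    alters_transparency: bool       # Step 4 — Art.13 docs rewritten, not amended
    alters_human_oversight: bool    # Step 5 — Art.14 oversight reduced or removed


def screen_change(c: ChangeScreen) -> str:
    """Map the five checks to a triage outcome (a sketch, not legal advice)."""
    if c.alters_intended_purpose:
        return "SUBSTANTIAL"  # Step 1 alone is decisive per Art.3(23)
    hits = sum([c.alters_risk_profile, c.alters_data_governance,
                c.alters_transparency, c.alters_human_oversight])
    if hits >= 2:
        return "LIKELY SUBSTANTIAL"
    if hits == 1:
        return "ASSESS FORMALLY"
    return "NOT SUBSTANTIAL"
```

The design choice here mirrors the framework's asymmetry: an intended-purpose change short-circuits everything, while the other four steps accumulate evidence rather than deciding alone.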
Documentation Strategy During the Grace Period
Even though Art.108 provides a grace period from full AI Act compliance, this does not mean legacy systems require no documentation during the transitional period. Several documentation obligations apply during the grace period, and the foundation built during this period determines how efficiently the system can achieve full compliance when the grace period ends.
Documentation to establish transitional status:
- Placement on market / putting into service date: Clear evidence that the system was placed on the market or put into service before the applicable application date. This is the foundation of the grace period claim. Evidence may include contracts, delivery documentation, commercial invoices, implementation records, or regulatory filings (e.g., MDR certification dates for Annex I systems).
- Intended purpose documentation as of pre-application date: A documented description of the system's intended purpose as it existed before the application date. This serves as the baseline against which any future changes are assessed for substantial modification.
- Modification log: A running record of all changes made to the system during the grace period, with a brief assessment of whether each change constitutes substantial modification. This log is not required by Art.108 explicitly, but is critical for defending transitional status if it is challenged by market surveillance authorities.
- Change management protocol: Internal procedures that route proposed changes through a substantial modification assessment before implementation. Without this, development teams may inadvertently implement changes that trigger the substantial modification threshold without the compliance team being aware.
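The modification log described above can be as simple as an append-only record, one entry per deployed change. A minimal sketch, with assumed field names (nothing here is mandated by the Act; the point is that each entry captures the assessment outcome and its rationale at the time of the change):

```python
import json
from dataclasses import asdict, dataclass


@dataclass
class ModificationLogEntry:
    """One entry in the grace-period modification log (field names are illustrative)."""
    change_id: str
    change_date: str        # ISO date the change was deployed
    description: str
    assessment_outcome: str  # e.g. "not substantial" / "substantial" / "escalated"
    assessed_by: str
    rationale: str


def append_entry(log_path: str, entry: ModificationLogEntry) -> None:
    """Append one JSON line per change — append-only keeps the audit trail honest."""
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(entry)) + "\n")
```

JSON Lines is chosen here because entries are never rewritten, which is exactly the property a defensible audit trail needs.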
Documentation to prepare for end of grace period:
- Gap analysis against Art.9-15: A systematic comparison of the system's current technical characteristics, documentation, and governance practices against the Art.9-15 requirements. This analysis identifies what work needs to be done before the grace period ends.
- Conformity assessment pathway identification: Determining whether the system will require third-party conformity assessment (Art.43(2)) or can proceed via internal review. This depends on whether the system falls under categories requiring mandatory third-party assessment.
- Technical documentation pre-draft: Beginning to structure the Art.11 technical documentation and Art.72 post-market monitoring plan before the grace period ends, using the grace period for preparation rather than starting from scratch when the deadline arrives.
GPAI Model Transitional Provisions
General-purpose AI models have a distinct transitional framework under Art.108 that reflects the different nature of GPAI obligations compared to high-risk AI requirements.
The GPAI transitional window: Providers of GPAI models that were already available before 2 August 2025 (when GPAI obligations became applicable) had a 12-month grace period — until 2 August 2026 — to bring their models into compliance with:
- Art.53 obligations (for all GPAI model providers): technical documentation, transparency toward downstream providers, copyright policy compliance
- Art.55 obligations (for systemic risk models): adversarial testing, incident reporting, cybersecurity measures, serious incident reporting
What the GPAI grace period required: During the grace period, GPAI model providers needed to:
- Assess whether their models qualified as GPAI models under Art.3(63)
- Assess whether their models qualified as GPAI models with systemic risk under Art.51 (≥10^25 FLOPs training compute threshold, or Commission designation)
- Begin preparing the Art.53 technical documentation and transparency measures
- For systemic risk models: begin implementing the Art.55 safety evaluation and adversarial testing framework
Post-2 August 2026: All GPAI model providers — including those who previously benefited from the transitional window — are now fully subject to GPAI obligations. The transitional window has closed.
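The Art.51 compute threshold lends itself to a trivial check. The 10^25 FLOP figure is taken from the text above; the function name and the designation flag are illustrative, and a real classification should of course consider the Commission's qualitative criteria as well:

```python
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25  # Art.51(1)(a): cumulative training compute


def gpai_classification(training_flops: float, commission_designated: bool = False) -> str:
    """Classify a GPAI model per the thresholds described above (a sketch).

    A model is systemic-risk if it crosses the compute threshold OR has been
    designated by the Commission, whichever applies first.
    """
    if commission_designated or training_flops >= SYSTEMIC_RISK_FLOP_THRESHOLD:
        return "gpai_systemic_risk"  # Art.55 obligations apply on top of Art.53
    return "gpai_standard"           # Art.53 obligations only
```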
Annex I Systems — Extended Transitional Framework
For high-risk AI systems embedded in products covered by EU harmonization legislation (Annex I to the AI Act), the transitional framework is more complex because it must interoperate with the conformity assessment frameworks of those sector-specific regulations.
The core challenge: Annex I products (medical devices, machinery, radio equipment, vehicles) already have conformity assessment frameworks that take years to complete. A medical device that received MDR certification in 2023 cannot simply be expected to undergo a completely separate AI Act conformity assessment by 2026 — the notified body capacity, the documentation standards, and the conformity assessment procedures needed time to mature.
Art.108 response: For Annex I systems, the transitional framework provides:
- Systems already certified under their sector-specific regulation before the relevant AI Act application date can continue to operate under their existing certification framework during the grace period
- The grace period for Annex I systems extends to 2027-2029 depending on the specific Annex I regulation
- Substantial modification to the AI component triggers immediate AI Act compliance obligations, just as for Annex III systems
Practical significance: Medical device manufacturers with AI-based devices that hold MDR certification need to plan their AI Act compliance to align with their next scheduled conformity assessment review, modification of the device, or the end of the Annex I transitional period — whichever comes first. The AI Act conformity assessment does not need to precede their MDR review; it needs to be complete before the applicable transitional deadline.
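The "whichever comes first" planning logic above is just a minimum over candidate dates. A small sketch with assumed inputs (the parameter names are mine; the transitional deadline is whatever applies to the specific Annex I regulation):

```python
from datetime import date
from typing import Optional


def earliest_compliance_trigger(
    next_conformity_review: Optional[date],
    planned_modification: Optional[date],
    transitional_deadline: date,
) -> date:
    """Earliest of: next scheduled review, planned device modification, Annex I deadline.

    The transitional deadline is always a candidate; the other two apply only
    if scheduled. AI Act compliance work must be complete by the returned date.
    """
    candidates = [d for d in (next_conformity_review, planned_modification) if d is not None]
    candidates.append(transitional_deadline)
    return min(candidates)
```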
The Art.108 + Art.6(1) Interaction
For systems that fall under the Art.6(1) high-risk pathway (AI systems that are safety components of Annex I products, or are Annex I products themselves, and require third-party conformity assessment), there is an important interaction with Art.108 worth understanding explicitly.
Under Art.6(1), a system is high-risk because it undergoes third-party conformity assessment under Annex I legislation. The transitional provision allows such systems to continue operating under their Annex I conformity assessment alone during the grace period — without separately satisfying AI Act requirements — provided they were placed on market before the applicable date.
When the grace period ends (or when substantial modification occurs), these systems face a dual compliance challenge: they must satisfy both the updated Annex I sector regulation requirements AND the AI Act Art.8-15 requirements. The practical path to managing this dual obligation is the integrated conformity assessment approach described under Art.107 — where a single notified body performs a combined assessment against both frameworks, producing a single technical documentation package that satisfies both regulators.
Python Tooling — Art108TransitionalTracker
```python
from dataclasses import dataclass
from datetime import date
from enum import Enum
from typing import Optional


class AISystemCategory(Enum):
    ANNEX_III_HIGH_RISK = "annex_iii"
    ANNEX_I_PRODUCT = "annex_i"
    GPAI_STANDARD = "gpai_standard"
    GPAI_SYSTEMIC_RISK = "gpai_systemic_risk"


@dataclass
class Art108TransitionalStatus:
    """Transitional status assessment for a legacy AI system under Art.108."""

    system_name: str
    category: AISystemCategory
    placed_on_market_date: date
    has_grace_period: bool
    grace_period_end: Optional[date]
    grace_period_days_remaining: Optional[int]
    grace_period_valid: bool
    substantial_modification_risk: str  # LOW / MEDIUM / HIGH / N/A
    compliance_actions: list[str]


def assess_art108_status(
    system_name: str,
    category: AISystemCategory,
    placed_on_market_date: date,
    last_modification_date: Optional[date] = None,
    modification_description: Optional[str] = None,
    today: Optional[date] = None,
) -> Art108TransitionalStatus:
    """Assess Art.108 transitional status for a legacy AI system.

    Args:
        system_name: Name/identifier of the AI system.
        category: AI system category under the EU AI Act.
        placed_on_market_date: Date the system was placed on market or put into service.
        last_modification_date: Date of the most recent significant modification (if any).
        modification_description: Description of the modification (used for keyword screening).
        today: Reference date (defaults to the current date).

    Returns:
        Art108TransitionalStatus with grace period and compliance assessment.
    """
    if today is None:
        today = date.today()

    # Application dates by category
    application_dates = {
        AISystemCategory.ANNEX_III_HIGH_RISK: date(2026, 8, 2),
        AISystemCategory.ANNEX_I_PRODUCT: date(2027, 8, 2),
        AISystemCategory.GPAI_STANDARD: date(2025, 8, 2),
        AISystemCategory.GPAI_SYSTEMIC_RISK: date(2025, 8, 2),
    }
    # Grace period end dates (two years from the application date for most categories)
    grace_period_ends = {
        AISystemCategory.ANNEX_III_HIGH_RISK: date(2028, 8, 2),
        AISystemCategory.ANNEX_I_PRODUCT: date(2029, 8, 2),
        AISystemCategory.GPAI_STANDARD: date(2026, 8, 2),
        AISystemCategory.GPAI_SYSTEMIC_RISK: date(2026, 8, 2),
    }
    application_date = application_dates[category]
    grace_end = grace_period_ends[category]

    # A system benefits from Art.108 only if placed on market before the application date
    has_grace_period = placed_on_market_date < application_date
    if not has_grace_period:
        # Placed on market after the application date — no transitional protection
        return Art108TransitionalStatus(
            system_name=system_name,
            category=category,
            placed_on_market_date=placed_on_market_date,
            has_grace_period=False,
            grace_period_end=None,
            grace_period_days_remaining=None,
            grace_period_valid=False,
            substantial_modification_risk="N/A",
            compliance_actions=[
                "IMMEDIATE: System placed on market after application date",
                "Full AI Act compliance required from day of placement",
                "Begin Art.9-15 compliance immediately",
                "Identify conformity assessment pathway (Art.43)",
                "Prepare technical documentation (Art.11 + Annex IV)",
            ],
        )

    # Keyword-based screen for substantial modification risk (a heuristic, not a legal test)
    modification_risk = "LOW"
    grace_valid = today < grace_end
    compliance_actions: list[str] = []

    if last_modification_date and last_modification_date > application_date:
        if modification_description:
            desc_lower = modification_description.lower()
            high_risk_keywords = [
                "intended purpose", "new use case", "new user group", "new domain",
                "architecture change", "model replacement", "retrained", "new data source",
                "expanded scope", "new functionality", "capability added",
            ]
            medium_risk_keywords = [
                "training data", "dataset update", "fine-tuning", "performance improvement",
                "accuracy improvement", "output change", "new feature",
            ]
            if any(kw in desc_lower for kw in high_risk_keywords):
                modification_risk = "HIGH"
            elif any(kw in desc_lower for kw in medium_risk_keywords):
                modification_risk = "MEDIUM"

        if modification_risk == "HIGH":
            grace_valid = False
            compliance_actions.extend([
                "CRITICAL: Modification may constitute substantial modification under Art.3(23)",
                "Immediate substantial modification assessment required",
                "If confirmed substantial: grace period eliminated, full compliance required now",
                "Document modification decision with legal and compliance review",
            ])
        elif modification_risk == "MEDIUM":
            compliance_actions.extend([
                "WARNING: Modification requires substantial modification assessment",
                "Document assessment outcome before further modifications",
                "Consider Art.3(23) analysis with AI Act counsel",
            ])

    # Calculate days remaining in the grace period
    days_remaining = None
    if grace_valid:
        days_remaining = (grace_end - today).days
        if days_remaining < 180:
            compliance_actions.extend([
                f"URGENT: Grace period ends in {days_remaining} days ({grace_end})",
                "Begin Art.9 risk management system now",
                "Begin Art.11 technical documentation now",
                "Identify Art.43 conformity assessment pathway",
                "Engage notified body if third-party assessment required",
            ])
        elif days_remaining < 365:
            compliance_actions.extend([
                f"ATTENTION: {days_remaining} days until grace period ends ({grace_end})",
                "Begin gap analysis against Art.9-15 requirements",
                "Begin technical documentation preparation",
                "Plan conformity assessment process",
            ])
        else:
            compliance_actions.extend([
                f"Grace period: {days_remaining} days remaining (until {grace_end})",
                "Document baseline intended purpose and system characteristics",
                "Implement modification change management protocol",
                "Begin long-term compliance roadmap planning",
            ])
    else:
        compliance_actions.insert(
            0, "CRITICAL: Grace period has ended or is invalid — full compliance required"
        )
        compliance_actions.extend([
            "Initiate Art.9 risk management system immediately",
            "Begin technical documentation (Art.11 + Annex IV)",
            "Identify conformity assessment pathway",
            "Register in EU database (Art.71) if applicable",
        ])

    return Art108TransitionalStatus(
        system_name=system_name,
        category=category,
        placed_on_market_date=placed_on_market_date,
        has_grace_period=True,
        grace_period_end=grace_end,
        grace_period_days_remaining=days_remaining,
        grace_period_valid=grace_valid,
        substantial_modification_risk=modification_risk,
        compliance_actions=compliance_actions,
    )


def check_substantial_modification(
    change_description: str,
    changes_intended_purpose: bool,
    changes_risk_profile: bool,
    changes_training_data_distribution: bool,
    adds_new_capabilities: bool,
    changes_output_characteristics: bool,
) -> dict:
    """Assess whether a proposed change constitutes a substantial modification under Art.3(23).

    Returns an assessment dict with a determination and supporting rationale.
    """
    risk_factors = []
    if changes_intended_purpose:
        risk_factors.append(
            ("DEFINITIVE", "Change to intended purpose — Art.3(12) — automatic substantial modification")
        )
    if changes_risk_profile:
        risk_factors.append(
            ("HIGH", "Material change to risk profile documented in Art.9 risk management system")
        )
    if changes_training_data_distribution:
        risk_factors.append(
            ("HIGH", "Training data distribution change — may affect accuracy, fairness, and Art.10 compliance")
        )
    if adds_new_capabilities:
        risk_factors.append(
            ("HIGH", "New capabilities added — assess whether they introduce new risk vectors")
        )
    if changes_output_characteristics:
        risk_factors.append(
            ("MEDIUM", "Output characteristics change — assess materiality of change")
        )

    definitive_factors = [f for f in risk_factors if f[0] == "DEFINITIVE"]
    high_factors = [f for f in risk_factors if f[0] == "HIGH"]
    medium_factors = [f for f in risk_factors if f[0] == "MEDIUM"]

    if definitive_factors:
        determination = "SUBSTANTIAL MODIFICATION — CONFIRMED"
        recommendation = "Full AI Act compliance required immediately. Grace period protection eliminated."
    elif len(high_factors) >= 2:
        determination = "SUBSTANTIAL MODIFICATION — LIKELY"
        recommendation = "Obtain legal opinion before proceeding. Treat as substantial modification pending assessment."
    elif high_factors:
        determination = "REQUIRES ASSESSMENT"
        recommendation = "Conduct formal Art.3(23) assessment. Document reasoning. Consider legal counsel."
    elif medium_factors:
        determination = "LIKELY NOT SUBSTANTIAL"
        recommendation = "Document assessment rationale. Monitor cumulative effect of incremental changes."
    else:
        determination = "NOT SUBSTANTIAL MODIFICATION"
        recommendation = "Document assessment. Maintain modification log for audit trail."

    return {
        "change_description": change_description,
        "determination": determination,
        "recommendation": recommendation,
        "risk_factors": risk_factors,
        "factor_count": {
            "definitive": len(definitive_factors),
            "high": len(high_factors),
            "medium": len(medium_factors),
        },
    }
```
30-Item Art.108 Transitional Readiness Checklist
Grace Period Eligibility (Items 1-8)
1. Placement date documented — Written evidence that the system was placed on market or put into service before the applicable application date (contract, delivery record, commercial documentation, regulatory filing)
2. Application date identified — Correctly identified the relevant application date for this system's category (Annex III: 2 August 2026; Annex I product: sector-specific date; GPAI: 2 August 2025)
3. Grace period end date calculated — Correctly calculated when the grace period ends for this category (Annex III: 2 August 2028; Annex I: 2 August 2029; GPAI: 2 August 2026)
4. Digital Omnibus adjustments applied — Verified current applicable dates after Digital Omnibus deadline extensions — Annex III extension to December 2027 in certain categories changes grace period calculations
5. Intended purpose baseline documented — Documented description of the system's intended purpose as of the pre-application date — baseline for future substantial modification assessments
6. System characteristics baseline documented — Technical baseline record of model architecture, training data characteristics, and output characteristics as of the pre-application date
7. Grace period status communicated — Internal stakeholders (development, product, legal, compliance) are aware of the grace period timeline and its conditions
8. Regulatory status recorded — System registered in internal asset inventory with transitional status, grace period end date, and compliance owner assigned
Substantial Modification Prevention (Items 9-16)
9. Change management protocol implemented — All proposed changes to the AI system are routed through a substantial modification assessment before implementation
10. Substantial modification assessment template — A structured template for assessing each proposed change against Art.3(23) criteria, with sign-off from compliance and legal
11. Intended purpose change gate — Explicit review gate that triggers if any proposed change affects intended purpose, users, or domains — automatic substantial modification escalation
12. Training data change assessment — Process for assessing whether proposed training data changes affect the output distribution or risk profile in ways that constitute substantial modification
13. Architecture change assessment — Process for assessing whether proposed model architecture changes affect compliance characteristics under Art.9-15
14. Cumulative change monitoring — Process for tracking the cumulative effect of incremental changes that individually do not constitute substantial modification but collectively might
15. Modification log maintained — Running log of all changes made during the grace period, including the substantial modification assessment outcome for each change
16. Substantial modification escalation path — Clear escalation path if a proposed change is assessed as likely constituting substantial modification — legal review, compliance review, senior decision-maker sign-off
Grace Period Compliance Preparation (Items 17-24)
17. Gap analysis completed — Systematic comparison of current system documentation and governance practices against Art.9-15 requirements
18. Art.9 risk management gap identified — Specific gaps between current risk management practices and Art.9 requirements documented and prioritised
19. Art.10 training data gap identified — Specific gaps between current data governance and Art.10 training data requirements documented
20. Art.11 technical documentation gap identified — Assessment of documentation required by Art.11 + Annex IV that does not yet exist or needs significant enhancement
21. Art.13 transparency gap identified — Assessment of transparency documentation and user information obligations under Art.13
22. Art.14 human oversight gap identified — Assessment of human oversight mechanisms required by Art.14 and whether current system design supports them
23. Art.43 conformity assessment pathway identified — Determination of whether this system requires third-party conformity assessment (Art.43(2)) or can proceed via internal control (Art.43(1))
24. Compliance milestone plan created — Project plan with milestones for achieving full AI Act compliance before grace period end, working backward from the deadline
GPAI and Annex I Specific (Items 25-30)
- GPAI classification assessed — For AI systems that may be used as GPAI models downstream: assessed whether they qualify as GPAI models under Art.3(63) and whether GPAI transitional provisions apply
- GPAI systemic risk threshold assessed — For GPAI model providers: assessed whether the model meets the Art.51(1)(a) systemic risk threshold (≥10^25 FLOPs) or is subject to Commission designation
- Annex I sector regulation alignment confirmed — For Annex I product AI systems: identified the applicable sector regulation, the existing conformity assessment status, and how the extended Annex I transitional timeline interacts with the sector-specific conformity assessment schedule
- Notified body engagement planned — For Annex I systems: initiated contact with the relevant notified body to discuss how Art.107 dual conformity assessment integration applies to the next conformity assessment review
- CLOUD Act documentation storage assessed — Documentation generated during the grace period (intended purpose records, modification logs, gap analyses) assessed for CLOUD Act exposure and stored appropriately (EU-sovereign storage recommended)
- Post-grace-period registration planned — Identified whether the system requires EU database registration under Art.71 upon achieving full compliance, and planned the registration process to coincide with conformity assessment completion
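For the Art.51(1)(a) threshold item, a quick back-of-envelope check can tell you whether a model is anywhere near the presumption line before commissioning a formal compute accounting. This sketch uses the common ~6·N·D heuristic (roughly 6 FLOPs per parameter per training token) as an approximation; the heuristic, function names, and example figures are assumptions, not the legally operative measurement method.

```python
# Art.51(1)(a): a GPAI model is presumed to present systemic risk when
# its cumulative training compute is at or above 10^25 FLOPs.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def estimate_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough training-compute estimate via the common ~6*N*D heuristic
    (about 6 FLOPs per parameter per training token). An approximation
    for triage only, not a compliance-grade measurement."""
    return 6.0 * n_params * n_tokens

def presumed_systemic_risk(training_flops: float) -> bool:
    """Threshold check only: the Commission can also designate a model
    as presenting systemic risk regardless of this result."""
    return training_flops >= SYSTEMIC_RISK_FLOP_THRESHOLD

# Example: a 70B-parameter model trained on 15T tokens
flops = estimate_training_flops(70e9, 15e12)   # 6.3e24 FLOPs
print(presumed_systemic_risk(flops))           # -> False (below 10^25)
```

Models that come out close to the line deserve a proper compute log across all training runs, since the threshold is cumulative.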
Common Art.108 Misconceptions
Misconception 1: The grace period means no obligations until 2028
The grace period exempts systems from the Art.8-15 technical requirements and the Art.43 conformity assessment obligation during the transitional period. It does not exempt systems from all obligations. In particular:
- Systems must still comply with Art.5 prohibited practices (these applied from 2 February 2025 with no transitional provision)
- Systems must not be substantially modified without triggering full compliance
- Systems must not actively deceive users about their AI nature (the transparency obligations toward users)
- Deployers of these systems still have obligations under Art.26 that are not affected by the provider's transitional status
Misconception 2: Any update is a substantial modification
Ordinary maintenance, security patches, and performance optimisation that do not change the system's intended purpose, risk profile, or output characteristics are not substantial modifications. The EU AI Act was drafted to allow responsible ongoing system maintenance without triggering compliance resets for every minor update.
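As a rough illustration (not legal advice), the three dimensions named above — intended purpose, risk profile, output characteristics — can frame a first-pass triage. The function below is hypothetical; any "yes" answer should escalate to a full compliance assessment, never produce an automatic classification.

```python
def modification_triage(
    changes_intended_purpose: bool,
    changes_risk_profile: bool,
    changes_output_characteristics: bool,
) -> str:
    """Illustrative first-pass triage of a single change.

    A change touching none of the three dimensions is ordinarily
    routine maintenance; touching any of them warrants a full
    substantial-modification assessment by the compliance team.
    """
    if any((changes_intended_purpose,
            changes_risk_profile,
            changes_output_characteristics)):
        return "assess"   # escalate for full legal/compliance review
    return "maintenance"  # patch-level change; log it and proceed

# A security patch that leaves purpose, risk, and outputs unchanged:
print(modification_triage(False, False, False))  # -> maintenance
```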
Misconception 3: Substantial modification assessment is only needed if making major changes
Substantial modification risk is cumulative. A series of individually minor changes to training data, model architecture, and use case scope can collectively constitute a substantial modification even if no single change crossed the threshold. Modification logs and periodic cumulative assessments are important even for teams making only incremental changes.
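A modification log with a periodic cumulative check can be sketched as follows. The impact-score scale and the review threshold here are hypothetical conventions a team would calibrate for itself; the AI Act specifies no such numbers.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class ModificationRecord:
    when: date
    area: str          # e.g. "training_data", "architecture", "use_case"
    summary: str
    impact_score: int  # illustrative 1-5 score assigned at change review

# Hypothetical trigger: once logged impact scores add up past this,
# run a formal cumulative substantial-modification assessment.
CUMULATIVE_REVIEW_THRESHOLD = 10

def needs_cumulative_assessment(log: list[ModificationRecord]) -> bool:
    """Flag when individually minor changes may add up to a substantial
    modification, so a formal assessment should be performed."""
    return sum(r.impact_score for r in log) >= CUMULATIVE_REVIEW_THRESHOLD

log = [
    ModificationRecord(date(2026, 3, 1), "training_data", "Q1 data refresh", 3),
    ModificationRecord(date(2026, 6, 15), "architecture", "swapped tokenizer", 4),
    ModificationRecord(date(2026, 9, 30), "use_case", "new user segment enabled", 4),
]
print(needs_cumulative_assessment(log))  # -> True (3 + 4 + 4 = 11 >= 10)
```

The append-only log itself is as valuable as the check: it documents that each change was assessed at the time it was made.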
Misconception 4: The grace period automatically extends if the project gets delayed
The grace period end date is fixed regardless of whether the organisation has made progress on compliance. If a system's grace period ends on 2 August 2028 and the organisation has not completed conformity assessment by then, the system is non-compliant from that date. Compliance planning must account for realistic conformity assessment timelines, including notified body lead times, which can run 12-18 months for complex systems.
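Working backward from the fixed deadline can be reduced to simple date arithmetic. In the sketch below, the 3-month internal buffer and the coarse 30-day month conversion are assumptions; only the 2 August 2028 end date and the 12-18 month lead-time range come from the discussion above.

```python
from datetime import date, timedelta

GRACE_PERIOD_END = date(2028, 8, 2)  # fixed; it does not move if the project slips

def latest_assessment_start(deadline: date,
                            lead_time_months: int,
                            buffer_months: int = 3) -> date:
    """Latest date to begin conformity assessment, working backward from
    the fixed grace-period end. Lead time reflects notified body queues
    (12-18 months for complex systems); the buffer is a hypothetical
    internal safety margin. Months convert coarsely at 30 days each."""
    return deadline - timedelta(days=(lead_time_months + buffer_months) * 30)

# With an 18-month notified body lead time plus a 3-month buffer,
# conformity assessment must start in late 2026:
print(latest_assessment_start(GRACE_PERIOD_END, lead_time_months=18))  # -> 2026-11-11
```

The point the arithmetic makes concrete: for complex systems, "2028 compliance" is in practice a 2026 project start.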
Art.108 in the Final EU AI Act Chapter Architecture
Article 108 belongs to Chapter XII (Final Provisions) alongside Art.107 (Amendments to Other EU Legislation), Art.109 (Entry into Force), and Art.113 (Application). Understanding how these articles fit together clarifies the temporal architecture of the entire regulation:
- Art.107 establishes the permanent integration with sector-specific EU law
- Art.108 establishes the transitional path for legacy systems into that integrated framework
- Art.109 establishes when the regulation enters into force (1 August 2024)
- Art.113 establishes the application dates from which obligations become binding
Art.108 is the bridge between Art.109 (when the regulation became law) and Art.113 (when it became binding) for the subset of AI systems that existed before the binding dates. Without Art.108, the AI Act would have required immediate compliance from all existing systems on the same day it became applicable — an impossible standard for systems built and validated before the regulation was even finalised.
For developers maintaining AI systems that fall under the Art.108 transitional provisions, the regulation's final provisions represent not just an end-of-document formality but the practical framework within which legacy compliance planning must operate.
This post is part of the sota.io EU AI Act Developer Guide series covering all 113 articles of Regulation (EU) 2024/1689. For the Art.107 cross-regulatory integration guide, see our coverage of amendments to other EU legislation. For the Digital Omnibus deadline extension impact on these transitional timelines, see our breaking analysis of the Annex III and Annex I date adjustments.