EU AI Act Art.106: Evaluation and Review — Commission Report Cycle Developer Guide (2026)
The EU AI Act entered into force on 1 August 2024. By 2 August 2029 — five years later — the European Commission must submit a report to the European Parliament and the Council evaluating the entire Regulation and, where appropriate, make proposals for amendments. This cycle repeats every four years. Article 106 is the mechanism by which the EU AI Act corrects itself.
For most compliance frameworks, evaluation clauses are boilerplate — they exist but rarely matter. The EU AI Act evaluation under Art.106 is different because it is explicitly required to assess the appropriateness of the very rules that carry the heaviest compliance burden: the Art.5 prohibited practices, the Annex III high-risk AI list, the GPAI systemic risk threshold, and the penalty tiers in Art.99. If those assessments conclude that the current rules are too broad, too narrow, or disproportionate, the evaluation report is the direct pathway to regulatory change.
The practical implication: Your 2024–2029 compliance investments should be calibrated against the knowledge that the first evaluation report arrives in August 2029. Rules that seem permanent today — the 10^25 FLOP GPAI threshold, the Annex III category list, the €35M prohibited AI fine ceiling — are all explicitly in scope for review. This is not a reason to delay compliance; it is a reason to design compliance programmes that can adapt rather than programmes that assume the current rules are final.
What Article 106 Actually Requires
Article 106 establishes a structured evaluation and review obligation for the Commission. The core elements are:
1. Evaluation timeline. The Commission must produce its first evaluation report by 2 August 2029. Subsequent evaluations follow every four years: 2033, 2037, and so on (see the short date sketch after this list). The four-year cycle is calibrated to give the Regulation time to mature before assessment while not waiting so long that significant problems go unaddressed.
2. Mandatory report to Parliament and Council. The evaluation is not internal. The Commission must submit the report to the European Parliament and to the Council of the European Union. Both institutions receive the same document, which creates transparency across all three legislative actors.
3. Legislative proposals where appropriate. Where the evaluation identifies that the Regulation needs updating, the Commission "shall, where appropriate, make proposals for amendments." This phrasing is standard Commission language — it preserves Commission discretion on whether to propose, but the evaluation creates political accountability: if the Commission finds problems and does not propose fixes, Parliament or Council can use the report to push for action.
4. Evaluation scope. The evaluation must cover a defined set of topics. Understanding what Art.106 explicitly requires the Commission to assess tells you which current rules are most likely to change based on the evaluation cycle.
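On the first element, the cadence is fixed enough to encode directly in a compliance calendar. A minimal sketch (the helper name is mine; the dates follow the Art.106 schedule described above):

```python
from datetime import date

def art106_report_deadlines(count: int = 4) -> list[date]:
    """Art.106 deadlines: first report by 2 August 2029, then every four years."""
    return [date(2029 + 4 * i, 8, 2) for i in range(count)]

print(art106_report_deadlines())
# [datetime.date(2029, 8, 2), datetime.date(2033, 8, 2),
#  datetime.date(2037, 8, 2), datetime.date(2041, 8, 2)]
```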
What the Evaluation Must Cover
The Art.106 evaluation is not a general policy review — it has specific required content. The areas that are explicitly in scope are the most developer-relevant parts of the regulation.
Fundamental Rights and Safety Impact
The evaluation must assess the impact of the Regulation on fundamental rights, rule of law, and democratic institutions, as well as on safety and security. This framing reflects the dual purpose of the AI Act: protecting against AI risks while preserving the values and institutions that form the basis of EU society.
For developers, this assessment has an asymmetric risk profile. If AI systems that are currently excluded from high-risk classification (or not covered by the Regulation at all) are found to have significant fundamental rights impacts, the evaluation can be the trigger for expanding Annex III or Art.5. Conversely, if evidence shows that certain Annex III categories carry low actual risk, the evaluation provides the pathway for de-listing.
Art.5 Prohibited Practices
The prohibited AI practices in Art.5 are the most absolute obligations in the AI Act — they carry the highest penalty tier (€35M or 7% of global annual turnover) and, beyond the narrow exceptions written into Art.5 itself, admit no derogations. The Art.106 evaluation is explicitly required to assess whether the prohibited practices remain appropriate and whether they should be extended or restricted.
Current Art.5 prohibited practices in scope for evaluation:
- Subliminal manipulation techniques (Art.5(1)(a))
- Exploitation of vulnerabilities (Art.5(1)(b))
- Social scoring (Art.5(1)(c))
- Predictive policing based solely on profiling (Art.5(1)(d))
- Untargeted scraping for facial recognition databases (Art.5(1)(e))
- Emotion recognition in workplace and education settings (Art.5(1)(f))
- Biometric categorisation for sensitive characteristics (Art.5(1)(g))
- Real-time remote biometric identification in public spaces (Art.5(1)(h)) — with narrow law enforcement exceptions
Developer implication: Any of these prohibitions could be extended to adjacent practices in the 2029 evaluation. If you are building systems in the broad vicinity of biometrics, emotion detection, behavioural prediction, or social scoring — even if your current system does not fall within Art.5 — the evaluation is a structural risk horizon to track. Maintaining documentation of why your system falls outside each Art.5 category is a minimum; updating that analysis against the evaluation report is prudent compliance practice.
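One lightweight way to maintain that documentation is a per-category exclusion record with an explicit review trigger, so the analysis can be refreshed against each evaluation milestone. A minimal sketch, with field names of my own invention rather than anything mandated by the Act:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Art5ExclusionRecord:
    """Documented rationale for why a system falls outside one Art.5 category."""
    system_name: str
    art5_category: str    # e.g. "Art.5(1)(f) emotion recognition"
    rationale: str        # why the system is out of scope today
    last_reviewed: date
    review_against: str   # the event that should trigger re-review

record = Art5ExclusionRecord(
    system_name="candidate-screening-v2",
    art5_category="Art.5(1)(f) emotion recognition",
    rationale="Scores CV text only; no affect inference from biometric data",
    last_reviewed=date(2026, 3, 1),
    review_against="2029 Art.106 evaluation report",
)
```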
The evaluation could also recommend relaxing or removing prohibitions if evidence shows they are not achieving their fundamental rights protection goals. Given the political dynamics in the European Parliament, however, extensions are more probable than contractions. Either way, this is an empirical question that the evaluation is designed to answer.
Annex III High-Risk AI Classification
The Annex III list of high-risk AI application areas is the most operationally significant part of the Regulation for most developers. It determines whether you face the full Art.8–15 compliance regime — quality management systems, technical documentation, conformity assessment, EU database registration, human oversight obligations, and ongoing post-market monitoring.
Art.106 evaluates whether Annex III is still appropriate. The evaluation assesses:
- Whether the eight current categories (biometric systems, critical infrastructure, education, employment, essential services, law enforcement, migration, administration of justice) remain the right list
- Whether new AI application areas have emerged since 2024 that carry comparable risk to listed categories
- Whether the AI-specific risk criteria in Art.7 — which govern delegated act expansion — are being applied correctly
This matters because delegated act expansion under Art.104 and Art.7(1) can happen faster than the Art.106 evaluation cycle — delegated acts can add Annex III entries with 3 months' notice. But the evaluation provides a broader review: if entire new domains (not just specific use cases) need to be added, a legislative amendment following the Art.106 evaluation cycle is the pathway.
Developer implication for Annex III planning: The 2029 evaluation will assess five years of experience with the current Annex III list. AI systems you deploy in 2025–2026 in adjacent areas that are not currently listed may be subject to high-risk classification from 2030 onward if the evaluation recommends expansion. Designing AI systems to be conformity-assessment-ready — even when not yet required — is worth the architectural investment if your application area is adjacent to current Annex III categories.
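A rough way to operationalise "adjacent" is to score each deployment area by whether it touches a listed category and how long conformity readiness would take. The sketch below uses a triage rule of my own; the category shorthand matches the eight areas listed above:

```python
ANNEX_III_CATEGORIES = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration", "justice",
}

def annex_iii_exposure(area: str, adjacent_to: set[str],
                       months_to_readiness: int) -> str:
    """Rough triage: listed areas must comply now; adjacent areas weigh
    readiness lead time against a possible post-2029 expansion."""
    if area in ANNEX_III_CATEGORIES:
        return "IN SCOPE NOW: full Art.8-15 regime applies"
    if adjacent_to & ANNEX_III_CATEGORIES:
        if months_to_readiness > 12:
            return "ADJACENT: start conformity-readiness work before the 2029 report"
        return "ADJACENT: readiness gap manageable; monitor the evaluation"
    return "DISTANT: track evaluation scope only"

print(annex_iii_exposure("workforce_analytics", {"employment"}, 18))
# ADJACENT: start conformity-readiness work before the 2029 report
```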
GPAI Systemic Risk Threshold
Article 51(3) allows the Commission to adjust the general-purpose AI model systemic risk threshold (currently 10^25 floating-point operations of training compute) via delegated act under Art.104. But Art.106 requires the Commission to evaluate whether the GPAI provisions as a whole — including the systemic risk designation mechanism — remain appropriate.
The current 10^25 FLOP threshold was calibrated to frontier models as of the 2022–2023 drafting period. As compute efficiency improves through algorithmic advances — techniques like mixture of experts, distillation, and inference-time scaling — models with capabilities comparable to today's systemic risk models may be trainable at significantly lower compute. Conversely, if the threshold was set too low and is capturing models that do not actually pose systemic risk, the evaluation can recommend raising it.
Current GPAI systemic risk implications under Art.55:
- Adversarial testing (red-teaming) obligations
- Incident reporting to the AI Office
- Cybersecurity measures for model weights
- Energy efficiency reporting
The 2029 evaluation will occur at a point when the AI Office has five years of experience designating models under Art.51(4) (implementing acts) and enforcing Art.55 obligations. If the designation mechanism has produced over- or under-inclusive results, the evaluation creates the basis for correction.
Developer implication: If you are building or deploying large-scale AI models at compute levels approaching 10^25 FLOP, the first evaluation in 2029 is a critical horizon. The threshold could move in either direction. Planning your model architecture and training compute budget against the possibility of threshold revision — rather than assuming the current 10^25 FLOP line is permanent — is part of GPAI compliance resilience.
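The scenario analysis suggested in the checklist later in this guide reduces to comparing model compute against candidate threshold values. A short sketch (the candidate thresholds are illustrative scenarios, not Commission proposals):

```python
def threshold_scenarios(
    model_flops: float,
    candidates: tuple[float, ...] = (1e24, 1e25, 1e26),
) -> dict[str, bool]:
    """Which hypothetical Art.51(3) threshold values would this model cross?"""
    return {f"{t:.0e} FLOP": model_flops >= t for t in candidates}

# A model trained at 5e24 FLOP: below today's line, above a lowered one
print(threshold_scenarios(5e24))
# {'1e+24 FLOP': True, '1e+25 FLOP': False, '1e+26 FLOP': False}
```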
Penalty Tiers
Article 99 establishes three penalty tiers:
- Art.99(2): Up to €35,000,000 or 7% of total worldwide annual turnover for prohibited AI practices
- Art.99(3): Up to €15,000,000 or 3% for high-risk AI violations and most other infringements
- Art.99(4): Up to €7,500,000 or 1.5% for incorrect or misleading information to supervisory authorities
The evaluation under Art.106 must assess whether these penalty levels remain appropriate — specifically whether they are effective, proportionate, and dissuasive. Five years of enforcement experience will provide evidence on whether the current tiers are achieving their deterrence goals or whether they are over-penalising certain categories of infringement.
What could change: The percentage-of-turnover caps are calibrated to create proportionate penalties for large corporations while maintaining absolute floor amounts for small companies. If enforcement shows that the floors are too low to deter wealthy startups or that the percentage caps are creating disproportionate liability for SMEs, the evaluation can recommend revision. The evaluation is also specifically required to assess SME impact — suggesting that SME-specific penalty adjustments are a foreseeable evaluation output.
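Computing your current maximum exposure per tier is a one-line calculation, assuming the usual "whichever is higher" rule for undertakings that the floor-and-cap discussion above implies:

```python
# (fixed cap in EUR, turnover percentage) per tier, as stated above
TIERS = {
    "prohibited_practices": (35_000_000, 0.07),   # Art.99(2)
    "high_risk_violations": (15_000_000, 0.03),   # Art.99(3)
    "incorrect_information": (7_500_000, 0.015),  # Art.99(4)
}

def max_fine(tier: str, annual_turnover_eur: float) -> float:
    """Maximum exposure: the higher of the fixed amount and the turnover cap."""
    fixed, pct = TIERS[tier]
    return max(fixed, pct * annual_turnover_eur)

# EUR 2bn turnover: the 7% turnover cap (EUR 140M) exceeds the EUR 35M fixed amount
print(f"{max_fine('prohibited_practices', 2_000_000_000):,.0f}")  # 140,000,000
```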
SME and Startup Impact
The Art.106 evaluation explicitly requires assessment of the impact on the competitiveness of EU AI companies and on SMEs. This is not general language — it reflects specific political commitments made during the AI Act negotiation to reduce compliance burden on smaller players.
What constitutes unacceptable burden on SMEs is not defined ex ante. The evaluation will assess actual compliance costs reported across the first five years of the Regulation. If those costs are found to be disproportionate, the Commission has both the evaluation report and the delegated act mechanism (Art.104) to implement relief.
Developer implication: If you are an SME (fewer than 250 employees, turnover under €50M) and you are incurring significant AI Act compliance costs, documenting those costs systematically serves a dual purpose: it demonstrates compliance seriousness to regulators and it provides data that can inform the Art.106 evaluation. National competent authorities submit inputs to the evaluation; SME representative bodies also feed into the process. Well-documented compliance cost data carries weight.
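"Systematically" here means structured and dated, so the figures can later be aggregated into a consultation response. One possible shape for such records (the category tags are my own illustration):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ComplianceCostEntry:
    """One dated AI Act compliance expenditure, tagged by obligation."""
    incurred: date
    obligation: str       # e.g. "technical documentation", "conformity assessment"
    cost_eur: float
    cost_type: str        # "internal hours (costed)" or "external spend"

def annual_total(entries: list[ComplianceCostEntry], year: int) -> float:
    """Aggregate yearly spend, suitable for a consultation submission."""
    return sum(e.cost_eur for e in entries if e.incurred.year == year)
```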
General-Purpose AI Developments
The GPAI provisions in Chapter V (Art.51–68) were among the last sections finalised in AI Act negotiations and represent the least mature regulatory framework in the Regulation. The evaluation is required to specifically assess whether Chapter V remains appropriate given developments in general-purpose AI technology.
This includes assessment of:
- Whether the code of practice mechanism (Art.56) is functioning as intended
- Whether the AI Office (Art.64) has sufficient enforcement powers
- Whether the transparency obligations for GPAI models (Art.53) are producing useful information for downstream developers
- Whether the systemic risk designation mechanism needs refinement
The AI Office's first annual report under Art.97 will provide early evidence for all of these questions. By 2029, five years of AI Office operation will give the evaluation substantial empirical basis for GPAI-specific recommendations.
The Evaluation Process: How It Works in Practice
Understanding how evaluations turn into regulation changes helps developers calibrate when to act on evaluation signals versus when to wait.
Step 1: Commission Preparation (2027–2029)
The Commission does not wait until 2029 to begin preparing the evaluation. Preparatory work typically starts 18–24 months before the deadline. This includes:
- Requesting Member State reports on enforcement experience
- Commissioning external studies (typically from JRC or academic partners)
- Consulting with the European AI Board (Art.65) and AI Office (Art.64)
- Public stakeholder consultation (legally required under Better Regulation guidelines)
Developer action: Watch for public consultations on the AI Act evaluation beginning around 2027. These consultations are the primary mechanism for industry input. Your responses, particularly if they document specific compliance challenges with evidence, directly inform the evaluation report.
Step 2: Evaluation Report Publication (by 2 August 2029)
The evaluation report is made public and submitted to Parliament and Council simultaneously. It will receive significant attention from AI industry associations, law firms, and compliance teams globally.
Developer action: Read the evaluation report when published. The report will explicitly identify which provisions are under consideration for change and the Commission's preliminary position. This is typically 12–18 months before any legislative proposal.
Step 3: Legislative Proposal (If Needed)
Where the evaluation recommends legislative amendment — as opposed to administrative adjustments that can be made via delegated act — the Commission publishes a formal legislative proposal. This follows the standard EU legislative procedure: Commission proposal → Parliament amendments → Council position → trilogue → adoption.
For significant amendments (changes to Art.5 prohibited practices, major Annex III revision, new chapters), the legislative cycle typically takes 2–3 years from Commission proposal to entry into force. This means legislative changes recommended in the 2029 evaluation report are unlikely to enter into force before 2031–2032.
Developer action: A Commission evaluation report recommending legislative amendment is not an immediate compliance obligation. It is a 2–4 year regulatory change horizon. Update your long-term compliance roadmap when a recommendation is published, not when it is proposed.
Step 4: Delegated Act Fast-Track
For adjustments within the Commission's existing delegated act powers (expanding Annex III under Art.7, adjusting the GPAI threshold under Art.51(3)), the Commission can act more quickly — 3 months from notification under Art.104. Evaluation findings that fall within existing delegated act scope can translate into binding obligations within one year of the evaluation report.
Developer action: Distinguish between evaluation recommendations that require legislative amendment (slow path, 2–4 years) and those that can be implemented via delegated act (fast path, 3–12 months). The distinction is: does the recommended change require amending the Regulation text itself, or does it fall within powers already delegated to the Commission?
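That test maps directly onto planning horizons. A sketch encoding the two-question triage, with month ranges restating the estimates given above:

```python
def change_horizon_months(requires_text_amendment: bool,
                          within_delegated_powers: bool) -> range:
    """Rough planning horizon for an evaluation recommendation."""
    if requires_text_amendment:
        return range(24, 49)   # full legislative procedure: ~2-4 years
    if within_delegated_powers:
        return range(3, 13)    # delegated act under Art.104: ~3-12 months
    return range(1, 3)         # implementing act: weeks to months

h = change_horizon_months(requires_text_amendment=False,
                          within_delegated_powers=True)
print(f"plan for {h.start}-{h.stop - 1} months")  # plan for 3-12 months
```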
Evaluation vs Annual Commission Report Under Art.97
The Art.106 evaluation (every four years) should not be confused with the annual Commission report required under Art.97. These are separate obligations:
| Dimension | Art.97 Annual Report | Art.106 Evaluation |
|---|---|---|
| Frequency | Annual | Every 4 years (first: Aug 2029) |
| Scope | AI Act implementation status | Whether the Regulation needs amending |
| Output | Status report | Proposals for amendment where appropriate |
| Addressee | EP + Council | EP + Council |
| Legislative effect | None directly | Can trigger proposals |
| AI Office role | AI Office reports on GPAI | Inputs to Commission evaluation |
The Art.97 annual reports will be important leading indicators for the Art.106 evaluation. Patterns in annual reports — repeated enforcement failures, systematic compliance cost data, documented GPAI threshold strain — will likely shape the evaluation's conclusions.
Member State Inputs and National Variation
The Art.106 evaluation requires Member States to provide information to the Commission. Member States have direct enforcement experience that the Commission lacks — national market surveillance authorities (MSAs) conduct inspections, impose fines, and observe compliance behaviour in their jurisdictions. This creates 27 potential evidence sources for the evaluation.
What Member State inputs cover:
- Number and types of AI Act investigations initiated
- Penalties imposed (amount, category of infringement, type of operator)
- Challenges in cross-border enforcement under Art.81 (mutual assistance)
- SME compliance cost data from national support programmes
- Technical challenges in conformity assessment of specific AI system categories
Developer implication: Member State enforcement patterns visible to you — inspections in your jurisdiction, industry guidance from national MSAs, voluntary compliance feedback — are the same data that feeds the national inputs to the Art.106 evaluation. If your national MSA has identified a compliance practice that it considers inadequate, that view will likely appear in the evaluation input.
CLOUD Act Intersection
The evaluation process generates documentation — your consultation responses, compliance cost analyses, internal assessments of how the regulation has affected your AI development practices. If this documentation is stored on US cloud infrastructure, it is potentially compellable under the US CLOUD Act (Clarifying Lawful Overseas Use of Data Act).
Evaluation-specific CLOUD Act exposure:
- Public consultation responses are published — no cloud exposure issue
- Internal evaluation-preparation documents (compliance cost analyses, regulatory risk assessments, legal opinions on which rules affect your business) are private — cloud exposure if US-hosted
- Communications with external legal counsel about AI Act evaluation strategy — potentially subject to US legal process if stored on US-governed infrastructure
- Detailed technical documentation prepared for evaluation submission — contains AI system specifications
Specific risk scenario: A developer preparing a submission to the Commission's AI Act evaluation public consultation typically prepares internal documentation supporting that submission — technical analyses, cost estimates, legal interpretations. If that documentation is stored on AWS, Azure, or Google Cloud with US-governed accounts, it is accessible to US law enforcement through CLOUD Act orders without notification to the developer or the Commission.
Mitigation: Store AI Act evaluation documentation — internal analyses, legal opinions, compliance cost data — on EU-sovereign infrastructure. This is particularly relevant for documentation that you do not intend to make public (as public submissions are already in the open domain).
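The mitigation reduces to a publication-intent rule: anything not destined for the public consultation record defaults to EU-sovereign storage. A sketch of that triage (the document-type labels are mine):

```python
SENSITIVE_DOC_TYPES = {
    "legal_opinion", "cost_analysis",
    "regulatory_risk_assessment", "technical_documentation",
}

def storage_requirement(doc_type: str, will_be_published: bool) -> str:
    """CLOUD Act triage for evaluation-related documents."""
    if will_be_published:
        # Public consultation submissions end up in the open domain anyway
        return "any infrastructure (content becomes public)"
    if doc_type in SENSITIVE_DOC_TYPES:
        return "EU-sovereign infrastructure only"
    return "EU-sovereign preferred; assess case by case"

print(storage_requirement("legal_opinion", will_be_published=False))
# EU-sovereign infrastructure only
```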
Python Tooling for Art.106 Tracking
```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum
from typing import Optional


class EvaluationStatus(Enum):
    UPCOMING = "upcoming"
    PREPARATION = "preparation_phase"
    CONSULTATION = "public_consultation"
    DRAFTING = "report_drafting"
    PUBLISHED = "published"
    IMPLEMENTING = "implementing_recommendations"


class ChangePathway(Enum):
    LEGISLATIVE = "full_legislative_procedure"    # 2-4 years
    DELEGATED_ACT = "delegated_act_art104"        # 3-12 months
    IMPLEMENTING_ACT = "implementing_act_art105"  # weeks to months
    NO_CHANGE = "no_change_recommended"


@dataclass
class EvaluationCycle:
    cycle_number: int
    report_deadline: date
    preparation_start: date
    consultation_start: Optional[date]
    publication_date: Optional[date]
    status: EvaluationStatus

    @property
    def days_to_deadline(self) -> int:
        return (self.report_deadline - date.today()).days

    @property
    def urgency(self) -> str:
        days = self.days_to_deadline
        if days < 0:
            return "PAST"
        elif days < 365:
            return "CRITICAL"  # within 1 year
        elif days < 730:
            return "HIGH"      # 1-2 years
        elif days < 1095:
            return "MEDIUM"    # 2-3 years
        else:
            return "LOW"       # 3+ years


@dataclass
class ProvisionReviewRisk:
    provision: str             # e.g. "Art.5(1)(f) emotion recognition"
    current_rule: str
    review_probability: str    # "HIGH", "MEDIUM", "LOW"
    change_direction: str      # "expansion", "contraction", "threshold_adjustment"
    change_pathway: ChangePathway
    developer_action_if_changed: str
    monitoring_trigger: str    # what to watch for


@dataclass
class Art106ComplianceProfile:
    organisation_name: str
    evaluation_cycles: list[EvaluationCycle] = field(default_factory=list)
    provision_risks: list[ProvisionReviewRisk] = field(default_factory=list)
    consultation_responses: list[dict] = field(default_factory=list)
    annex_iii_categories: list[str] = field(default_factory=list)
    sme_status: bool = False

    def add_standard_cycles(self) -> None:
        """Add the standard Art.106 cycles: first report due 2 August 2029
        (five years after entry into force), then every four years."""
        for i, year in enumerate([2029, 2033, 2037, 2041]):
            self.evaluation_cycles.append(EvaluationCycle(
                cycle_number=i + 1,
                report_deadline=date(year, 8, 2),
                # Commission preparation typically starts ~2 years ahead
                preparation_start=date(year - 2, 8, 2),
                consultation_start=None,
                publication_date=None,
                status=EvaluationStatus.UPCOMING,
            ))

    def next_evaluation(self) -> Optional[EvaluationCycle]:
        future = [c for c in self.evaluation_cycles if c.days_to_deadline > 0]
        return min(future, key=lambda c: c.days_to_deadline) if future else None

    def high_risk_provisions(self) -> list[ProvisionReviewRisk]:
        return [r for r in self.provision_risks if r.review_probability == "HIGH"]

    def generate_report(self) -> str:
        lines = [f"=== Art.106 Evaluation Tracker: {self.organisation_name} ===\n"]
        next_eval = self.next_evaluation()
        if next_eval:
            lines.append(f"Next evaluation: {next_eval.report_deadline} "
                         f"({next_eval.days_to_deadline}d, {next_eval.urgency})")
            lines.append(f"  Status: {next_eval.status.value}")
            lines.append(f"  Preparation expected from: {next_eval.preparation_start}\n")
        high_risk = self.high_risk_provisions()
        if high_risk:
            lines.append(f"HIGH RISK provisions ({len(high_risk)} items):")
            for r in high_risk:
                lines.append(f"  - {r.provision}")
                lines.append(f"    Direction: {r.change_direction} via {r.change_pathway.value}")
                lines.append(f"    Action if changed: {r.developer_action_if_changed}")
                lines.append(f"    Watch for: {r.monitoring_trigger}")
        return "\n".join(lines)


def build_eu_ai_act_evaluation_profile(
    org_name: str,
    uses_biometrics: bool = False,
    has_gpai_model: bool = False,
    gpai_flops: Optional[float] = None,
    annex_iii_categories: Optional[list[str]] = None,
    is_sme: bool = False,
) -> Art106ComplianceProfile:
    """Build an Art.106 evaluation risk profile for a specific organisation."""
    profile = Art106ComplianceProfile(
        organisation_name=org_name,
        annex_iii_categories=annex_iii_categories or [],
        sme_status=is_sme,
    )
    profile.add_standard_cycles()

    # Standard provision risks for all developers
    profile.provision_risks.append(ProvisionReviewRisk(
        provision="Art.5 Prohibited Practices",
        current_rule="8 categories of absolutely prohibited AI uses",
        review_probability="HIGH",
        change_direction="expansion to adjacent practices",
        change_pathway=ChangePathway.LEGISLATIVE,
        developer_action_if_changed="Audit systems in adjacent areas (biometrics, profiling, emotion detection)",
        monitoring_trigger="Commission public consultation on Art.5 scope (expected 2027-2028)",
    ))
    profile.provision_risks.append(ProvisionReviewRisk(
        provision="Annex III High-Risk Categories",
        current_rule="8 high-risk application areas",
        review_probability="HIGH",
        change_direction="expansion to new application areas",
        change_pathway=ChangePathway.DELEGATED_ACT,  # can be done via Art.7 + Art.104
        developer_action_if_changed="Trigger full Art.8-15 compliance programme within 3 months",
        monitoring_trigger="Commission work programme mentioning Annex III review; delegated act notification",
    ))
    profile.provision_risks.append(ProvisionReviewRisk(
        provision="Art.99 Penalty Tiers",
        current_rule="€35M/7%, €15M/3%, €7.5M/1.5% tiers",
        review_probability="MEDIUM",
        change_direction="SME adjustments; possible tier restructuring",
        change_pathway=ChangePathway.LEGISLATIVE,
        developer_action_if_changed="Update internal liability assessments; D&O insurance review",
        monitoring_trigger="Evaluation report assessing enforcement proportionality",
    ))

    if uses_biometrics:
        profile.provision_risks.append(ProvisionReviewRisk(
            provision="Art.5(1)(h) Real-Time Remote Biometric Identification",
            current_rule="Prohibited in public spaces with narrow law enforcement exceptions",
            review_probability="HIGH",
            change_direction="potential expansion of prohibition scope",
            change_pathway=ChangePathway.LEGISLATIVE,
            developer_action_if_changed="Halt deployment pending legal review; redesign to avoid Art.5(1)(h)",
            monitoring_trigger="EDPB or fundamental rights body reports on biometric AI risks",
        ))

    if has_gpai_model:
        model_flops = gpai_flops or 0.0
        systemic_risk_threshold = 1e25  # current Art.51(3) line, training FLOP
        proximity = model_flops / systemic_risk_threshold
        if proximity > 0.5:
            probability = "HIGH"    # close to the line: high risk of crossing after adjustment
        elif proximity > 0.1:
            probability = "MEDIUM"
        else:
            probability = "LOW"
        profile.provision_risks.append(ProvisionReviewRisk(
            provision="Art.51(3) GPAI Systemic Risk Threshold (10^25 FLOP)",
            current_rule="10^25 training FLOP threshold for systemic risk designation",
            review_probability=probability,
            change_direction="threshold_adjustment",
            change_pathway=ChangePathway.DELEGATED_ACT,
            developer_action_if_changed="Trigger Art.55 obligations (adversarial testing, incident reporting, cybersecurity)",
            monitoring_trigger="AI Office annual report citing threshold adequacy; Art.51(3) delegated act notification",
        ))
    return profile


# Example: build a profile for a mid-size AI company
profile = build_eu_ai_act_evaluation_profile(
    org_name="ACME AI Systems GmbH",
    uses_biometrics=False,
    has_gpai_model=True,
    gpai_flops=5e24,  # 50% of the current threshold
    annex_iii_categories=["employment_recruitment"],
    is_sme=False,
)
print(profile.generate_report())
```
Art.106 Evaluation Readiness Checklist
Evaluation Timeline (5 items)
- ☐ First evaluation deadline (2 August 2029) entered in compliance calendar with 24-month preparation alert (alert date: 2 August 2027)
- ☐ Subsequent evaluation cycles (2033, 2037) entered with same alert cadence
- ☐ Art.97 annual Commission reports tracked as leading indicators for Art.106 evaluation (first report expected 2025–2026)
- ☐ Watch list for Commission "call for evidence" on AI Act implementation (typically 18 months before evaluation deadline)
- ☐ AI Office published documents (guidelines, annual reports) flagged for review as evaluation inputs
Prohibited Practices (Art.5) Monitoring (5 items)
- ☐ All AI systems categorised against each Art.5(1)(a)–(h) prohibited practice with documented rationale for exclusion
- ☐ Adjacent practices (systems near Art.5 prohibitions but currently outside scope) identified and documented
- ☐ Monitoring process in place for Commission consultation documents on Art.5 scope (expected 2027)
- ☐ Emergency response plan if Art.5 is extended to cover current system category (halt/redesign protocol)
- ☐ Whistleblower reporting channel in place so that Art.5 concerns can be raised internally before any external report
Annex III Classification Watch (5 items)
- ☐ Current AI systems mapped to all eight Annex III categories with explicit inclusion/exclusion analysis
- ☐ Adjacent application areas (not currently in Annex III but in similar risk domain) identified
- ☐ Delegated act monitoring active — Commission work programme checked for planned Annex III delegated acts
- ☐ Conformity assessment readiness assessed for each adjacent application area (how quickly could you comply?)
- ☐ Art.7 expansion criteria (risk to fundamental rights, health, safety; volume of potentially affected persons) applied to your system to estimate delegated act expansion risk
GPAI Threshold Planning (5 items)
- ☐ Training compute for all GPAI models documented in FLOPs with date of measurement
- ☐ Current position relative to 10^25 FLOP threshold calculated (absolute and as percentage)
- ☐ Threshold scenario analysis: if threshold drops to 10^24 FLOP, which systems cross?
- ☐ Art.55 obligations (adversarial testing, incident reporting, cybersecurity) assessed for pre-emptive implementation if threshold proximity is >20%
- ☐ AI Office systemic risk designation implementing act monitoring active (Art.51(4) — designation by implementing act can occur at any time, not only at evaluation)
Penalty and SME Review (4 items)
- ☐ Total maximum AI Act fine exposure calculated under current Art.99 tiers (% of global annual turnover)
- ☐ D&O insurance policy reviewed for AI Act regulatory enforcement coverage
- ☐ SME status verified and documented (if applicable) — relevant to evaluation assessment of disproportionate burden
- ☐ Internal compliance cost tracking in place — annual compliance expenditure data available for potential evaluation consultation submission
Consultation Participation (4 items)
- ☐ Public Affairs or Government Affairs function briefed on AI Act evaluation consultation timeline
- ☐ Internal process for preparing consultation responses established (who decides, who drafts, legal review)
- ☐ Industry association membership reviewed — relevant associations likely to submit joint responses to evaluation consultation
- ☐ Public consultation responses will be published — internal review confirms no sensitive information inadvertently disclosed
Documentation and CLOUD Act (2 items)
- ☐ Internal AI Act evaluation preparation documents (cost analyses, regulatory risk assessments, legal opinions) stored on EU-sovereign infrastructure
- ☐ CLOUD Act exposure assessment for AI Act compliance documentation updated to include evaluation-period documentation
Related Reading
- EU AI Act Art.104: Exercise of the Delegation — Developer Guide — delegated act mechanism that can implement evaluation recommendations for Annex III and GPAI threshold without waiting for full legislative procedure
- EU AI Act Art.105: Committee Procedure — Comitology Developer Guide — implementing acts mechanism for evaluation recommendations that require standardised procedures and forms
- EU AI Act Art.99: Administrative Fines — Developer Guide — the penalty tier structure that Art.106 explicitly requires to be reviewed for proportionality
- EU AI Act Art.103: Entry into Force and Application Dates — Developer Guide — the application date timeline within which the first evaluation cycle operates