EU AI Act Art.62: AI Office Enforcement Powers over GPAI Models — Corrective Measures, Market Withdrawal, and Emergency Action (2026)
EU AI Act Article 62 is where Chapter VI governance translates into binding legal force against GPAI model providers. Where Art.55 gave the AI Office evaluation powers — the ability to assess whether a GPAI model complies with Chapter V obligations — Art.62 gives the AI Office the toolkit to compel compliance: issuing corrective measures, ordering market withdrawal, and taking emergency action when a GPAI model poses an immediate systemic risk.
The enforcement structure is deliberately centralised: the AI Office, not national competent authorities, exercises Art.62 powers against GPAI model providers. This avoids the 27-NCA fragmentation problem that would arise if each member state could independently compel changes to a GPAI model deployed across the entire EU. A corrective measure issued by the AI Office under Art.62 has effect across all 27 member states simultaneously.
For GPAI model providers — and for the infrastructure operators and cloud providers whose platforms host GPAI model training and inference — Art.62 defines what happens when the AI Office concludes that Chapter V obligations have not been met, and what the legal and operational consequences of that conclusion are.
Art.62 became applicable on 2 August 2025 as part of the phased entry into force of Regulation (EU) 2024/1689.
Art.62 in the Chapter V and Chapter VI Enforcement Architecture
Art.62 sits at the output end of the AI Office's GPAI enforcement cascade:
| Stage | Legal Basis | Action | Binding? |
|---|---|---|---|
| 1. Monitoring | Art.59(4) | AI Office monitors GPAI model landscape for compliance signals | — |
| 2. Evaluation | Art.55 | AI Office requests documentation, conducts model evaluation, engages Scientific Panel | No (evaluation, not enforcement) |
| 3. Enforcement | Art.62 | AI Office issues corrective measures, orders market withdrawal, takes emergency action | Yes |
| 4. Penalty | Art.99-101 | Commission or NCA imposes financial penalties for non-compliance | Yes (monetary) |
Art.62 is the bridge between the evaluation phase (Art.55) and penalties (Art.99-101). A corrective measure under Art.62 is not a penalty — it is an order to remedy non-compliance. Penalties follow if the provider fails to comply with the Art.62 corrective measure within the specified deadline.
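The cascade above can be sketched as a simple ordered escalation model. This is an illustrative sketch only; the stage names and the linear ordering are a simplification of the table, not terms from the Regulation:

```python
from enum import Enum
from typing import Optional


class EnforcementStage(Enum):
    MONITORING = "art_59_4_monitoring"
    EVALUATION = "art_55_evaluation"
    ENFORCEMENT = "art_62_corrective_measures"
    PENALTY = "art_99_101_penalties"


# Ordered escalation path. A corrective measure (stage 3) is an order
# to remedy, not a penalty; penalties (stage 4) follow only if the
# provider fails to comply with the Art.62 measure.
ESCALATION = [
    EnforcementStage.MONITORING,
    EnforcementStage.EVALUATION,
    EnforcementStage.ENFORCEMENT,
    EnforcementStage.PENALTY,
]


def next_stage(stage: EnforcementStage) -> Optional[EnforcementStage]:
    """Return the next escalation stage, or None at the end of the cascade."""
    idx = ESCALATION.index(stage)
    return ESCALATION[idx + 1] if idx + 1 < len(ESCALATION) else None


print(next_stage(EnforcementStage.EVALUATION))  # EnforcementStage.ENFORCEMENT
```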
Art.62(1): AI Office Information and Access Powers
Before issuing corrective measures, the AI Office must establish that a breach has occurred. Art.62(1) gives the AI Office the investigatory powers necessary to build that evidentiary record.
Information requests: The AI Office can require any GPAI model provider to submit:
- Technical documentation produced under Art.52 (architecture, training data sources, capability assessment)
- Adversarial testing records and methodologies submitted under Art.53
- Incident reports filed under Art.53 and Art.73
- Codes of practice commitments and compliance records under Art.56
- Any other documentation relevant to assessing compliance with Chapter V obligations
Access to model systems: The AI Office can require providers to grant access to their GPAI model systems — including training infrastructure, model weights (where necessary for capability evaluation), and inference environments — for the purpose of conducting or verifying adversarial testing under Art.53. This access can be exercised through the Scientific Panel (Art.61) where technical expertise is required.
Provider cooperation obligation: Providers are legally required to cooperate with Art.62(1) requests. Failure to respond within the AI Office's specified deadline, provision of materially incomplete information, or deliberate obstruction of access constitutes a separate infringement — in addition to any underlying Chapter V breach — and attracts penalties under Art.99.
NCA coordination: Where the information sought relates to a GPAI model's downstream deployment in high-risk AI systems (Annex III), the AI Office coordinates its information requests with the relevant NCAs under the Art.59 AI Board framework. This prevents duplicative requests and ensures the enforcement record is shared across the Chapter V (AI Office) and Chapter III (NCA) enforcement tracks.
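Because missing an Art.62(1) response deadline is itself a separate infringement, providers typically need to track these deadlines explicitly. A minimal sketch of such a tracker follows; the field names and the 30-day window are illustrative, since actual deadlines are set case-by-case by the AI Office:

```python
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class InformationRequest:
    """Illustrative tracker for an Art.62(1) information request."""
    requested_items: list[str]
    received_date: date
    response_days: int  # deadline window set by the AI Office

    @property
    def deadline(self) -> date:
        return self.received_date + timedelta(days=self.response_days)

    def is_overdue(self, today: date) -> bool:
        # Missing the deadline is a separate infringement under Art.99,
        # in addition to any underlying Chapter V breach.
        return today > self.deadline


req = InformationRequest(
    requested_items=["Art.52 technical documentation", "Art.53 testing records"],
    received_date=date(2025, 9, 1),
    response_days=30,
)
print(req.deadline)                        # 2025-10-01
print(req.is_overdue(date(2025, 10, 2)))   # True
```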
Art.62(2): Corrective Measures — Types and Proportionality
Art.62(2) is the core enforcement provision. Once the AI Office has established, through Art.55 evaluation or Art.62(1) investigation, that a GPAI provider has failed to comply with Chapter V obligations, it may issue corrective measures.
Types of corrective measures:
| Measure | Scope | Typical trigger |
|---|---|---|
| Cessation of infringement | Require provider to stop the non-compliant practice (e.g., cease deploying a model version that lacks required documentation) | Art.52 documentation gap, Art.53 testing non-compliance |
| Compliance action | Require provider to take specific positive steps within a deadline (e.g., complete missing adversarial testing, update technical documentation) | Incomplete Art.52/Art.53 compliance |
| Modification of model | Require provider to modify model capabilities, training configuration, or output constraints | Systemic risk finding where current model characteristics create disproportionate risk |
| Suspension of access | Require provider to suspend EU market access for a model version while compliance remediation is underway | Ongoing infringement where partial suspension preserves EU market integrity |
| Market withdrawal | Require provider to withdraw a GPAI model from the EU market | Persistent non-compliance, refusal to cooperate, or serious systemic risk posing immediate harm |
Proportionality obligation: The AI Office is required to issue corrective measures that are proportionate to the nature, gravity, and duration of the infringement. This is not a mere procedural formality: it is a substantive constraint on the AI Office's enforcement discretion, and proportionality is reviewable by the Court of Justice of the EU if a provider challenges a corrective measure.
Proportionality factors:
The AI Office must weigh:
- The severity of the compliance gap (missing documentation vs. uncorrected serious incident)
- The provider's prior compliance history (first-time vs. persistent infringer)
- The scope of harm potential (how many users and downstream deployers the non-compliant model affects)
- The provider's cooperation with the AI Office investigation
- Whether the provider self-reported the issue or was identified through external monitoring
- The feasibility of remediation within a given timeframe
Deadlines for compliance: Corrective measures specify a compliance deadline. Deadlines are calibrated to the nature of the required action: providing updated documentation may require 30 days; completing adversarial testing may require 60-90 days; model modification may require 6 months. The AI Office may grant extensions on reasoned application from the provider.
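The indicative windows above can be captured as a simple lookup. The day counts below are the illustrative figures from this discussion, not fixed statutory periods; actual deadlines are specified in each AI Office decision:

```python
from datetime import date, timedelta

# Indicative compliance windows from the discussion above (illustrative,
# not statutory): documentation 30 days, testing 60-90 days (upper end
# used here), model modification roughly 6 months.
TYPICAL_DEADLINE_DAYS = {
    "documentation_update": 30,
    "adversarial_testing": 90,
    "model_modification": 180,
}


def compliance_deadline(measure: str, issued: date) -> date:
    """Project a compliance deadline from the decision date."""
    return issued + timedelta(days=TYPICAL_DEADLINE_DAYS[measure])


print(compliance_deadline("documentation_update", date(2025, 10, 15)))  # 2025-11-14
```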
Art.62(3): Market Withdrawal — Conditions and Procedure
Market withdrawal is the most consequential corrective measure available under Art.62. It removes a GPAI model from the EU market entirely, and its effects extend to all downstream deployers who have integrated the model.
Conditions for market withdrawal:
Market withdrawal is reserved for cases where:
- The provider has persistently failed to comply with a prior corrective measure within the specified deadline
- The GPAI model poses a systemic risk that cannot be addressed through less restrictive measures (modification, suspension)
- The provider has fundamentally failed to cooperate with the AI Office investigation, making compliance assessment impossible
- The model's non-compliance creates an immediate and serious risk of harm to fundamental rights, public safety, or societal stability across the EU
Withdrawal vs. suspension: The AI Office distinguishes between market withdrawal (permanent — the model may not re-enter the EU market in its current form) and market suspension (temporary — the model is removed while compliance remediation is completed). Suspension is typically the first instrument; withdrawal follows if suspension is ineffective or the provider fails to complete remediation within the suspension period.
Downstream effects on deployers: Market withdrawal by the AI Office does not automatically terminate existing deployer contractual relationships with the GPAI provider — but it makes continued deployment of the withdrawn model a Chapter V infringement for the deployer. Deployers who continue using a withdrawn GPAI model after the withdrawal decision's effective date expose themselves to NCA enforcement under Chapter VI and to Art.99 penalties.
Legal challenge: Providers subject to a market withdrawal decision may challenge it before the Court of Justice of the EU. Art.62 decisions are reviewable for proportionality, procedural compliance (whether the AI Office followed the required procedural steps before issuing the decision), and compliance with fundamental rights (including the provider's right to conduct business under Art.16 of the EU Charter).
Art.62(4): Emergency Measures — Interim Action for Immediate Risk
Art.62(4) is the AI Office's emergency enforcement toolkit — available when a GPAI model poses an immediate, serious risk that cannot wait for the standard corrective measure procedure.
Standard vs. emergency procedure:
| Dimension | Standard (Art.62(2)) | Emergency (Art.62(4)) |
|---|---|---|
| Prior evaluation | Required (Art.55 process) | Not required — immediate risk displaces it |
| Provider consultation | Required (right to be heard) | Abbreviated — provider notified simultaneously with measure |
| Proportionality assessment | Full | Assessed on available evidence |
| Effective date | After compliance deadline | Immediate (or within 24-48 hours) |
| Duration | Until compliance | Interim pending full Art.62(2) procedure |
Triggering conditions for emergency measures: The AI Office may issue emergency measures where:
- A GPAI model is actively being exploited to cause harm to fundamental rights, safety, or public order
- The model has generated serious incidents (Art.73) at a frequency or severity indicating systemic risk that has not been captured by prior evaluation
- The Scientific Panel has issued an own-initiative alert (Art.61(4)(e)) indicating immediate systemic risk requiring regulatory action before a full evaluation can be completed
- An NCA has requested emergency AI Office action based on national-level evidence of immediate harm
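Any one of the four triggers suffices for interim action. A minimal check, with parameter names as illustrative labels for the triggers listed above:

```python
def emergency_trigger(
    active_exploitation: bool,
    incident_surge: bool,
    scientific_panel_alert: bool,
    nca_emergency_request: bool,
) -> bool:
    """Sketch of the Art.62(4) triggering logic: any single trigger
    suffices for interim emergency measures. Parameter names are
    illustrative labels, not terms from the Regulation."""
    return any([
        active_exploitation,
        incident_surge,
        scientific_panel_alert,
        nca_emergency_request,
    ])


print(emergency_trigger(False, False, True, False))  # True
```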
Scope of emergency measures: Emergency measures under Art.62(4) may include:
- Immediate suspension of EU market access (model taken offline or access blocked for EU users)
- Requirement to implement specific technical constraints within 24-72 hours (e.g., output filters, rate limits on high-risk query types)
- Notification obligations to downstream deployers (provider must immediately inform all EU deployers of the interim measure)
Transition to standard procedure: Emergency measures are interim by nature. The AI Office must initiate the standard Art.62(2) corrective measure procedure promptly after issuing emergency measures, and the emergency measure lapses or transitions into a standard corrective measure following the standard procedure's conclusion.
Art.62(5): Enforcement Boundary — AI Office vs. NCAs
One of the most significant institutional design choices in Chapter VI is the division of enforcement authority between the AI Office and national competent authorities. Art.62(5) clarifies how that boundary operates in practice.
AI Office exclusive jurisdiction: The AI Office has exclusive enforcement jurisdiction over Chapter V obligations — the GPAI model obligations in Art.51-56. No NCA can issue corrective measures against a GPAI provider on the basis of Art.51-56 non-compliance. This exclusivity prevents the fragmentation of GPAI regulation into 27 different national enforcement patterns.
NCA residual jurisdiction: NCAs retain enforcement jurisdiction over Chapter III high-risk AI system obligations (Art.8-27) even where those high-risk AI systems are built on GPAI model foundations. An NCA can enforce against a downstream deployer's failure to comply with Art.14 (human oversight) or Art.17 (quality management) even if the root cause is a GPAI model capability issue — but the NCA cannot directly compel the GPAI provider to change the model.
Coordination mechanism: When an NCA investigation reveals evidence of GPAI model non-compliance at the Chapter V level, Art.62(5) requires the NCA to refer the matter to the AI Office for enforcement action rather than attempting direct action against the GPAI provider. The AI Office then decides whether to open an Art.62 enforcement proceeding. This referral mechanism preserves NCA investigatory capacity while maintaining AI Office enforcement primacy.
Practical enforcement chain: For a deployed high-risk AI system (e.g., an employment screening tool built on a GPAI model):
1. NCA discovers that the employment screening tool's AI is making discriminatory decisions
2. NCA investigates and finds the root cause is a capability in the underlying GPAI model
3. NCA enforces against the deployer under Art.14 (human oversight failure) at national level
4. NCA simultaneously refers GPAI model findings to the AI Office
5. AI Office assesses whether the model capability issue constitutes a Chapter V breach
6. AI Office issues Art.62 corrective measures against the GPAI provider if Chapter V breach is confirmed
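The jurisdictional split in this chain can be expressed as routing logic. The obligation labels below are illustrative; the mapping is a simplified sketch of the Art.62(5) boundary, under which Chapter V obligations route to the AI Office exclusively and Chapter III obligations route to NCAs:

```python
from enum import Enum


class Authority(Enum):
    NCA = "national_competent_authority"
    AI_OFFICE = "ai_office"


def enforcement_authority(obligation: str) -> Authority:
    """Route an obligation to its enforcing authority.

    Simplified sketch: Chapter V GPAI model obligations (Art.51-56)
    belong to the AI Office exclusively; everything else here is
    treated as a Chapter III high-risk system obligation for NCAs.
    """
    chapter_v = {"art_52_documentation", "art_53_testing", "art_55_systemic_risk"}
    return Authority.AI_OFFICE if obligation in chapter_v else Authority.NCA


print(enforcement_authority("art_14_human_oversight"))  # Authority.NCA
print(enforcement_authority("art_53_testing"))          # Authority.AI_OFFICE
```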
Art.62(6): Cooperation with the European AI Board and NCAs
Art.62(6) establishes the information and coordination obligations that govern AI Office enforcement action vis-à-vis the AI Board and national authorities.
AI Board notification: The AI Office must notify the European AI Board of all Art.62 corrective measures as they are issued. This ensures that NCAs — as AI Board members — are immediately aware of AI Office enforcement actions affecting GPAI models deployed in their member states, enabling them to coordinate downstream enforcement without duplicating AI Office action.
NCA information sharing: Where an Art.62 corrective measure affects GPAI model deployment in specific member states, the AI Office provides the affected NCAs with the enforcement record underlying the corrective measure. NCAs use this record to calibrate their national enforcement against deployers using the affected model — they do not need to independently reconstruct the AI Office's evidentiary basis.
Cross-reference with NCA market withdrawal: If the AI Office orders market withdrawal of a GPAI model under Art.62(3), NCAs in all 27 member states must ensure that deployers in their jurisdiction comply with the withdrawal. The NCAs become the national execution layer for the AI Office's EU-wide withdrawal decision: they can take enforcement action against deployers who continue using a withdrawn model even though the NCAs themselves did not issue the withdrawal.
Art.62(7): Publication and Transparency
Art.62(7) requires the AI Office to publish Art.62 corrective measure decisions in a manner that makes the EU GPAI enforcement landscape visible to downstream deployers, other providers, and the public.
Mandatory publication: All Art.62 corrective measure decisions — including market withdrawal orders — are published in the Official Journal of the European Union and on the AI Office's website. Publication includes:
- The identity of the GPAI provider
- The model(s) subject to the measure
- The nature and scope of the corrective measure
- The deadline for compliance
- The legal basis and summary of the findings
Commercial sensitivity carve-out: The AI Office may redact from published decisions information that qualifies as trade secrets or commercially sensitive under Art.78 — for example, specific technical documentation content that was assessed during the investigation. The published decision identifies the existence and nature of the violation; it does not necessarily publish the complete investigation record.
Register of decisions: All Art.62 decisions are entered into the EU AI database (Art.60) under the relevant GPAI model entries, creating a permanently accessible record of enforcement action tied to the model's EUID. This means downstream deployers evaluating a GPAI model for integration can check the EUID record for prior enforcement actions before entering into provider agreements.
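A deployer-side due-diligence check against the EUID record might look like the following. The record structure is hypothetical; the real database schema and query interface will be established through implementing acts:

```python
from dataclasses import dataclass, field


@dataclass
class EUIDRecord:
    """Hypothetical local mirror of a model's Art.60 database entry."""
    euid: str
    enforcement_actions: list[str] = field(default_factory=list)


def safe_to_integrate(record: EUIDRecord) -> bool:
    """Conservative pre-integration check: any suspension or withdrawal
    on the record blocks integration pending further legal review."""
    blocking = {"market_withdrawal", "market_suspension"}
    return not blocking.intersection(record.enforcement_actions)


clean = EUIDRecord(euid="2025-GPAI-SR-0042")
flagged = EUIDRecord(
    euid="2025-GPAI-SR-0099",
    enforcement_actions=["market_suspension"],
)
print(safe_to_integrate(clean))    # True
print(safe_to_integrate(flagged))  # False
```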
CLOUD Act Implications for Art.62 Enforcement
Art.62 enforcement actions create a specific CLOUD Act intersection for GPAI providers whose infrastructure includes US-jurisdiction components.
Enforcement documentation as dual-jurisdiction data: The Art.62 investigation record — technical documentation, model evaluation results, corrective measure correspondence — is produced and held in both the provider's systems (potential CLOUD Act scope if US-incorporated) and the AI Office's systems (EU public authority data, not CLOUD Act scope). Where providers store enforcement-relevant documentation on US cloud infrastructure, that documentation sits within CLOUD Act jurisdiction during the period the AI Office enforcement proceeding is live.
Model weights in enforcement context: If the AI Office requires access to model weights for an Art.62(1) investigation (e.g., to verify capability claims made in Art.52 documentation), and those weights are stored on US-incorporated infrastructure, the CLOUD Act creates a theoretical channel through which the same weights could be subject to concurrent US government access requests. While this theoretical exposure does not change the provider's Art.62 obligations, it represents a structural compliance risk that EU-sovereign infrastructure directly eliminates.
Withdrawal and US-hosted inference: If the AI Office orders market withdrawal of a GPAI model under Art.62(3), and that model's inference infrastructure is hosted in US data centres serving EU users, the withdrawal order must be implemented at the API access layer to block EU user requests. For providers using US-hosted inference, implementing the withdrawal requires coordinating with the US-incorporated infrastructure operator — who may face CLOUD Act exposure for their own operational records of the model's EU deployment.
Structured data point for GPAI providers: Providers whose model training, inference, and documentation infrastructure is hosted on EU-sovereign platforms have a simpler Art.62 compliance posture: enforcement documentation sits entirely in EU-law jurisdiction, model weight access for AI Office evaluation is governed exclusively by EU data protection law, and market withdrawal implementation is managed within infrastructure subject only to EU authority.
Python Implementation: AI Office Enforcement Action Tracker
```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum
from typing import Optional
import uuid


class MeasureType(Enum):
    CESSATION = "cessation_of_infringement"
    COMPLIANCE_ACTION = "compliance_action"
    MODEL_MODIFICATION = "model_modification"
    MARKET_SUSPENSION = "market_suspension"
    MARKET_WITHDRAWAL = "market_withdrawal"
    EMERGENCY_INTERIM = "emergency_interim"


class EnforcementStatus(Enum):
    INVESTIGATION = "investigation"            # Art.62(1) — information gathering
    MEASURE_ISSUED = "measure_issued"          # Art.62(2) — corrective measure in effect
    COMPLIANCE_PENDING = "compliance_pending"  # Provider remediation in progress
    COMPLIED = "complied"                      # Provider has remediated
    NON_COMPLIED = "non_complied"              # Deadline missed — escalate to penalties
    WITHDRAWN = "withdrawn"                    # Market withdrawal executed
    CHALLENGED = "challenged"                  # Provider has filed CJEU challenge


class InfringementBasis(Enum):
    ART_52_DOCUMENTATION = "art_52_documentation_gap"
    ART_53_ADVERSARIAL_TESTING = "art_53_testing_non_compliance"
    ART_53_INCIDENT_REPORTING = "art_53_incident_reporting_failure"
    ART_54_NO_AUTHORISED_REP = "art_54_no_authorised_representative"
    ART_55_EVALUATION_OBSTRUCTION = "art_55_cooperation_failure"
    ART_56_COP_INADEQUACY = "art_56_code_of_practice_inadequacy"
    ART_51_SYSTEMIC_RISK_UNADDRESSED = "art_51_systemic_risk_unaddressed"


@dataclass
class CorrectiveMeasure:
    """Tracks an Art.62 corrective measure lifecycle."""

    # Identification
    measure_id: str = field(
        default_factory=lambda: f"CM-{str(uuid.uuid4())[:8].upper()}"
    )
    measure_type: MeasureType = MeasureType.COMPLIANCE_ACTION
    status: EnforcementStatus = EnforcementStatus.INVESTIGATION

    # Subject
    provider_name: str = ""
    model_name: str = ""
    model_euid: Optional[str] = None  # From Art.60 EU AI database

    # Infringement
    infringement_bases: list[InfringementBasis] = field(default_factory=list)
    systemic_risk_finding: bool = False
    prior_corrective_measures: int = 0  # Number of prior measures — affects proportionality

    # Timeline
    investigation_opened: Optional[date] = None   # Art.62(1) triggered
    measure_issued_date: Optional[date] = None    # Art.62(2) decision date
    compliance_deadline: Optional[date] = None    # Deadline to comply
    compliance_achieved_date: Optional[date] = None
    withdrawal_effective_date: Optional[date] = None

    # Emergency flag
    emergency_measure: bool = False  # Art.62(4)

    # Infrastructure context
    provider_eu_incorporated: bool = False
    documentation_on_eu_sovereign_infra: bool = False
    inference_on_eu_sovereign_infra: bool = False

    def days_until_deadline(self) -> Optional[int]:
        if self.compliance_deadline is None:
            return None
        return (self.compliance_deadline - date.today()).days

    def is_overdue(self) -> bool:
        if self.compliance_deadline is None or self.status == EnforcementStatus.COMPLIED:
            return False
        return date.today() > self.compliance_deadline

    def proportionality_factors(self) -> dict:
        severity_score = len(self.infringement_bases)
        if self.systemic_risk_finding:
            severity_score += 3
        if self.prior_corrective_measures > 0:
            severity_score += self.prior_corrective_measures * 2
        return {
            "infringement_bases": [b.value for b in self.infringement_bases],
            "systemic_risk_finding": self.systemic_risk_finding,
            "prior_measures": self.prior_corrective_measures,
            "severity_score": severity_score,
            "measure_proportionate": (
                self.measure_type == MeasureType.MARKET_WITHDRAWAL
                and severity_score >= 5
            )
            or (
                self.measure_type
                in [
                    MeasureType.CESSATION,
                    MeasureType.COMPLIANCE_ACTION,
                    MeasureType.MODEL_MODIFICATION,
                ]
                and severity_score < 5
            ),
        }

    def cloud_act_risk_profile(self) -> dict:
        risks = []
        if not self.provider_eu_incorporated:
            risks.append("Provider not EU-incorporated: CLOUD Act may reach provider entity")
        if not self.documentation_on_eu_sovereign_infra:
            risks.append("Documentation on non-EU infra: enforcement record in CLOUD Act scope")
        if not self.inference_on_eu_sovereign_infra:
            risks.append("Inference on non-EU infra: withdrawal implementation requires US operator coordination")
        return {
            "risks": risks,
            "low_exposure": len(risks) == 0,
            "recommendation": (
                "Low CLOUD Act exposure — EU-incorporated + EU-sovereign infrastructure"
                if len(risks) == 0
                else f"{len(risks)} CLOUD Act exposure factor(s) — review infrastructure jurisdiction"
            ),
        }

    def advance_status(self, new_status: EnforcementStatus) -> None:
        self.status = new_status
        if new_status == EnforcementStatus.WITHDRAWN:
            self.withdrawal_effective_date = date.today()
        elif new_status == EnforcementStatus.COMPLIED:
            self.compliance_achieved_date = date.today()

    def enforcement_summary(self) -> dict:
        return {
            "measure_id": self.measure_id,
            "provider": self.provider_name,
            "model": self.model_name,
            "euid": self.model_euid,
            "type": self.measure_type.value,
            "status": self.status.value,
            "emergency": self.emergency_measure,
            "days_until_deadline": self.days_until_deadline(),
            "overdue": self.is_overdue(),
            "proportionality": self.proportionality_factors(),
            "cloud_act_profile": self.cloud_act_risk_profile(),
        }


# Example: tracking an Art.62 enforcement proceeding
measure = CorrectiveMeasure(
    measure_type=MeasureType.COMPLIANCE_ACTION,
    provider_name="ExampleGPAI Corp",
    model_name="flagship-llm-v3",
    model_euid="2025-GPAI-SR-0042",
    infringement_bases=[
        InfringementBasis.ART_52_DOCUMENTATION,
        InfringementBasis.ART_53_ADVERSARIAL_TESTING,
    ],
    systemic_risk_finding=True,
    investigation_opened=date(2025, 9, 1),
    measure_issued_date=date(2025, 10, 15),
    compliance_deadline=date(2025, 12, 15),
    provider_eu_incorporated=True,
    documentation_on_eu_sovereign_infra=True,
    inference_on_eu_sovereign_infra=True,
)

print(f"Measure ID: {measure.measure_id}")
print(f"Days until deadline: {measure.days_until_deadline()}")
print(f"Proportionality: {measure.proportionality_factors()}")
print(f"CLOUD Act profile: {measure.cloud_act_risk_profile()['recommendation']}")
print(f"Summary: {measure.enforcement_summary()}")
```
What Art.62 Means for GPAI Providers
Art.62 defines the operational consequences of Chapter V non-compliance. Understanding its enforcement cascade shapes how providers should structure their compliance programmes.
Documentation quality directly affects enforcement exposure: The Art.62(1) investigation builds on the documentation record the provider has already produced under Art.52 and Art.53. Providers who maintain high-quality, complete documentation reduce the AI Office's need for extensive Art.62(1) information requests and present a compliance record that constrains AI Office enforcement discretion toward less restrictive measures.
Cooperation substantially affects outcome: The AI Office's proportionality assessment for corrective measures explicitly weighs provider cooperation. Providers who respond fully and promptly to Art.62(1) requests, even when they disagree with the AI Office's preliminary findings, are in a materially better position for the corrective measure calibration than providers who obstruct or delay the investigation.
Prior corrective measures escalate consequences: The proportionality framework treats prior corrective measures as aggravating factors. A second Art.62(2) corrective measure for the same provider is likely to be more severe than the first; a market withdrawal order following a prior corrective measure is substantially more supportable on proportionality grounds than a first-instance withdrawal.
Downstream deployers bear execution risk: Market withdrawal decisions under Art.62(3) create immediate operational risk for downstream deployers who have built products on the withdrawn GPAI model. Deployers who monitor the EU AI database (Art.60) for enforcement actions against their GPAI providers have advance warning that reduces the operational disruption of withdrawal orders.
Emergency measures come without warning: Art.62(4) emergency measures can take effect within 24-48 hours without the standard consultation procedure. For providers whose models attract serious incident reporting or Scientific Panel own-initiative alerts, maintaining a continuous compliance posture — not a point-in-time audit approach — is the only effective risk management strategy against emergency enforcement.
Art.62 Compliance Checklist
| # | Obligation | Who | Timing |
|---|---|---|---|
| 1 | Maintain Art.52 documentation in complete, current form accessible for AI Office investigation | GPAI provider | Ongoing |
| 2 | Maintain Art.53 adversarial testing records and incident reports for AI Office access | GPAI provider | Ongoing |
| 3 | Respond fully to Art.62(1) information requests within AI Office deadline | GPAI provider | On request |
| 4 | Cooperate with model access requests for AI Office evaluation activities | GPAI provider | On request |
| 5 | Implement corrective measures within compliance deadlines specified in Art.62(2) decisions | GPAI provider | Per decision |
| 6 | Notify all EU deployers immediately upon receiving market suspension or withdrawal decision | GPAI provider | On decision receipt |
| 7 | Monitor EU AI database EUID record for enforcement actions against models you deploy | GPAI deployer | Ongoing |
| 8 | Cease use of withdrawn GPAI models by withdrawal effective date | GPAI deployer | Per withdrawal decision |
| 9 | Store enforcement-relevant documentation on EU-sovereign infrastructure to minimise CLOUD Act exposure | GPAI provider / infra operator | Infrastructure planning |
| 10 | Implement Art.62(4) emergency measures within 24-48 hours of AI Office notification | GPAI provider | On emergency measure |
| 11 | Refer evidence of GPAI Chapter V non-compliance to AI Office (do not attempt direct NCA enforcement against GPAI providers) | NCA | On evidence discovery |
| 12 | Track Scientific Panel own-initiative alerts (Art.61(4)(e)) as early indicators of AI Office enforcement direction | GPAI provider | Ongoing |
| 13 | Maintain documented proportionality record for corrective measure challenges before CJEU | GPAI provider (legal team) | Ongoing |
Series Context: Chapter VI Governance Framework
| Article | Coverage | Post |
|---|---|---|
| Art.57 | National Competent Authorities — designation, tasks, independence | Art.57 guide |
| Art.58 | NCA enforcement powers — investigation, access, corrective measures | Art.58 guide |
| Art.59 | AI Board — composition, independence, NCA coordination | Art.59 guide |
| Art.60 | EU AI database — public registry, EUID governance, Commission management | Art.60 guide |
| Art.61 | Scientific Panel — independent experts, model evaluation, AI Office advisory | Art.61 guide |
| Art.62 | AI Office enforcement powers — corrective measures, market withdrawal, emergency action | This guide |
| Art.63 | Advisory forum — stakeholder input, governance consultation | Art.63 guide |
EU AI Act Art.62 analysis based on Regulation (EU) 2024/1689 as published in the Official Journal of the European Union. Applicable from 2 August 2025 per Art.113(3). Specific enforcement procedures, information request formats, and corrective measure templates will be established through AI Office implementing acts and procedural rules. This guide reflects the text of the Regulation as enacted.