EU AI Act Art.74: Market Surveillance and Control of High-Risk AI Systems — NCA Powers, Real-World Testing, and GPAI AI Office Jurisdiction (2026)
EU AI Act Article 74 is the enforcement backbone of the post-deployment compliance framework for high-risk AI systems. Where Art.72 and Art.73 impose positive obligations on providers and deployers, Art.74 empowers market surveillance authorities (MSAs) — national competent authorities in their market surveillance capacity — to actively verify compliance, test AI systems under actual operational conditions, access source code and documentation, take corrective measures, and coordinate enforcement across Member State borders.
For developers and compliance teams, Art.74 is the article that makes the rest of the AI Act's high-risk obligations enforceable. An NCA acting under Art.74 can compel system access, test your AI system in real-world conditions using your deployers' infrastructure, obtain your training data and technical documentation, issue binding corrective measures up to market withdrawal, and share enforcement findings with every other MSA in the Union. Understanding Art.74 is therefore not merely a compliance exercise — it is understanding the enforcement posture that your post-market monitoring system (Art.72), incident reporting infrastructure (Art.65, Art.73), and conformity documentation (Art.9, Art.11) will face in a regulatory investigation.
The Commission has positioned Art.74 as the operational interface between the AI Act's obligations and Regulation (EU) 2019/1020 (the Market Surveillance Regulation), which provides the procedural framework within which NCAs exercise their Art.74 powers. Understanding both instruments is necessary for anticipating what an MSA investigation looks like in practice.
Art.74 in the Post-Deployment Obligations Architecture
Art.74 closes the enforcement loop that Art.72 and Art.73 initiate. The information generated through provider post-market monitoring and deployer incident reporting flows toward the MSA, which uses Art.74 powers to act on it:
| Article | Obligation | Relation to Art.74 |
|---|---|---|
| Art.9 | Quality management system | MSA evaluates QMS adequacy as part of Art.74 documentary check |
| Art.11 | Technical documentation | MSA may access full technical documentation under Art.74(3) |
| Art.43 | Conformity assessment | MSA may re-evaluate conformity assessment basis under Art.74(2) |
| Art.58 | NCA investigative powers | Art.58 powers are the legal vehicle for Art.74 market surveillance activities |
| Art.64 | Access to data and documentation | Art.64 access rights apply within Art.74 market surveillance procedures |
| Art.65 | Serious incident reporting | Art.74 investigations may be triggered by Art.65 provider notifications |
| Art.66 | Market surveillance information exchange | Art.74 enforcement findings are shared via Art.66 RAPEX/ICSMS channels |
| Art.67 | Union safeguard procedure | Art.74 MSA measures that are challenged proceed via Art.67 Commission review |
| Art.72 | Post-market monitoring (provider) | MSA evaluates whether provider's Art.72 monitoring system is functioning |
| Art.73 | Deployer obligations | Deployer cooperation under Art.73(4) enables Art.74(5) real-world testing |
| Art.74 | Market surveillance and control | This guide |
Art.74(1): Market Surveillance Activities Under Regulation (EU) 2019/1020
Art.74(1) establishes that market surveillance authorities shall perform market surveillance activities and take measures under this Article in accordance with the framework provided by Regulation (EU) 2019/1020 (the Market Surveillance Regulation, "MSR"). The MSR applies to AI systems as it does to any other product regulated under Union harmonisation legislation, modified by the specific provisions of the AI Act where the two instruments diverge.
What the MSR framework provides. Regulation (EU) 2019/1020 gives MSAs the following baseline powers that apply directly to AI systems under Art.74:
- Documentary checks of technical documentation, conformity declarations, and conformity assessment results
- Physical and technical inspections of products, including testing and sampling
- Mystery shopping and test purchases
- Access to any software embedded in or used by the product
- On-site inspections of manufacturing facilities, warehouses, and operational environments
- Orders to operators to produce information, samples, and technical documentation
- Temporary prohibition of placing a product on the market or making it available
- Recall and market withdrawal of non-compliant products
- Destruction of non-compliant products in cases where the risk warrants it
These powers transfer directly to AI systems, with the AI Act's Art.74 provisions layering additional AI-specific powers on top of the MSR baseline.
Coordination mandate. Art.74(1) also establishes that MSAs shall coordinate and cooperate with market surveillance authorities of other Member States. This is not optional: an MSA that initiates an investigation of a high-risk AI system deployed across multiple Member States must notify and coordinate with the other MSAs where the system is deployed. The practical mechanism for this coordination is Art.66 (information exchange via RAPEX/ICSMS) and the AI Board consultation procedures.
Art.74(2): Full Market Surveillance — Compliance Assessment and Corrective Measures
Art.74(2) establishes the core scope of MSA market surveillance activity: authorities shall exercise full market surveillance of AI systems placed on the Union market or put into service, assess compliance with the requirements of this Regulation, and take measures where they find non-compliance.
What "full market surveillance" means. The scope covers:
- High-risk AI systems placed on the market (initial compliance assessment at time of placement)
- High-risk AI systems put into service (deployment-phase compliance including post-market monitoring and incident reporting obligations)
- Ongoing compliance during the operational lifetime of the system (not a one-time gate)
- Post-market monitoring system quality (whether the provider's Art.72 monitoring system is actually functioning)
Compliance dimensions assessed. An Art.74(2) compliance assessment may examine:
- Conformity assessment documentation and the basis for the provider's CE marking
- Technical documentation completeness and accuracy (Art.11)
- QMS design and operational effectiveness (Art.9)
- Training data governance (data quality requirements, Art.10)
- Post-market monitoring plan design and execution (Art.72)
- Incident reporting records (Art.73 deployer notifications, Art.65 provider MSA reports)
- Human oversight implementation (Art.14)
- Transparency and information obligations toward deployers and users (Art.13)
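The dimensions above can be tracked as a simple coverage check against the provider's documentation inventory. A minimal sketch; the dimension keys and the inventory format are illustrative assumptions, not a prescribed schema:

```python
# Hypothetical mapping of Art.74(2) assessment dimensions to the articles
# cited above; the keys are illustrative, not an official taxonomy.
ASSESSMENT_DIMENSIONS = {
    "conformity_assessment": "Art.43",
    "technical_documentation": "Art.11",
    "qms": "Art.9",
    "data_governance": "Art.10",
    "post_market_monitoring": "Art.72",
    "incident_reporting": "Art.73 / Art.65",
    "human_oversight": "Art.14",
    "transparency": "Art.13",
}

def documentation_gaps(inventory: set[str]) -> list[str]:
    """Return assessment dimensions for which no documentation is on file."""
    return sorted(d for d in ASSESSMENT_DIMENSIONS if d not in inventory)

# Example: an inventory covering only three dimensions leaves five gaps.
gaps = documentation_gaps({"qms", "technical_documentation", "conformity_assessment"})
```

A check like this is cheap to run as part of QMS internal audits, well before any MSA request arrives.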
Corrective measures. Where an MSA finds non-compliance under Art.74(2), it may:
- Order the provider to bring the AI system into compliance within a specified timeframe
- Restrict or prohibit making the system available on the market
- Order withdrawal of the system from service
- Refer the case to the AI Board or Commission where the finding has Union-wide implications
Art.74(3): Documentary Checks, Sampling, and Technical Testing
Art.74(3) specifies the concrete investigative tools available to MSAs: documentary checks and, where appropriate, technical testing of sampled AI systems for compliance with the applicable requirements.
Documentary checks. The first-line tool in any Art.74 investigation is the documentary review. An MSA will typically request:
- Full technical documentation (Art.11 documentation package)
- EU Declaration of Conformity (Art.47)
- Conformity assessment records (Art.43), including third-party assessments from notified bodies
- Quality management system documentation (Art.9)
- Post-market monitoring plan and monitoring data collected to date (Art.72)
- Incident reports filed by deployers (Art.73 notifications) and by the provider to the MSA (Art.65 notifications)
- Training data documentation and data governance records (Art.10)
- Instructions for use and transparency information provided to deployers (Art.13)
Providers must maintain these documents in a form that can be produced to an MSA on short notice. The 10-year retention period for technical documentation under Art.18 is specifically calibrated to the market surveillance lifecycle.
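The Art.18 retention window can be computed mechanically. A minimal sketch, assuming the 10-year period runs from the date the system was placed on the market:

```python
from datetime import date

RETENTION_YEARS = 10  # Art.18 technical documentation retention period

def retention_end(placed_on_market: date) -> date:
    """Last date the documentation must remain producible to an MSA."""
    try:
        return placed_on_market.replace(year=placed_on_market.year + RETENTION_YEARS)
    except ValueError:  # placement on 29 February of a leap year
        return placed_on_market.replace(
            year=placed_on_market.year + RETENTION_YEARS, day=28
        )

def within_retention(placed_on_market: date, today: date) -> bool:
    """True while the documentation must still be retained and producible."""
    return today <= retention_end(placed_on_market)
```

Wiring this into the document management system's deletion workflow prevents premature disposal of records an MSA could still lawfully request.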
Sampling and technical testing. Where documentary review is insufficient to assess compliance, the MSA may sample the AI system — obtaining access to a live or test instance — and conduct technical testing. Testing may evaluate:
- System performance against the claimed accuracy and reliability metrics
- Robustness to adversarial inputs
- Compliance with the Art.5 prohibitions (e.g., testing for subliminal manipulation)
- Output consistency with the provider's declared intended purpose (Art.13)
- Bias and discrimination characteristics for Annex III category systems
For software-only AI systems, "sampling" typically means access to the deployed system or a representative test environment, not physical product sampling.
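Testing against the provider's declared performance claims can be sketched as a metric comparison. The metric names, declared values, and tolerance below are hypothetical; the AI Act prescribes no specific deviation threshold:

```python
# Hypothetical performance claims from the provider's technical documentation.
DECLARED_METRICS = {"accuracy": 0.94, "false_positive_rate": 0.03}

def metric_deviations(observed: dict[str, float],
                      declared: dict[str, float],
                      tolerance: float = 0.02) -> dict[str, float]:
    """Return declared metrics whose observed value deviates by more than
    `tolerance` in absolute terms (tolerance is illustrative, not statutory)."""
    return {
        name: round(observed[name] - declared[name], 6)
        for name in declared
        if name in observed and abs(observed[name] - declared[name]) > tolerance
    }

# Example: accuracy deviates by -0.05 (flagged); FPR deviates by 0.005 (within tolerance).
deviations = metric_deviations(
    {"accuracy": 0.89, "false_positive_rate": 0.035}, DECLARED_METRICS
)
```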
Art.74(4): AI Office as Competent Authority for GPAI Models
Art.74(4) carves out general-purpose AI models (GPAI models) from national MSA jurisdiction: for GPAI models, the AI Office is the competent authority with market surveillance powers under this Regulation.
Why GPAI models require a separate track. GPAI models are typically provided by a small number of providers (primarily large foundation model companies) and deployed across all 27 Member States simultaneously. National MSA jurisdiction would create fragmented, potentially conflicting enforcement and would be impractical for models that are inherently cross-border infrastructure. The AI Office's supranational jurisdiction resolves this by creating a single enforcement point for foundation model compliance.
AI Office market surveillance powers for GPAI. Under Art.74(4) in conjunction with Art.62, the AI Office may:
- Conduct evaluations of GPAI models for systemic risk (Art.55 evaluations)
- Request model documentation, training data access, and technical information
- Issue corrective measures to GPAI model providers
- Re-classify a GPAI model on the basis of evaluation findings, including designating it as presenting systemic risk
- Coordinate with national MSAs for AI systems built on top of the GPAI model (the "downstream" investigation chain)
Downstream implications. An AI Office investigation of a GPAI model under Art.74(4) has direct consequences for downstream high-risk AI system providers who built on that model. If the AI Office finds systemic risk in a GPAI component, this triggers re-evaluation obligations for all downstream systems using that component — even if those downstream systems passed conformity assessment when the GPAI model was considered compliant.
Practical developer action. GPAI model users who build high-risk AI systems should:
- Track AI Office GPAI evaluation proceedings (published in the EU AI Office register)
- Maintain records of which GPAI model version was integrated into their system and when
- Ensure their QMS has a GPAI-component change management procedure that triggers re-evaluation when AI Office findings affect the integrated model
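The tracking and re-evaluation steps above can be sketched as a simple QMS record. The record fields and the model identifier are hypothetical, not a prescribed format:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class GPAIComponentRecord:
    model_name: str                          # hypothetical model identifier
    model_version: str                       # exact integrated version
    integrated_on: date                      # when it entered the system
    ai_office_finding: Optional[str] = None  # populated when a finding lands

def needs_reevaluation(record: GPAIComponentRecord) -> bool:
    """QMS change-management trigger: re-evaluate the downstream system once
    an AI Office finding affects the integrated model version."""
    return record.ai_office_finding is not None

rec = GPAIComponentRecord("foundation-model-x", "2.1", date(2026, 1, 15))
```

The essential point is version granularity: an AI Office finding about version 2.1 may not apply to 2.2, so the record must pin the exact version integrated.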
Art.74(5): Real-World Condition Testing With Deployer Cooperation
Art.74(5) authorises MSAs to test high-risk AI systems under actual conditions of use — in the field, with real deployers, processing real operational data — and specifies that this testing shall be conducted in cooperation with deployers.
What real-world testing involves. Unlike laboratory testing of a system instance, real-world condition testing assesses the AI system's behaviour in its actual operational environment:
- Testing the system as deployed by a specific deployer, not a test sandbox
- Using the deployer's actual data pipelines, user populations, and operational configurations
- Observing outputs against the real consequences they would produce in operational use
- Assessing the effectiveness of the human oversight arrangements actually in place (Art.14)
Deployer cooperation obligation. The Art.74(5) requirement that real-world testing be conducted "in cooperation with deployers" cross-references Art.73(4), which imposes a mandatory cooperation obligation on deployers in MSA investigations. In practice, real-world testing requires the deployer to:
- Provide access to the deployed system in its operational configuration
- Allow MSA personnel (or MSA-authorised auditors) to observe system outputs in real operational conditions
- Provide access to the deployer's own monitoring data, incident logs, and override records
- Designate a technical liaison for the duration of the testing exercise
Advance notice and operational protection. The MSR framework, applied through Art.74(1), allows MSAs to conduct testing with or without advance notice; the mystery shopping provisions permit unannounced test access. In practice, for high-risk AI systems with significant operational consequences, MSAs typically notify operators in advance to coordinate access and protect data subject rights under the GDPR.
Data protection during testing. Real-world testing necessarily involves the MSA accessing production data, including personal data processed by the AI system. MSAs exercising Art.74(5) powers are subject to GDPR constraints on the purpose limitation and data minimisation of the personal data they access. MSAs must have a legal basis under GDPR for their processing of personal data accessed during investigations (typically Art.6(1)(c) GDPR — compliance with a legal obligation).
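Purpose limitation and data minimisation can be supported technically, for example by handing the MSA test extracts carrying keyed pseudonyms instead of raw identifiers. A minimal sketch using HMAC-SHA256; whether pseudonymisation is acceptable in a given investigation is for the MSA and the provider's DPO to settle:

```python
import hashlib
import hmac

# Placeholder key for illustration; in practice, manage via a secrets store
# and rotate per investigation.
PSEUDONYM_KEY = b"replace-with-managed-secret"

def pseudonymise(identifier: str, key: bytes = PSEUDONYM_KEY) -> str:
    """Keyed one-way pseudonym for a direct identifier, so test extracts
    handed to the MSA carry stable pseudonyms rather than raw personal data."""
    return hmac.new(key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()[:16]
```

The keyed construction means the same input always maps to the same pseudonym (preserving cross-record correlation for the MSA's analysis) while the raw identifier is not recoverable without the key.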
Art.74(6): Source Code Access and Technical Information
Art.74(6) provides that where necessary and justified, MSAs may request access to the source code of the high-risk AI system. This is one of the most legally significant provisions in Art.74 for providers.
When source code access is "necessary and justified." The MSA must demonstrate necessity — source code access is not the default starting point for a documentary investigation. Typical triggers for source code requests:
- Documentary checks and technical testing have produced inconclusive results regarding a specific algorithmic behaviour
- The provider's technical documentation is insufficient to evaluate a specific compliance dimension (e.g., bias properties, subliminal manipulation risk under Art.5)
- A serious incident under Art.65 has been reported and the MSA needs to understand the system's decision logic to attribute causation
- The conformity assessment documentation does not adequately describe the system's actual algorithmic operation
Scope of source code access. Art.74(6) access extends beyond source code to any information necessary to assess compliance, including:
- Training code and data preprocessing pipelines
- Model architecture specifications and configuration files
- Testing and validation scripts used in the conformity assessment process
- Fine-tuning and post-training modification records
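A production for an Art.74(6) request can be scoped against an internal artifact map so that only the requested categories are handed over. A sketch with hypothetical category names and repository paths:

```python
# Hypothetical internal map of Art.74(6)-relevant artifact categories to
# repository paths; both the category names and paths are illustrative.
ART_74_6_SCOPE = {
    "source_code": ["src/"],
    "training_pipeline": ["pipelines/preprocess/", "train/"],
    "model_architecture": ["configs/model_architecture.yaml"],
    "validation_scripts": ["tests/conformity/"],
    "fine_tuning_records": ["records/fine_tuning/"],
}

def production_manifest(requested: list[str]) -> dict[str, list[str]]:
    """Scope the production to what the MSA actually requested; producing
    beyond the request scope is neither required nor advisable."""
    unknown = [c for c in requested if c not in ART_74_6_SCOPE]
    if unknown:
        raise ValueError(f"unmapped categories: {unknown}")
    return {c: ART_74_6_SCOPE[c] for c in requested}

manifest = production_manifest(["source_code", "model_architecture"])
```

Maintaining this map in advance turns a high-pressure request into a lookup exercise, and the explicit failure on unmapped categories forces a legal review before anything undocumented leaves the building.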
Confidentiality protections. Source code obtained by an MSA under Art.74(6) is subject to professional secrecy obligations under Art.78 of the AI Act. The MSA may not disclose trade secrets beyond what is necessary for enforcement purposes and may not share source code with competitors or the public. In cross-border investigations, source code shared with other MSAs under Art.66 information exchange procedures is subject to equivalent confidentiality protections in the receiving Member State.
CLOUD Act conflict. For providers whose source code repositories and training infrastructure are hosted on US cloud services, Art.74(6) source code requests create a potential dual-compellability scenario: the EU MSA requests access under Art.74(6), while a US government CLOUD Act production order could simultaneously compel the US cloud provider to produce the same source code. EU cloud infrastructure eliminates the US-government-access pathway while preserving MSA access under Art.74(6).
Art.74(7): Provider Cooperation Obligation
Art.74(7) imposes a mandatory cooperation obligation on providers (and, where applicable, notified bodies) in market surveillance activities. Providers must enable MSAs to carry out their Art.74 activities without obstruction.
What cooperation requires. In the context of an active Art.74 investigation, provider cooperation includes:
- Responding to MSA information requests within the timeframes specified
- Granting access to premises, systems, and personnel as required
- Not taking actions that impede or delay the investigation (e.g., deleting relevant logs, modifying system configurations to obscure non-compliance)
- Providing accurate and complete information — deliberately misleading an MSA is a separate compliance violation
- Designating a single point of contact for the investigation
Failure to cooperate as an independent violation. Art.74(7) non-compliance is actionable independently of any underlying substantive AI Act violation. A provider who refuses to produce technical documentation, denies MSA access to its system, or obstructs an investigation faces penalty exposure under Art.70 (EUR 15M or 3% of annual worldwide turnover for violations of provider obligations) even if the AI system itself is compliant.
Notified body cooperation. Where a high-risk AI system was assessed by a third-party notified body under Art.43, that notified body must also cooperate with MSA investigations affecting the conformity assessment it issued. This includes producing the conformity assessment records, test results, and any concerns the notified body documented during assessment.
Art.74(8): Cross-Border Market Surveillance Coordination
Art.74(8) establishes that where market surveillance authorities in different Member States have reason to believe that an AI system poses a risk, they shall cooperate with each other to share information and coordinate enforcement.
Triggers for cross-border coordination. An MSA in Member State A triggers Art.74(8) coordination when:
- The AI system under investigation is deployed in multiple Member States
- An Art.65 serious incident notification has been received from a deployer operating in multiple Member States
- The MSA's testing has identified risk characteristics that would affect users in other Member States
- The MSA intends to take a corrective measure (restriction or withdrawal) and this measure would affect markets in other Member States
The Art.66 information exchange mechanism. Cross-border coordination under Art.74(8) uses the RAPEX/ICSMS infrastructure established under Art.66. An MSA triggering cross-border coordination must notify the AI Board (as information recipient) and each MSA in an affected Member State, providing:
- The nature and scope of the risk identified
- The AI system's identification details (EUID from the EU AI Database, Art.60)
- The corrective measures the initiating MSA is considering
- An invitation to the other MSAs to submit observations within a specified timeframe
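The notification content can be modelled as a simple payload builder covering the four elements above. The field names, identifier format, and 15-day observation window are illustrative assumptions, not a prescribed Art.66 schema:

```python
from datetime import date, timedelta

def cross_border_notification(risk_summary: str, system_euid: str,
                              proposed_measures: list[str],
                              observation_days: int = 15) -> dict:
    """Assemble the notification content for the AI Board and affected MSAs.
    The 15-day default is an illustrative assumption, not a statutory deadline."""
    return {
        "risk": risk_summary,
        "system_euid": system_euid,  # EU AI Database identifier (Art.60)
        "proposed_measures": proposed_measures,
        "observations_due": (date.today()
                             + timedelta(days=observation_days)).isoformat(),
    }

# Hypothetical example payload.
note = cross_border_notification(
    "discriminatory outputs observed in deployed credit-scoring system",
    "EUID-2026-000123",  # hypothetical identifier format
    ["temporary restriction pending remediation"],
)
```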
Where the receiving MSA disagrees with the initiating MSA's corrective measure, the Union safeguard procedure under Art.67 applies — the disagreement is escalated to the Commission for resolution.
AI Board role in coordination. For cases where multiple Member States are coordinating, the AI Board may establish an ad hoc coordination group to manage the investigation. This avoids duplicative enforcement actions while ensuring that the provider faces a single coordinated investigation rather than parallel national investigations.
Art.74(9) and Art.74(10): Customs Cooperation at Union Borders
Art.74(9) and Art.74(10) address the importation of AI systems from third countries, requiring customs authorities and market surveillance authorities to cooperate to monitor compliance at Union borders.
The border control challenge for AI. Unlike physical goods, AI systems often cross Union borders as software — delivered via API, SaaS access, or downloadable deployment. Art.74(9)'s customs cooperation provisions are primarily relevant for AI systems embedded in physical hardware (e.g., computer vision systems in industrial equipment, embedded AI in consumer devices) that physically cross EU customs borders.
What customs cooperation involves. Under Art.74(9)-(10):
- Customs authorities shall provide MSAs with relevant information on high-risk AI systems detected during border inspections
- MSAs may instruct customs to hold AI-embedded products at the border pending compliance verification
- Customs declarations for products containing high-risk AI systems must identify the AI component (per Art.47 requirements for third-country providers)
Third-country providers. For AI systems placed on the EU market by providers established outside the EU, Art.74's enforcement mechanism relies on the authorised representative requirement (Art.22). The authorised representative bears the same cooperation obligations as a Union-based provider under Art.74(7). The EU AI Database registration (Art.60) provides MSAs with the authorised representative's contact information for investigation purposes.
CLOUD Act Interaction With Art.74 Market Surveillance
Art.74's powers create multiple CLOUD Act conflict scenarios for providers using US cloud infrastructure:
| Art.74 Power | US Cloud Scenario | Dual-Compellability Risk |
|---|---|---|
| Art.74(3) documentary check | Technical documentation stored in US cloud | EU MSA + US government can both compel access |
| Art.74(3) technical testing | AI model weights on US cloud platform | US government can access model weights independently of EU investigation |
| Art.74(5) real-world testing | Operational data processed on US cloud | Operational data in scope for both EU MSA investigation and CLOUD Act production orders |
| Art.74(6) source code access | Source code in US-hosted repository | Code accessible to both EU MSA and US government simultaneously |
| Art.74(8) cross-border sharing | EU MSA shares investigation findings | Investigation strategy visible to US government if shared infrastructure is used |
The practical mitigation for providers subject to Art.74 investigations is to host technical documentation, model infrastructure, training data, and source code on EU-jurisdiction infrastructure — removing the US-government-access pathway while preserving EU MSA access under Art.74.
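That assessment can start from a plain inventory audit. A sketch with a hypothetical asset inventory and jurisdiction labels:

```python
# Hypothetical asset inventory; the asset names and jurisdiction labels
# are illustrative placeholders.
ASSET_JURISDICTIONS = {
    "technical_documentation": "eu",
    "model_weights": "us",
    "training_data": "eu",
    "source_repository": "us",
}

def dual_compellability_exposure(assets: dict[str, str]) -> list[str]:
    """Assets reachable both by an EU MSA under Art.74 and, via the hosting
    provider, by a US CLOUD Act production order."""
    return sorted(name for name, jurisdiction in assets.items()
                  if jurisdiction != "eu")

exposed = dual_compellability_exposure(ASSET_JURISDICTIONS)
```

The output is a migration worklist: each flagged asset is one that an Art.74 investigation could expose to a second, non-EU compulsion pathway.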
Python MarketSurveillanceTracker Implementation
This implementation provides a compliance tracking layer for Art.74 market surveillance interactions:
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from enum import Enum
from typing import Optional
class MSARequestType(Enum):
DOCUMENTARY_CHECK = "documentary_check" # Art.74(3)
TECHNICAL_TESTING = "technical_testing" # Art.74(3)
REAL_WORLD_TESTING = "real_world_testing" # Art.74(5)
SOURCE_CODE_ACCESS = "source_code_access" # Art.74(6)
INFORMATION_REQUEST = "information_request" # Art.74(7)
CORRECTIVE_MEASURE = "corrective_measure" # Art.74(2)
class InvestigationStatus(Enum):
PENDING = "pending"
ACTIVE = "active"
DOCUMENTATION_PROVIDED = "documentation_provided"
TESTING_IN_PROGRESS = "testing_in_progress"
CORRECTIVE_ACTION_REQUIRED = "corrective_action_required"
CLOSED_COMPLIANT = "closed_compliant"
CLOSED_CORRECTIVE_MEASURE = "closed_corrective_measure"
@dataclass
class MSARequest:
request_id: str
request_type: MSARequestType
initiating_msa: str # e.g., "BNetzA (DE)", "CNIL (FR)"
received_at: datetime
deadline: Optional[datetime] # Art.74(7): MSA may specify response timeframe
description: str
cross_border: bool = False # Art.74(8): involves multiple Member States
gpai_component: bool = False # Art.74(4): AI Office jurisdiction for GPAI
response_provided_at: Optional[datetime] = None
status: InvestigationStatus = InvestigationStatus.PENDING
@dataclass
class MarketSurveillanceTracker:
provider_id: str
system_id: str
requests: list[MSARequest] = field(default_factory=list)
def record_msa_request(
self,
request_id: str,
request_type: MSARequestType,
initiating_msa: str,
description: str,
deadline: Optional[datetime] = None,
cross_border: bool = False,
gpai_component: bool = False,
) -> MSARequest:
"""Record incoming Art.74 MSA request and compute response obligations."""
request = MSARequest(
request_id=request_id,
request_type=request_type,
initiating_msa=initiating_msa,
received_at=datetime.now(),
deadline=deadline or self._default_deadline(request_type),
description=description,
cross_border=cross_border,
gpai_component=gpai_component,
)
self.requests.append(request)
return request
def _default_deadline(self, request_type: MSARequestType) -> datetime:
"""Derive default response deadline by request type (MSR Art.14 framework)."""
if request_type == MSARequestType.CORRECTIVE_MEASURE:
return datetime.now() + timedelta(days=3)
elif request_type in (MSARequestType.SOURCE_CODE_ACCESS, MSARequestType.REAL_WORLD_TESTING):
return datetime.now() + timedelta(days=14)
return datetime.now() + timedelta(days=10)
def record_response(self, request_id: str) -> Optional[MSARequest]:
"""Record that provider has responded to an MSA request (Art.74(7))."""
for req in self.requests:
if req.request_id == request_id:
req.response_provided_at = datetime.now()
req.status = InvestigationStatus.DOCUMENTATION_PROVIDED
return req
return None
def overdue_requests(self) -> list[MSARequest]:
"""Return requests where deadline has passed and no response provided."""
now = datetime.now()
return [
r for r in self.requests
if r.deadline and now > r.deadline
and r.response_provided_at is None
and r.status not in (
InvestigationStatus.CLOSED_COMPLIANT,
InvestigationStatus.CLOSED_CORRECTIVE_MEASURE,
)
]
def active_cross_border_investigations(self) -> list[MSARequest]:
"""Art.74(8): Return investigations involving multiple Member States."""
return [r for r in self.requests if r.cross_border and r.status == InvestigationStatus.ACTIVE]
def gpai_ai_office_requests(self) -> list[MSARequest]:
"""Art.74(4): Return requests from AI Office for GPAI component issues."""
return [r for r in self.requests if r.gpai_component]
def pending_source_code_requests(self) -> list[MSARequest]:
"""Art.74(6): Return outstanding source code access requests."""
return [
r for r in self.requests
if r.request_type == MSARequestType.SOURCE_CODE_ACCESS
and r.response_provided_at is None
]
def investigation_summary(self) -> dict:
"""Art.74 compliance dashboard for provider QMS integration."""
return {
"provider_id": self.provider_id,
"system_id": self.system_id,
"total_requests": len(self.requests),
"by_type": {
t.value: len([r for r in self.requests if r.request_type == t])
for t in MSARequestType
},
"overdue": len(self.overdue_requests()),
"cross_border_active": len(self.active_cross_border_investigations()),
"gpai_ai_office": len(self.gpai_ai_office_requests()),
"source_code_pending": len(self.pending_source_code_requests()),
"open_corrective_measures": len([
r for r in self.requests
if r.status == InvestigationStatus.CORRECTIVE_ACTION_REQUIRED
]),
"generated_at": datetime.now().isoformat(),
}
Art.74 in the Series: Chapter VIII Market Surveillance
| Article | Topic | Status |
|---|---|---|
| Art.57 | National Competent Authorities | Guide |
| Art.58 | NCA Investigative Powers | Guide |
| Art.59 | European AI Board | Guide |
| Art.60 | EU AI Database | Guide |
| Art.61 | Scientific Panel | Guide |
| Art.62 | AI Office Enforcement Powers | Guide |
| Art.63 | Advisory Forum | Guide |
| Art.64 | Access to Data and Documentation | Guide |
| Art.65 | Serious Incident Reporting | Guide |
| Art.66 | Market Surveillance Information Exchange | Guide |
| Art.67 | Union Safeguard Procedure | Guide |
| Art.68 | AI Regulatory Sandboxes | Guide |
| Art.69 | Codes of Conduct | Guide |
| Art.70 | Penalties | Guide |
| Art.71 | Exercise of the Delegation | Guide |
| Art.72 | Post-Market Monitoring | Guide |
| Art.73 | Obligations of Deployers | Guide |
| Art.74 | Market Surveillance and Control | This guide |
| Art.75 | Mutual Assistance and GPAI Supervision | Guide |
10-Item Art.74 Compliance Checklist
Use this checklist to verify your Art.74 market surveillance readiness posture:
- Technical documentation package — Full Art.11 technical documentation is maintained, version-controlled, and producible to an MSA on short notice. All documentation is accessible to the designated compliance point of contact without requiring IT escalation.
- Incident log producibility — Provider incident logs (Art.72 post-market monitoring data) and deployer incident records (Art.73 notifications received) are retrievable in a structured format that can be provided to an MSA within the typical 10-day response window under Art.74(7).
- Real-world testing preparedness — Provider has an established procedure for granting MSA access to the system in real operational conditions (Art.74(5)), including a designated technical liaison, an access provisioning workflow, and a GDPR legal basis for the MSA's data access during testing.
- Source code access protocol — Provider has a documented procedure for complying with Art.74(6) source code requests, including identifying the scope of code to be produced, designating personnel with repository access, and preserving trade secret confidentiality under Art.78.
- Cooperation obligation designation — Provider has designated a single point of contact for Art.74(7) cooperation, with documented authority to commit the organisation to cooperation obligations and escalation paths to legal and technical teams.
- GPAI component tracking — Where the AI system integrates a GPAI model, provider tracks which model version is integrated, monitors AI Office evaluation proceedings for that model (Art.74(4)), and has a re-evaluation procedure triggered by AI Office findings affecting the GPAI component.
- Cross-border investigation procedure — Where the AI system is deployed in multiple Member States, provider has procedures for responding to Art.74(8) cross-border investigations, including designated contacts for each Member State's MSA and a process for ensuring consistent information production across jurisdictions.
- CLOUD Act infrastructure assessment — All technical documentation, model infrastructure, training data, and source code repositories have been assessed for CLOUD Act exposure. Sensitive operational data is hosted on EU-jurisdiction infrastructure to eliminate dual-compellability scenarios.
- Customs documentation readiness — For AI systems embedded in physical products exported to or placed on the EU market from third countries, Art.47 Declaration of Conformity and Art.60 EU AI Database registration details are available for customs authority inspection under Art.74(9).
- Corrective measure response capacity — Provider has a documented emergency procedure for responding to Art.74(2) corrective measures (market withdrawal, restriction orders) within the specified timeframe, including designated decision-making authority, communication procedures for affected deployers, and evidence preservation protocols.