EU AI Act Art.75: Market Surveillance of General-Purpose AI Models — Developer Guide (2026)
EU AI Act Article 75 establishes specific market surveillance powers for cases where a national market surveillance authority (MSA) or the AI Office investigates a high-risk AI system that incorporates a general-purpose AI (GPAI) model component. Whereas Art.74(2)(b) grants MSAs a general right to demand access to algorithms and source code from high-risk AI system providers, Art.75 governs what happens when the AI system's intelligence core is a GPAI model — which may be developed by a different entity, hosted on external API infrastructure, and subject to parallel AI Office jurisdiction.
The Art.75 distinction matters architecturally: a developer who builds a high-risk AI system on top of a commercial GPAI API (GPT-4 via Azure OpenAI, Claude via AWS Bedrock, or a Llama derivative on EU-hosted inference) cannot fully satisfy an Art.74(2)(b) algorithm access request by themselves. The GPAI model provider has the source code, the training data, and the capability evaluation results. Art.75 is the mechanism by which this multi-party access problem is resolved — through AI Office coordination, controlled technical review environments, and structured evaluation protocols that allow algorithm assessment without full source code handover.
For developers building on GPAI model APIs, Art.75 defines three obligations: what you must document about the GPAI component in your Annex IV technical file, how you must facilitate AI Office or MSA access to the GPAI provider when your system is investigated, and what the GPAI provider must make available to assessors in a controlled review environment. Understanding Art.75 before system deployment shapes decisions about which GPAI providers to use, which contractual access rights to negotiate, and where model inference infrastructure should be hosted.
This guide covers Art.75(1)–(6) in full, the Art.75 × Art.74(2)(b) procedural distinction, the AI Office coordination pipeline, Scientific Panel involvement under Art.75(5), CLOUD Act jurisdiction risk for GPAI model weights on US-hosted infrastructure, Python implementation for GPAIModelEvaluationRequest and ControlledReviewSession, and the 40-item Art.75 compliance checklist.
Art.75 became applicable on 2 August 2026 as part of the Chapter VIII market surveillance framework. From that date, any high-risk AI system incorporating a GPAI component is subject to Art.75 oversight procedures in addition to Art.74 general MSA powers.
Art.75 at a Glance
| Provision | Content | Developer Impact |
|---|---|---|
| Art.75(1) | MSAs coordinate with AI Office when GPAI model is part of investigated high-risk AI system | Identify GPAI components in Annex IV; facilitate AI Office handoff |
| Art.75(2)(a) | AI Office may request GPAI provider evaluation documentation | Maintain evaluation records in MSA-accessible format |
| Art.75(2)(b) | AI Office may arrange controlled technical review for algorithm access | Prepare controlled review environment; negotiate GPAI provider contract clause |
| Art.75(3) | MSAs may request GPAI information from AI Office when direct access unavailable | Downstream developers: contract GPAI provider facilitation obligation |
| Art.75(4) | AI Office may conduct full GPAI model investigation under Art.74 powers | GPAI providers above systemic risk threshold: direct AI Office investigation exposure |
| Art.75(5) | Scientific Panel (Art.66) assists AI Office in GPAI technical evaluation | GPAI providers: Scientific Panel evaluations may trigger Art.75(4) investigation |
| Art.75(6) | Confidentiality obligations apply to information obtained in Art.75 procedures | Trade secret protection during controlled review (limited by Art.70(3)) |
Art.75(1): MSA Coordination Mandate When GPAI Is Involved
When a national MSA initiates an investigation of a high-risk AI system under Art.74 and identifies that the system incorporates a GPAI model component, Art.75(1) requires the MSA to coordinate with the AI Office before exercising Art.74(2)(b) source code access rights against the GPAI provider.
This coordination requirement reflects the institutional division established in Art.64-70: national MSAs have jurisdiction over high-risk AI systems deployed in their member state; the AI Office has exclusive jurisdiction over GPAI models for the purposes of systemic risk assessment and Chapter V (Art.51-56) enforcement. Art.75(1) is the procedural bridge between these two jurisdictions when they overlap in a single investigation.
What MSA Coordination Means Operationally
When an MSA investigating a high-risk AI system (e.g., a clinical decision support tool using a frontier GPAI API as its reasoning engine) encounters a GPAI component:
- MSA notifies AI Office with description of the GPAI model, provider, and the specific capability under investigation
- AI Office assesses jurisdiction: Is the GPAI component already under active Art.53 review? Is there a Code of Practice adequacy assessment in progress under Art.56?
- AI Office coordinates: Either provides existing evaluation results (if GPAI model is already assessed), conducts new evaluation, or delegates back to MSA with evaluation framework
- MSA receives AI Office output: Technical findings on the GPAI component that the MSA uses in the overall high-risk AI system assessment
Developer obligation: Annex IV technical documentation for a high-risk AI system must identify all GPAI components — provider, model name, version, API endpoint, and specific capabilities used. A technical file that lists only "external LLM API" without naming the GPAI provider creates an immediate documentation gap that an Art.74 investigation will expose and that must be closed before Art.75(1) coordination can proceed.
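The Art.75(1) notification step in the coordination flow above can be sketched as a minimal record. All field names and values here are hypothetical, illustrating the information an MSA would pass to the AI Office, not any official schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GPAIComponentNotification:
    """Hypothetical Art.75(1) notification payload from an MSA to the AI Office."""
    msa_member_state: str
    high_risk_system_id: str          # Annex IV system identifier
    gpai_provider: str
    gpai_model: str                   # model name + version
    capability_under_investigation: str

notification = GPAIComponentNotification(
    msa_member_state="DE",
    high_risk_system_id="annex-iv-2026-0042",
    gpai_provider="ExampleAI GmbH",
    gpai_model="example-lm-3 v3.1",
    capability_under_investigation="clinical reasoning summaries",
)
```

A frozen dataclass keeps the notification immutable once sent, which matches its role as a point-in-time record in the coordination file.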
MSA Jurisdiction vs AI Office Jurisdiction
| Assessment Subject | Primary Authority | Art.75 Role |
|---|---|---|
| High-risk AI system behaviour | National MSA | — |
| GPAI component capabilities | AI Office | Coordination mechanism |
| GPAI systemic risk threshold | AI Office | Exclusive jurisdiction (Art.51-53) |
| GPAI in high-risk AI system | MSA + AI Office | Art.75 joint procedure |
| GPAI in non-high-risk system | AI Office only | Art.74 does not apply |
| GPAI self-standing model | AI Office | Art.75(4) direct investigation |
Art.75(2): Specific Access Rights for GPAI Model Evaluation
Art.75(2) defines the procedural access rights available to the AI Office when evaluating GPAI models. These rights are more specific than the general Art.74(2)(b) provision and reflect the unique technical characteristics of foundation models: they cannot be fully assessed by reviewing static source code because capabilities emerge from training, and risks depend on how the model is accessed and prompted.
Art.75(2)(a): GPAI Evaluation Documentation
The AI Office may require GPAI providers to produce:
- Capability evaluations conducted under Art.53(1)(a) (adversarial testing, red-teaming results, benchmarks)
- Training methodology documentation sufficient to assess how the model was developed and validated
- Systemic risk assessment results where applicable (Art.53(1)(c) cybersecurity evaluation, Art.53(1)(d) energy efficiency)
- Code of Practice compliance documentation (Art.56 CoP adherence records if applicable)
- GPAI technical file submitted to the AI Office under Art.53(1)(e)
- Evaluation benchmarks and test results characterising model capabilities at the relevant version
- Incident reports submitted under Art.53(1)(b) for any past serious incidents
Timeline: AI Office documentation requests under Art.75(2)(a) typically specify a 10 business day response window — the same standard applied by national MSAs under Art.74. GPAI providers must maintain evaluation documentation in a format that enables production within this deadline.
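A documentation package can be checked for completeness before the 10-business-day window starts running. The record index below is a hypothetical sketch, not a mandated format:

```python
# Hypothetical index of mandatory records in an Art.75(2)(a) documentation package.
REQUIRED_RECORDS = {
    "Art.53(1)(a)": "adversarial testing / red-teaming results",
    "Art.53(1)(b)": "serious incident reports",
    "Art.53(1)(c)": "cybersecurity evaluation",
    "Art.53(1)(d)": "energy efficiency metrics",
    "Art.53(1)(e)": "GPAI technical file",
}

def missing_records(produced: set) -> list:
    """Which mandatory records are absent from a documentation response."""
    return [ref for ref in REQUIRED_RECORDS if ref not in produced]

print(missing_records({"Art.53(1)(a)", "Art.53(1)(e)"}))
# → ['Art.53(1)(b)', 'Art.53(1)(c)', 'Art.53(1)(d)']
```

Running this check as part of the retrieval test (checklist item 15 below) surfaces gaps before the AI Office does.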
Art.75(2)(b): Controlled Technical Review Environment
Art.75(2)(b) is the provision that most directly addresses the "algorithm access" challenge for foundation models. Rather than requiring full source code handover — which would be disproportionate for a model trained on trillions of tokens and representing potentially billions in R&D — the AI Office may arrange a controlled technical review in which the GPAI provider demonstrates model capabilities and behaviour to AI Office assessors or Scientific Panel members in a supervised environment.
The controlled review environment has several characteristics:
Location and access: The review may occur at the GPAI provider's EU infrastructure, at a designated EU technical evaluation facility, or via secure API access to a dedicated evaluation instance. In this API-access mode (informally, "API keys for algorithm access"), the AI Office receives time-limited, monitored access to the model through dedicated API credentials that allow behavioural evaluation without exposing the underlying weights or source code.
Scope of evaluation: The controlled review allows assessors to:
- Query the model with evaluation prompts designed to test specific capability claims
- Verify that safety mitigations perform as documented (content filtering, refusal behaviours)
- Test robustness under adversarial prompting (CBRN content, manipulation attempts, as per Art.53(1)(a))
- Assess cybersecurity posture under prompt injection and model extraction attempts (Art.53(1)(c))
- Evaluate energy efficiency under representative load (Art.53(1)(d))
- Validate that actual model behaviour matches technical documentation claims
Trade secret protection during access: AI Office assessors operating in a controlled review under Art.75(2)(b) are bound by Art.75(6) confidentiality obligations (read with Art.70). They may not retain copies of model responses beyond the evaluation period, may not publish raw evaluation results without redacting competitively sensitive data, and may not share evaluation access with parties outside the Scientific Panel.
Developer impact for downstream builders: If you build a high-risk AI system using a GPAI API, your contract with the GPAI provider must include a clause requiring the provider to facilitate Art.75(2)(b) controlled review when requested by the AI Office in connection with an investigation of your AI system. Failure to include this clause may make it impossible to satisfy Art.21 cooperation obligations when an Art.74 investigation demands access to the GPAI component.
Art.75(3): MSA Information Access via AI Office Coordination
Art.75(3) addresses the practical gap that emerges when a national MSA needs GPAI component information for a high-risk AI system investigation but cannot directly compel the GPAI provider (which may not be established in that member state) to produce it.
Under Art.75(3), the national MSA may request equivalent information from the AI Office, which — having investigative powers over all GPAI providers regardless of EU establishment — can obtain and relay the information needed for the MSA's investigation.
National MSA Investigation
→ MSA identifies GPAI component in high-risk AI system
→ MSA requests Art.75(3) coordination from AI Office
→ AI Office contacts GPAI provider (Art.75(2)(a) demand)
→ GPAI provider responds to AI Office within 10 business days
→ AI Office relays relevant findings to requesting MSA
→ MSA completes high-risk AI system conformity assessment
Key limitation: Art.75(3) does not allow the MSA to circumvent GPAI provider confidentiality protections by routing through the AI Office. Information relayed under Art.75(3) is subject to Art.70 confidentiality obligations and trade secret protections. The MSA receives evaluation findings relevant to the high-risk AI system investigation — not raw training data or model weights.
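The relay pipeline above can be tracked as an ordered set of stages. The enum below is an illustrative sketch of internal case tracking, not a procedure defined in the Act:

```python
from enum import Enum
from typing import Optional

class RelayStage(Enum):
    """Hypothetical tracking states for an Art.75(3) MSA-via-AI-Office request."""
    MSA_REQUEST_SENT = 1
    AI_OFFICE_DEMAND_ISSUED = 2     # Art.75(2)(a) demand to the GPAI provider
    PROVIDER_RESPONSE_RECEIVED = 3  # due within 10 business days
    FINDINGS_RELAYED_TO_MSA = 4     # Art.70 confidentiality applies to the relay
    MSA_ASSESSMENT_COMPLETE = 5

def next_stage(current: RelayStage) -> Optional[RelayStage]:
    """Advance the request one step; None once the MSA assessment is complete."""
    members = list(RelayStage)
    idx = members.index(current)
    return members[idx + 1] if idx + 1 < len(members) else None
```

Because the stages are strictly ordered, a compliance calendar can derive "who acts next" from the current stage alone.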
Art.75(4): AI Office Direct GPAI Investigation
When the AI Office determines that a GPAI model may present systemic risk — whether triggered by Art.75(1) MSA coordination, Art.53(1)(b) incident reports, or the AI Office's own-initiative assessment — Art.75(4) confirms that the AI Office may conduct a full investigation of the GPAI model using Art.74 market surveillance powers applied at the GPAI model level.
The AI Office's investigative powers under Art.75(4) parallel national MSA powers but apply to GPAI models directly:
| Power | National MSA (Art.74) | AI Office (Art.75(4)) |
|---|---|---|
| Documentation access | High-risk AI system Annex IV | GPAI technical file + Art.53 evaluations |
| Algorithm access | High-risk AI system source code | GPAI model via controlled review (Art.75(2)(b)) |
| Physical access | Provider premises in member state | AI Office coordinates with any MSA for EU premises |
| Corrective measures | Use restriction, market withdrawal (Art.74(3)) | Corrective action on GPAI provider + Chapter V enforcement |
| Provisional measures | Art.74(9) for imminent serious risk | Art.75(4) + Art.74(9) for GPAI systemic risk emergency |
| Cross-EU notification | Via RAPEX (Art.74(7)) | AI Office notifies all NCAs and MSAs directly |
| Non-cooperation fine | Art.99(5) €15M/3% | Art.99(5) €15M/3% (same provision) |
Art.75(5): Scientific Panel Role in GPAI Algorithm Assessment
Art.75(5) formalises the role of the Scientific Panel (Art.66) in supporting AI Office technical evaluations under Art.75. The Scientific Panel's mandate in this context includes:
- Conducting capability evaluations of GPAI models on behalf of the AI Office
- Advising on evaluation methodology: appropriate benchmarks, red-teaming protocols, and test sets for specific GPAI capabilities
- Reviewing controlled review results to produce independent assessment conclusions
- Identifying systemic risk threshold indicators that should trigger Art.75(4) full investigation
Scientific Panel Evaluation Toolkit
| Evaluation Type | What It Tests | Art.75(2)(b) Mechanism |
|---|---|---|
| Adversarial probing | CBRN content elicitation resistance | Controlled API access with offensive prompts |
| Manipulation resistance | Persuasion, impersonation, disinformation | Automated red-teaming via evaluation API |
| Autonomous capability assessment | Agent-mode task completion, code execution | Sandboxed execution environment |
| Cybersecurity posture | Model extraction resistance, prompt injection | Security evaluation framework (Art.53(1)(c)) |
| Energy efficiency | Compute per query, training carbon footprint | Provider-supplied metrics + audit verification |
| Factual accuracy | Hallucination rate on standardised benchmarks | Public benchmark evaluation + own test set |
The Scientific Panel findings under Art.75(5) are non-binding advisories to the AI Office. However, a negative Scientific Panel evaluation creates strong presumptive evidence of systemic risk under Art.51(2), which triggers mandatory Art.53 compliance obligations and supports Art.75(4) full investigation initiation.
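The advisory-but-consequential character of Panel findings can be expressed as a simple mapping. The verdict labels and actions below are hypothetical illustrations of the escalation logic, not AI Office terminology:

```python
def systemic_risk_action(panel_verdict: str) -> str:
    """Hypothetical mapping from a Scientific Panel conclusion to an AI Office next step.
    Findings are non-binding (Art.75(5)); a negative verdict supports, rather than
    mandates, an Art.75(4) investigation via the Art.51(2) presumption."""
    actions = {
        "no_concern": "Close evaluation; relay findings to requesting MSA.",
        "methodology_gap": "Issue Art.75(2)(a) supplementary documentation demand.",
        "negative": (
            "Treat as presumptive systemic risk evidence (Art.51(2)); "
            "initiate Art.75(4) full investigation."
        ),
    }
    return actions.get(panel_verdict, "Escalate to AI Office case officer.")
```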
Art.75 × Art.74(2)(b): Powers vs Procedures
The relationship between Art.74(2)(b) and Art.75 is best understood as the same access right, exercised through different procedures depending on whether the subject is a high-risk AI system or a GPAI model:
| Dimension | Art.74(2)(b) | Art.75 |
|---|---|---|
| Subject | High-risk AI system source code and algorithms | GPAI model algorithms and training data |
| Authority | National MSA | AI Office (± national MSA coordination) |
| Access mechanism | Direct demand to AI system provider | Controlled review environment; AI Office intermediation |
| Who responds | High-risk AI system developer | GPAI model provider |
| When triggered | Any MSA investigation of high-risk AI system | GPAI component embedded in investigated system; GPAI systemic risk concern |
| Trade secrets | Art.70 applies | Art.75(6) + Art.70 apply |
| Enforcement backstop | Art.99(5) €15M/3% | Art.99(5) €15M/3% |
| Cross-border mechanism | National MSA territory | AI Office pan-EU jurisdiction |
Developer action for high-risk AI systems using GPAI APIs: Document in your Annex IV technical file:
- The GPAI provider identity, model name, version, and API endpoint
- The specific GPAI capabilities deployed in your system (text generation, summarisation, code completion, etc.)
- The Art.75(1) coordination pathway — that you will provide MSA with GPAI provider contact details upon request
- The contractual basis for Art.75(2)(b) controlled review facilitation (copy of relevant contract clause)
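The four documentation items above can be captured as a structured technical-file entry. This dataclass is an illustrative sketch of one way to keep the record machine-checkable; the field names are assumptions, not Annex IV wording:

```python
from dataclasses import dataclass

@dataclass
class AnnexIVGPAIComponent:
    """Hypothetical Annex IV technical-file entry for one GPAI component."""
    provider: str
    model_name: str
    model_version: str
    api_endpoint: str
    capabilities_used: list            # e.g. ["summarisation", "code completion"]
    controlled_review_clause_ref: str  # contract clause enabling Art.75(2)(b) facilitation

    def is_complete(self) -> bool:
        # A bare "external LLM API" entry with no named provider is exactly the
        # documentation gap an Art.74 investigation would expose.
        return all([self.provider, self.model_name, self.model_version,
                    self.api_endpoint, self.controlled_review_clause_ref])
```

Running `is_complete()` across all GPAI entries makes checklist items 1 and 5 below auditable in one pass.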
Art.75 × Art.53: GPAI Obligations That Trigger Art.75 Investigations
Art.75 investigations are typically triggered by failures in Art.53 GPAI systemic risk obligations. The investigation trigger pipeline:
| Art.53 Obligation Failure | Typical Trigger Source | Art.75 Response |
|---|---|---|
| Art.53(1)(a) adversarial testing incomplete | Incident report showing harmful capability | Art.75(4) AI Office evaluation of testing methodology |
| Art.53(1)(b) incident not reported | Downstream deployer complaint; MSA referral | Art.75(4) full investigation; Art.75(3) MSA coordination |
| Art.53(1)(c) cybersecurity assessment inadequate | Security researcher disclosure | Art.75(2)(b) controlled review of cybersecurity posture |
| Art.53(1)(d) energy efficiency documentation absent | Systemic risk threshold screening | Art.75(2)(a) documentation demand |
| Art.53(1)(e) GPAI technical file incomplete | AI Office own-initiative review | Art.75(2)(a) technical file supplementation demand |
| Art.56 Code of Practice adequacy challenged | AI Office adequacy reassessment | Art.75(5) Scientific Panel technical evaluation |
CLOUD Act × Art.75: The GPAI Model Weight Problem
Art.75 creates a particularly acute jurisdiction problem for GPAI providers who host model weights and training data on US cloud infrastructure. The problem is structural, and there is no easy workaround under the CLOUD Act (18 U.S.C. § 2713) or current EU law:
The Dual-Compellability Chain
EU AI Office demands GPAI model evaluation (Art.75(2)(b)):
→ GPAI provider must facilitate controlled review in EU evaluation environment
→ But model weights, training checkpoints, evaluation infra hosted on AWS/Azure/GCP (US jurisdiction)
→ US Department of Justice may independently subpoena model weights under CLOUD Act
→ Two simultaneous legal obligations from two legal systems on the same model weights
→ EU Art.75(6) trade secret protection does NOT protect against US CLOUD Act warrant
→ EU Art.70 confidentiality does NOT protect against CLOUD Act extraterritorial subpoena
Six GPAI Data Categories — Jurisdiction Risk Assessment
| Data Category | EU Art.75 Obligation | US CLOUD Act Risk | Risk Level |
|---|---|---|---|
| Model weights (trained, quantized) | Controlled review access | Full extraction possible under warrant | Critical |
| Pre-training data samples | Methodology documentation only | Direct subpoena if stored on US infra | High |
| RLHF / RLAIF preference data | Evaluation documentation | Direct subpoena | High |
| Adversarial test set results | Mandatory Art.53(1)(a) docs | Indirect access via provider records | Medium |
| Art.53(1)(b) incident logs | Mandatory reporting records | Subpoena under CLOUD Act | Medium |
| Energy efficiency compute metrics | Art.53(1)(d) metrics | Low competitive sensitivity | Low |
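The risk table above can be encoded to prioritise migration work. The category keys and ranking below are an illustrative encoding of the table, assuming worst-risk-first ordering:

```python
# Hypothetical encoding of the six-category jurisdiction-risk table.
CATEGORY_RISK = {
    "model_weights": "critical",
    "pretraining_data_samples": "high",
    "rlhf_preference_data": "high",
    "adversarial_test_results": "medium",
    "incident_logs": "medium",
    "energy_metrics": "low",
}

def migration_priority(categories_on_us_infra: list) -> list:
    """Order US-hosted data categories by CLOUD Act exposure, worst first."""
    rank = {"critical": 0, "high": 1, "medium": 2, "low": 3}
    return sorted(categories_on_us_infra,
                  key=lambda c: rank[CATEGORY_RISK.get(c, "high")])

print(migration_priority(["incident_logs", "model_weights", "energy_metrics"]))
# → ['model_weights', 'incident_logs', 'energy_metrics']
```

This is the ordering logic behind checklist item 30 below: weights first, metrics last.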
EU-Native GPAI Infrastructure as Art.75 Risk Mitigation
For GPAI providers and for downstream developers who operate fine-tuned models built on GPAI base weights:
EU-hosted model weights = single-regime legal order for Art.75 compliance. When model weights are stored and inference is executed on EU infrastructure (within the EU's territorial sovereignty, not merely EU-region zones of US-incorporated hyperscalers), they are subject exclusively to EU legal demands — Art.75 AI Office requests, GDPR data subject rights, national court orders. US CLOUD Act extraterritorial claims do not apply to infrastructure that is not subject to US jurisdiction through ownership, control, or physical location.
This means:
- An AI Office controlled review under Art.75(2)(b) for a GPAI model on EU infrastructure requires only EU legal process
- No dual-compellability conflict exists between Art.75(6) EU confidentiality and CLOUD Act warrant
- Trade secret protection under Art.75(6) + Art.70 operates cleanly without CLOUD Act override risk
For downstream high-risk AI system developers who choose a GPAI provider with EU-hosted inference:
- Art.75 investigations of the GPAI component stay in the EU legal order
- Art.21 cooperation obligations are fully satisfiable without cross-border legal conflict
- CLOUD Act subpoena risk for the GPAI model evaluation data is eliminated
Python Implementation
GPAIModelEvaluationRequest
```python
from dataclasses import dataclass, field
from datetime import date, timedelta
from enum import Enum
from typing import Optional


class EvaluationRequestType(Enum):
    DOCUMENTATION_DEMAND = "documentation"     # Art.75(2)(a)
    CONTROLLED_REVIEW = "controlled_review"    # Art.75(2)(b)
    MSA_COORDINATION = "msa_coordination"      # Art.75(3)
    FULL_INVESTIGATION = "full_investigation"  # Art.75(4)


class GPAIInfrastructureJurisdiction(Enum):
    EU_NATIVE = "eu_native"              # Single-regime, no CLOUD Act risk
    EU_SOVEREIGN_CLOUD = "eu_sovereign"  # Hyperscaler EU sovereign zone — verify CLOUD Act applicability
    US_CLOUD = "us_cloud"                # Dual-compellability risk
    MIXED = "mixed"                      # Jurisdiction mapping required


@dataclass
class GPAIModelEvaluationRequest:
    """
    Represents an AI Office evaluation request under Art.75 EU AI Act.
    Tracks compliance with Art.75(2)(a) documentation demands and
    Art.75(2)(b) controlled review scheduling obligations.
    """
    request_id: str
    gpai_provider: str
    model_name: str
    model_version: str
    request_type: EvaluationRequestType
    request_date: date
    triggering_article: str  # e.g. "Art.75(1)", "Art.75(4)"
    related_high_risk_system: Optional[str] = None  # Annex IV system identifier
    infrastructure_jurisdiction: GPAIInfrastructureJurisdiction = (
        GPAIInfrastructureJurisdiction.MIXED
    )

    def response_deadline(self) -> date:
        """
        Standard Art.75 response deadlines:
        - Documentation demand (Art.75(2)(a)): 10 business days
        - Controlled review scheduling (Art.75(2)(b)): 20 business days
        - Full investigation (Art.75(4)): 15 business days for initial response
        """
        if self.request_type == EvaluationRequestType.DOCUMENTATION_DEMAND:
            business_days = 10
        elif self.request_type == EvaluationRequestType.CONTROLLED_REVIEW:
            business_days = 20  # Scheduling window for evaluation environment setup
        elif self.request_type == EvaluationRequestType.FULL_INVESTIGATION:
            business_days = 15
        else:
            business_days = 10
        result = self.request_date
        days_added = 0
        while days_added < business_days:
            result += timedelta(days=1)
            if result.weekday() < 5:  # Monday–Friday
                days_added += 1
        return result

    def is_overdue(self, today: Optional[date] = None) -> bool:
        check_date = today or date.today()
        return check_date > self.response_deadline()

    def requires_scientific_panel(self) -> bool:
        """Art.75(5): Scientific Panel involvement triggered for capability evaluation."""
        return self.request_type in (
            EvaluationRequestType.CONTROLLED_REVIEW,
            EvaluationRequestType.FULL_INVESTIGATION,
        )

    def cloud_act_risk_assessment(self) -> dict:
        """
        Assess dual-compellability risk under CLOUD Act × Art.75.
        Returns risk level, explanation, and required action.
        """
        if self.infrastructure_jurisdiction == GPAIInfrastructureJurisdiction.EU_NATIVE:
            return {
                "risk_level": "low",
                "explanation": (
                    "EU-native infrastructure: single-regime legal order. "
                    "Art.75 AI Office demands apply exclusively. No CLOUD Act exposure."
                ),
                "action": "Standard Art.75 cooperation protocol.",
            }
        elif self.infrastructure_jurisdiction == GPAIInfrastructureJurisdiction.EU_SOVEREIGN_CLOUD:
            return {
                "risk_level": "medium",
                "explanation": (
                    "EU Sovereign Cloud zone declared by hyperscaler — verify actual CLOUD Act "
                    "applicability. US-incorporated entity control may still create extraterritorial exposure."
                ),
                "action": "Obtain legal opinion confirming CLOUD Act non-applicability for model weights.",
            }
        elif self.infrastructure_jurisdiction == GPAIInfrastructureJurisdiction.US_CLOUD:
            return {
                "risk_level": "critical",
                "explanation": (
                    "US-hosted infrastructure: dual-compellability risk confirmed. "
                    "CLOUD Act subpoena may demand same model weights as Art.75 controlled review. "
                    "Art.75(6) EU trade secret protection does NOT override CLOUD Act."
                ),
                "action": (
                    "Engage EU and US legal counsel immediately. "
                    "Initiate model weight migration to EU infrastructure. "
                    "Implement two-counsel response protocol for simultaneous demands."
                ),
            }
        else:
            return {
                "risk_level": "high",
                "explanation": "Mixed jurisdiction: some assets under US jurisdiction. Full mapping needed.",
                "action": "Complete data residency audit. Identify US-hosted model weights. Prioritise migration.",
            }
```
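The weekend-skipping arithmetic behind `response_deadline` can be exercised standalone. The snippet below is an illustrative sketch with hypothetical dates:

```python
from datetime import date, timedelta

def add_business_days(start: date, n: int) -> date:
    """Mirror of the response_deadline arithmetic: count forward n weekdays."""
    result = start
    added = 0
    while added < n:
        result += timedelta(days=1)
        if result.weekday() < 5:  # Monday–Friday only
            added += 1
    return result

# Art.75(2)(a) documentation demand received Friday 4 September 2026:
deadline = add_business_days(date(2026, 9, 4), 10)
print(deadline)  # → 2026-09-18 (two weekends skipped)
```

Because the count starts the day after receipt, a demand received on a Friday consumes both following weekends before the tenth business day lands.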
ControlledReviewSession
```python
@dataclass
class ControlledReviewSession:
    """
    Manages a controlled technical review under Art.75(2)(b).
    Documents preparation, execution, and confidentiality obligations.
    """
    session_id: str
    evaluation_request: GPAIModelEvaluationRequest
    environment_type: str  # "provider_eu_premises" | "eu_evaluation_facility" | "secure_api"
    api_key_issued: bool = False
    api_key_expiry: Optional[date] = None
    scientific_panel_involved: bool = False
    evaluation_scope: list = field(default_factory=list)

    def standard_evaluation_scope(self) -> dict:
        """Standard Art.75(2)(b) controlled review scope per Art.53(1) obligations."""
        return {
            "adversarial_testing": {
                "article": "Art.53(1)(a)",
                "method": "Controlled API access with adversarial prompt suite",
                "targets": ["CBRN elicitation", "manipulation resistance", "autonomous capability"],
            },
            "cybersecurity_posture": {
                "article": "Art.53(1)(c)",
                "method": "Prompt injection, model extraction, jailbreak resistance",
                "targets": ["instruction hierarchy robustness", "prompt exfiltration resistance"],
            },
            "energy_efficiency": {
                "article": "Art.53(1)(d)",
                "method": "Representative load benchmark",
                "targets": ["compute per query", "training footprint documentation audit"],
            },
            "safety_mitigation_verification": {
                "article": "Art.53(1)(a) + Technical File",
                "method": "Content filter bypass testing vs documentation claims",
                "targets": ["refusal rate", "consistency under rephrasing"],
            },
            "capability_boundary_assessment": {
                "article": "Art.51(2) systemic risk threshold",
                "method": "Benchmark evaluation (MMLU, HumanEval, GPQA, or equivalent)",
                "targets": ["claimed vs measured capability levels"],
            },
        }

    def confidentiality_obligations(self) -> list[str]:
        """Art.75(6) obligations binding AI Office assessors during and after the session."""
        return [
            "No retention of model responses beyond defined evaluation window",
            "No publication of raw results without provider redaction review",
            "No sharing of API access credentials outside Scientific Panel membership",
            "Art.70 professional secrecy applies to all evaluation findings",
            "Trade secret markings by provider respected unless Art.70(3) public interest exception applies",
            "Controlled review report to be shared with AI Office only; MSA receives findings summary",
        ]

    def provider_preparation_checklist(self) -> list[str]:
        """What the GPAI provider must prepare for an Art.75(2)(b) controlled review."""
        return [
            "Dedicated evaluation API instance isolated from production",
            "Time-limited API credentials for AI Office assessors (expiry at session close)",
            "Technical staff designated and available during review for clarification",
            "Art.53(1)(a) adversarial testing results pre-submitted for comparison",
            "System card and capability documentation per model version",
            "Energy efficiency baseline metrics pre-submitted",
            "Incident log excerpt covering 12-month pre-review period",
            "Evaluation session scope agreement signed before access granted",
        ]

    def schedule_summary(self) -> dict:
        """Returns scheduling milestones for an Art.75(2)(b) controlled review."""
        request_date = self.evaluation_request.request_date
        return {
            "day_0": f"AI Office evaluation request received ({request_date})",
            "day_5": "GPAI provider acknowledges request; designates technical contact",
            "day_10": "Evaluation scope and methodology agreed",
            "day_15": "Dedicated API instance provisioned; credentials issued to AI Office",
            "day_20": f"Review session completed (deadline: {self.evaluation_request.response_deadline()})",
            "day_30": "AI Office preliminary findings shared with GPAI provider for trade secret review",
            "day_45": "Final Art.75 evaluation report completed; relevant findings relayed to MSA",
        }
```
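The time-limited credential handling implied by `api_key_issued` and `api_key_expiry` can be sketched standalone. The class, window length, and dates below are hypothetical, not part of any AI Office specification:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class EvaluationCredential:
    """Hypothetical time-limited API credential for an Art.75(2)(b) session."""
    key_id: str
    issued: date
    valid_days: int = 5  # illustrative session window; revoked at session close

    @property
    def expiry(self) -> date:
        return self.issued + timedelta(days=self.valid_days)

    def is_valid(self, on: date) -> bool:
        """Valid only between issuance and expiry, inclusive."""
        return self.issued <= on <= self.expiry

cred = EvaluationCredential("ai-office-eval-01", date(2026, 9, 14))
print(cred.expiry)                        # → 2026-09-19
print(cred.is_valid(date(2026, 9, 22)))   # → False, access lapses after the window
```

Expiry tied to issuance date (rather than manual revocation alone) ensures access lapses even if the revocation step is missed, which is the point of checklist item 18.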
40-Item Art.75 Compliance Checklist
Section 1: GPAI Component Documentation (High-Risk AI System Providers)
- 1. Annex IV technical file identifies all GPAI model components with provider, model name, version, and API endpoint
- 2. Specific GPAI capabilities deployed in AI system documented with intended purpose
- 3. Art.75(1) coordination pathway documented: how MSA will be informed of GPAI component identity upon request
- 4. GPAI provider contact details for Art.75 AI Office coordination maintained and current
- 5. Contract with GPAI provider includes Art.75(2)(b) controlled review facilitation clause
- 6. Art.55(1) downstream obligations tracked: GPAI provider disclosure (training data summary, capabilities) received
- 7. GPAI model version lock policy: which version is covered by conformity assessment; update procedure defined
- 8. Substantial modification procedure: assessment of whether GPAI model update triggers new conformity assessment
Section 2: GPAI Provider Obligations (GPAI Model Providers)
- 9. Art.53(1)(a) adversarial testing completed; results maintained in MSA-producible format
- 10. Art.53(1)(b) incident reporting procedure operational for serious incidents
- 11. Art.53(1)(c) cybersecurity assessment completed and documented with evaluation methodology
- 12. Art.53(1)(d) energy efficiency data recorded and included in GPAI technical file
- 13. Art.53(1)(e) GPAI technical file submitted to AI Office; version control maintained
- 14. Art.75(2)(a) documentation response procedure: 10 business day deadline compliance verified
- 15. Evaluation documentation package assembled, version-controlled, and retrieval-tested
- 16. Dedicated evaluation API instance procedure documented and tested at least annually
Section 3: Controlled Review Environment Preparation (Art.75(2)(b))
- 17. Evaluation API instance architecture documented (isolation from production confirmed)
- 18. Time-limited API key issuance and revocation procedure tested end-to-end
- 19. Evaluation scope pre-agreement process defined: who authorises scope within organisation
- 20. Technical staff designated for controlled review support; backup designated
- 21. Trade secret marking procedure for evaluation documentation established and applied
- 22. Confidentiality agreement template reviewed against Art.75(6) requirements
- 23. Evaluation session log retention period and storage location defined
- 24. Scientific Panel coordination contact designated at GPAI provider level
Section 4: CLOUD Act × Art.75 Infrastructure Risk
- 25. Infrastructure jurisdiction mapped: model weights, training data, evaluation infra hosting jurisdiction documented
- 26. US-hosted assets identified: GPAI components on AWS/Azure/GCP US-jurisdiction infra flagged
- 27. Dual-compellability risk assessed per data category (model weights / training data / incident logs)
- 28. EU legal counsel briefed on Art.75 obligations for GPAI services facing EU market
- 29. US legal counsel briefed on CLOUD Act × Art.75 intersection for US-incorporated entities
- 30. Model weight migration plan to EU infrastructure documented if critical risk identified
- 31. EU-native inference infrastructure assessment completed for Art.75 single-regime compliance
- 32. GPAI provider infrastructure commitments verified against actual terms of service and SLA
Section 5: Art.75 Investigation Response Protocol
- 33. Art.75(1) coordination: response procedure when MSA contacts about GPAI component documented
- 34. Art.75(2)(a) documentation demand: 10 business day response timeline tracked in compliance calendar
- 35. Art.75(2)(b) controlled review scheduling: 20 business day window for environment setup confirmed feasible
- 36. Art.75(3) MSA-via-AI-Office coordination: inter-authority request handling procedure assigned
- 37. Art.75(4) full investigation response: legal team briefing procedure documented; trigger criteria defined
- 38. Art.99(5) non-cooperation awareness: technical and legal staff informed of €15M/3% non-cooperation fine
- 39. Art.75(6) confidentiality obligations: assessor confidentiality undertaking obtained before API access granted
- 40. Post-investigation improvement loop: Art.75 findings feed back into Art.53 compliance review
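Progress against the 40 items can be tracked per section. The section names and item numbering below mirror the checklist above; the tracker itself is an illustrative sketch:

```python
# Hypothetical tracker for the 40-item Art.75 checklist (items numbered 1–40).
SECTIONS = {
    "GPAI component documentation": range(1, 9),
    "GPAI provider obligations": range(9, 17),
    "Controlled review preparation": range(17, 25),
    "CLOUD Act infrastructure risk": range(25, 33),
    "Investigation response protocol": range(33, 41),
}

def section_completion(done: set) -> dict:
    """Per-section completion as 'done/total' strings."""
    return {name: f"{len(done & set(items))}/{len(items)}"
            for name, items in SECTIONS.items()}

print(section_completion(set(range(1, 9)) | {9, 10, 25}))
# → first section 8/8, provider obligations 2/8, CLOUD Act section 1/8
```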
See Also
- EU AI Act Art.74: Market Surveillance Authority Powers — Developer Guide — General MSA powers that Art.75 supplements for GPAI components in high-risk AI systems
- EU AI Act Art.73: Serious Incident Reporting — Developer Guide — Art.53(1)(b) incident reports that trigger Art.75 AI Office investigations
- EU AI Act Art.53: GPAI Model Systemic Risk Obligations — Developer Guide — GPAI obligations whose failures trigger Art.75 investigation pipeline
- EU AI Act Art.64-70: EU AI Office & Governance — Developer Guide — AI Office institutional structure that exercises Art.75 market surveillance powers
- EU AI Act Art.72: Post-Market Monitoring Plan — Developer Guide — PMM data that Art.75 investigations may demand via Art.75(2)(a)