EU AI Act Art.90: AI Office Power to Request Information from GPAI Providers — Developer Guide (2026)
Article 89 gives you a voice before enforcement action is taken. Article 90 is what triggers the conversation.
Before any inspection, interim measure, or fine against a provider of a general-purpose AI model, the AI Office typically starts with an information request under Article 90. It is the diagnostic step: gather documents, evaluate compliance posture, identify gaps, then decide whether to escalate. For developers and legal teams at GPAI providers — companies whose foundation models underpin downstream applications across the EU — Article 90 is the first signal that the AI Office has your model in scope.
Responding poorly to an Art.90 request, or failing to respond at all, compounds the original compliance risk and creates independent Art.99 fine exposure. Responding well — with complete, accurate, well-organized documentation — can resolve an investigation before it reaches the formal enforcement stage.
What Article 90 Actually Says
Article 90 grants the AI Office the power to request information from providers of general-purpose AI models for the purpose of supervising and enforcing the Regulation:
Article 90(1) — Core information request power:
For the purpose of the supervision and enforcement tasks assigned to it under this Regulation, the AI Office may, by simple request or by decision, require providers of general-purpose AI models, and natural or legal persons acting on their behalf, to provide all necessary information and documentation.
"Simple request" versus "by decision" is a meaningful distinction. A simple request carries no immediate penalty for non-response — but a failure to respond in good faith can prompt a formal decision, which does. A decision under Art.90(1) triggers the full coercive regime including Art.99 sanctions.
Article 90(2) — What information can be requested:
The AI Office may request:
- (a) Technical documentation as specified in Annex XI (GPAI models) or Annex XII (GPAI models with systemic risk)
- (b) Information about training data, training methodologies, and compute used
- (c) Evaluation results, including red-teaming outcomes, capability assessments, and benchmark scores
- (d) Model cards, system cards, and instructions for use
- (e) Measures taken to implement the Code of Practice obligations
- (f) Information about downstream access and API usage that would help assess systemic risk
Article 90(3) — Third-party information:
Where necessary for assessing the potential systemic risk of a general-purpose AI model with systemic risk, the AI Office may request information from downstream providers, operators, or providers of the underlying AI infrastructure.
This extends the reach beyond the GPAI model provider itself. Cloud providers, fine-tuning operators, and downstream deployers can all receive Art.90(3) requests if the AI Office believes they hold information relevant to systemic risk assessment.
Article 90(4) — Proportionality and confidentiality:
The AI Office shall ensure the confidentiality of commercially sensitive information received. It shall not request information beyond what is necessary for the purposes of the supervisory or enforcement action.
Proportionality is not merely a courtesy — it is a legal constraint on the AI Office's conduct. An overly broad information request is challengeable before the General Court of the EU.
Article 90(5) — Content requirements for requests:
Every Art.90 request must specify:
- The legal basis and purpose of the request
- What information or documentation is required
- The time limit within which the information must be provided
- The consequences of failing to provide accurate information
Requests that lack these elements are procedurally defective. In practice, well-advised GPAI providers point the defect out in their response rather than simply refusing to comply.
Article 90(6) — Response obligations:
Providers are required to respond within the time limit set, providing information that is:
- Accurate and not misleading
- Complete (not selectively omitting material facts)
- In the format requested (or a reasonably equivalent format if the requested format is technically impractical)
Types of Information the AI Office Targets
Art.90 requests in practice cluster around four categories:
Category 1: Technical Documentation (Annex XI / XII)
Annex XI lists the mandatory technical documentation for GPAI models:
- General description of the model (intended purpose, capabilities, limitations)
- Architecture and training process description
- Training data sources and filtering criteria
- Compute resources used (FLOPs, hardware, training duration)
- Training objectives and evaluation metrics
- Behavioural testing and safety evaluation results
Annex XII adds requirements for models designated with systemic risk:
- Adversarial testing and red-teaming results
- Detailed capability evaluations against the EU scientific panel's benchmarks
- Incident reports and near-miss logs
- Measures taken to mitigate systemic risks identified under Art.55
If your Annex XI/XII documentation is complete, organized, and version-controlled, responding to a Category 1 request is a retrieval exercise. If it isn't, you are simultaneously trying to assemble compliance evidence while under investigative scrutiny — a position that makes it very difficult to demonstrate good-faith compliance.
Category 2: Training Data Provenance
The AI Office is particularly interested in:
- Data sources and provenance documentation (web scrape policies, licensed datasets, user-generated content)
- Copyright compliance measures (Art.53(1)(c) opt-out compliance)
- Personal data filtering and anonymization measures
- Bias and quality filtering methodologies
- Data retention and deletion policies
GDPR intersects here: if the training data included personal data, a parallel investigation track by EU data protection supervisory authorities can open. Calibrate your Art.90 response so that disclosures to the AI Office do not inadvertently create GDPR Art.83 fine exposure for the same dataset.
Category 3: Evaluation and Testing Results
The AI Office can request:
- Internal evaluation results including failure modes and capability scores
- External third-party audits and assessments
- Red-teaming results, including attempts to elicit dangerous capabilities
- CBRN (chemical, biological, radiological, nuclear) capability assessments where applicable
- Cybersecurity vulnerability assessments of the model and its API
- Benchmark results on standard academic and safety-specific evaluation suites
A common tension: companies often conduct internal red-teaming specifically to find and fix problems. Disclosing these results reveals both the problems found and the remediation applied. The Art.90(4) confidentiality obligation on the AI Office is the formal protection here, but in practice GPAI providers structure red-teaming programs with legal privilege considerations from the outset.
Category 4: Incident and Near-Miss Records
For GPAI models with systemic risk, Art.55(1)(d) requires providers to report serious incidents to the AI Office. Art.90 requests frequently ask for:
- The complete incident log (not just reported incidents)
- Root cause analysis for reported incidents
- Remediation measures and timelines
- Downstream provider notifications under Art.53(1)(f)
Procedural Requirements — What Makes a Request Legally Valid
An Art.90 request is only binding if it meets the procedural requirements. This matters because a defective request gives the provider grounds to seek clarification or, if issuing a decision, to challenge it before the General Court.
| Requirement | What to Check | Challenge Grounds |
|---|---|---|
| Legal basis stated | Art.90(1) or Art.90(3) clearly cited | Missing legal basis = procedurally defective |
| Purpose specified | Supervisory vs. enforcement; specific model in scope | Vague purpose may indicate fishing expedition |
| Information scope defined | Specific documents or categories listed | "All information" without specificity → proportionality challenge |
| Time limit given | Reasonable deadline stated | Less than 10 working days → challenge as unreasonable |
| Consequences stated | Art.99 penalties for non-compliance noted | Missing → simple request, not a formal decision |
| Format specified | PDF, structured data, API access | Technically impossible format → request clarification |
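The validity checks in the table above can be sketched as a small validator. This is an illustrative sketch: the field names and the 10-working-day threshold mirror this guide's table, not statutory text.

```python
# Required elements of a valid Art.90 request, per the table above.
REQUIRED_ELEMENTS = {
    "legal_basis": "Legal basis (Art.90(1) or Art.90(3))",
    "purpose": "Purpose of the request",
    "information_scope": "Information or documentation required",
    "time_limit_working_days": "Time limit for response",
    "consequences": "Consequences of inaccurate information",
}

def validity_issues(request: dict) -> list:
    """Return a list of procedural defects found in an Art.90 request."""
    issues = []
    for key, label in REQUIRED_ELEMENTS.items():
        if not request.get(key):
            issues.append(f"Missing element: {label}")
    days = request.get("time_limit_working_days")
    if isinstance(days, int) and days < 10:
        issues.append("Deadline below 10 working days (challenge as unreasonable)")
    return issues

# Example: a request that omits its purpose and sets a 5-day deadline
defects = validity_issues({
    "legal_basis": "Art.90(1)",
    "purpose": "",
    "information_scope": "Annex XI technical documentation",
    "time_limit_working_days": 5,
    "consequences": "Art.99(3) fines",
})
```

The example request produces two defects: the missing purpose and the short deadline. Either one is a ground to seek clarification before substantive compliance.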
Response Timeline — How Fast Must You Move?
The AI Office must set a time limit that is proportionate to the complexity of the information requested and the urgency of the situation. In practice:
| Request Type | Typical Timeline | Notes |
|---|---|---|
| Simple request, standard documentation | 20–30 working days | Starting point for Annex XI/XII documentation |
| Formal decision, systemic risk assessment | 10–20 working days | Shorter when AI Office has urgency grounds |
| Emergency systemic risk investigation | 5–10 working days | Art.93 interim measure context |
| Follow-up clarification request | 5–10 working days | Response already provided; narrow gap to fill |
If the time limit is genuinely insufficient — for example, because the requested evaluation data requires commissioning third-party testing — you can request an extension. Document the request for extension and the grounds in writing. The AI Office may or may not grant it, but the documented request demonstrates good-faith effort to comply.
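Working-day arithmetic is worth getting right before the clock starts. A minimal sketch of counting a deadline forward from the receipt date (public holidays are deliberately omitted here, so real deadlines may fall later):

```python
from datetime import date, timedelta

def add_working_days(start: date, working_days: int) -> date:
    """Advance `start` by the given number of Monday-to-Friday days.
    Simplified sketch: EU public holidays are not excluded."""
    current = start
    remaining = working_days
    while remaining > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 .. Friday=4
            remaining -= 1
    return current

# A formal decision received on Monday 2026-03-02 with a 10-working-day limit
deadline = add_working_days(date(2026, 3, 2), 10)  # → 2026-03-16
```

Two weekends fall inside the window, so ten working days from a Monday lands on the Monday two weeks later.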
CLOUD Act Conflict — The Double-Obligation Problem
For GPAI providers whose model weights, training data, and evaluation records live on AWS, Azure, or GCP infrastructure, Art.90 creates a potential conflict with US law that is one of the most significant and underappreciated compliance risks in the GPAI space.
How the Conflict Arises
The US CLOUD Act (Clarifying Lawful Overseas Use of Data Act, 18 U.S.C. § 2713) allows US federal authorities to compel US-headquartered cloud providers and their foreign subsidiaries to produce data stored anywhere in the world, regardless of where that data is physically located.
This creates a three-body problem:
- EU AI Office requests your training data documentation and evaluation results under Art.90
- US DOJ or FBI issues a CLOUD Act subpoena or warrant to AWS/Azure/GCP for the same data (perhaps because the model was used in a context implicating US national security interests)
- Your cloud provider must comply with both — unless it can invoke the bilateral CLOUD Act agreement process, which takes time your enforcement calendar may not accommodate
Jurisdiction Matrix for GPAI Model Assets
| Asset Type | EU AI Office Access | US CLOUD Act Risk | Recommended Approach |
|---|---|---|---|
| Model weights (pre-trained) | Art.90(2)(a), Annex XI | HIGH — stored on US cloud | EU-hosted replica for compliance documentation |
| Training dataset documentation | Art.90(2)(b) | HIGH — provenance docs on US cloud | Separate EU-hosted compliance record |
| Red-teaming results | Art.90(2)(c) | HIGH — typically in US systems | Legal privilege + EU-hosted copy |
| Evaluation benchmarks | Art.90(2)(c) | MEDIUM — often public benchmarks | Low sensitivity, standard disclosure |
| API access logs | Art.90(3) (via infrastructure provider) | HIGH — cloud provider retains | EU-isolated logging for compliance scope |
| Incident reports | Art.55 + Art.90(4) | MEDIUM | EU filing copy mandatory |
Practical Mitigation
The architecture solution is to maintain a compliance data silo on EU-based infrastructure (or a GDPR-adequate jurisdiction) that contains copies of all Art.90-responsive documentation. This silo is:
- Replicated from US-infrastructure primary systems
- Subject to EU data protection law exclusively
- Not accessible to US cloud provider personnel or US law enforcement without EU legal process
This does not eliminate the CLOUD Act risk for your primary US-hosted systems, but it ensures that your Art.90 response capability is independent of US legal proceedings and that the EU AI Office receives documentation from a jurisdiction it can rely on.
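One way to operationalize the silo is to tag every compliance artifact with the jurisdictions where copies are hosted and flag anything whose only copy sits on US-cloud infrastructure. A sketch, with illustrative document IDs and jurisdiction labels:

```python
def silo_gaps(documents: list) -> list:
    """Return doc IDs with no EU-hosted copy.
    Each document is a dict with 'doc_id' and a 'locations' list of
    jurisdiction strings such as 'EU' or 'US_CLOUD'. Illustrative sketch."""
    return [
        d["doc_id"] for d in documents
        if "EU" not in d.get("locations", [])
    ]

docs = [
    {"doc_id": "annex-xi-v3", "locations": ["US_CLOUD", "EU"]},
    {"doc_id": "redteam-2026-q1", "locations": ["US_CLOUD"]},
]
gaps = silo_gaps(docs)  # → ["redteam-2026-q1"]
```

Running this check as part of routine compliance review surfaces documents that would be unavailable through EU-only legal process before an Art.90 request makes the gap urgent.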
Confidentiality and Business Secrets
Art.90(4) requires the AI Office to ensure confidentiality of commercially sensitive information. In practice, GPAI providers should:
Mark sensitive material clearly. Every document you provide should identify sections containing:
- Trade secrets (model architecture details, training hyperparameters, proprietary data sources)
- Commercially sensitive evaluation results (capability scores that inform competitive positioning)
- Legally privileged communications (advice from EU-admitted counsel about compliance)
Submit a confidentiality request alongside the substantive response. This is a separate document listing each section of the response that you consider confidential and the basis for that designation.
Understand the limits. The AI Office may disclose information to other EU institutions, Member State authorities, and in enforcement decisions if it is relevant to the public interest. Your confidentiality claim applies to third parties — competitors, journalists, the general public — not necessarily to the entire EU institutional apparatus. If information reaches the General Court in appeal proceedings, it may enter the public record subject to confidentiality orders.
Non-Compliance Consequences — Art.99 Exposure
Failure to respond to a formal Art.90 decision, or providing inaccurate or incomplete information, triggers Art.99(3):
Non-compliance with a request for information by decision shall be subject to fines of up to 3% of the total worldwide annual turnover of the provider in the preceding financial year, or EUR 15,000,000, whichever is higher.
This is separate from the fines for the underlying compliance failure that triggered the investigation. A GPAI provider can thus face:
- An Art.99 fine for the substantive violation (e.g., failure to implement systemic risk mitigation under Art.55)
- A separate Art.99(3) fine for providing incomplete information in response to the Art.90 information request about that violation
The Art.99(3) exposure is particularly dangerous because it is outcome-independent: even if you prevail on the substantive compliance question, the process violation in responding to the information request can independently attract a fine.
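The ceiling quoted above is "whichever is higher" of the two amounts, which matters for providers on either side of the crossover point (EUR 500m turnover, where 3% equals the EUR 15m floor). As arithmetic:

```python
def art99_3_ceiling(worldwide_annual_turnover_eur: float) -> float:
    """Art.99(3) fine ceiling as quoted above: the higher of 3% of total
    worldwide annual turnover or EUR 15,000,000."""
    return max(0.03 * worldwide_annual_turnover_eur, 15_000_000.0)

# A provider with EUR 2bn turnover: 3% = EUR 60m, above the EUR 15m floor
large = art99_3_ceiling(2_000_000_000)  # → 60,000,000.0
# A provider with EUR 100m turnover: 3% = EUR 3m, so the EUR 15m floor applies
small = art99_3_ceiling(100_000_000)    # → 15,000,000.0
```

Note this is the ceiling on the information-request fine alone; the substantive violation carries its own separate exposure.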
Art.90 in the Enforcement Sequence
Art.90 does not exist in isolation. It sits in a specific enforcement sequence:
Art.90 Information Request
↓ (if gaps or non-compliance identified)
Art.91 Inspection (on-site or remote evaluation)
↓ (if serious non-compliance confirmed)
Art.89 Right to Be Heard (written observations + oral hearing)
↓ (if enforcement measure warranted)
Art.93 Interim Measures (systemic risk) or Art.99 Penalties (fines)
↓ (if provider disputes)
Art.94 Commitments / Settlement → OR → General Court Appeal
Providers who respond comprehensively to Art.90 requests often short-circuit the escalation path. A complete, well-organized Art.90 response that demonstrates active compliance efforts can lead the AI Office to conclude that no further enforcement action is warranted. An incomplete response extends the investigation and often triggers Art.91 inspections.
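For tracking where a matter stands, the escalation path above can be encoded as a simple successor map. The stage labels follow this guide's sequence diagram, not statutory text:

```python
# Escalation stages in the order shown in the sequence above.
ESCALATION_PATH = [
    "Art.90 information request",
    "Art.91 inspection",
    "Art.89 right to be heard",
    "Art.93 interim measures / Art.99 penalties",
    "Art.94 commitments or General Court appeal",
]

def next_stage(current: str) -> "str | None":
    """Return the next escalation stage, or None at the end of the path."""
    idx = ESCALATION_PATH.index(current)
    return ESCALATION_PATH[idx + 1] if idx + 1 < len(ESCALATION_PATH) else None
```

A comprehensive Art.90 response aims to make `next_stage` irrelevant: the matter closes at the first stage.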
Python Implementation — Art.90 Response Infrastructure
```python
from dataclasses import dataclass, field
from datetime import date, timedelta
from enum import Enum
from typing import Optional
import hashlib


class RequestType(Enum):
    SIMPLE_REQUEST = "simple_request"    # No immediate penalty for non-response
    FORMAL_DECISION = "formal_decision"  # Art.99 penalties apply


class DocumentCategory(Enum):
    TECHNICAL_DOCS = "annex_xi_xii"
    TRAINING_DATA = "training_data_provenance"
    EVALUATION_RESULTS = "evaluation_testing"
    INCIDENT_RECORDS = "incident_near_miss"
    CODE_OF_PRACTICE = "code_of_practice_measures"
    INFRASTRUCTURE = "infrastructure_access"


@dataclass
class Art90Request:
    """Tracks a single AI Office information request under Art.90."""
    request_id: str
    received_date: date
    request_type: RequestType
    legal_basis: str   # Must be stated in request (Art.90(5))
    purpose: str       # Must be stated in request (Art.90(5))
    deadline: date
    categories_requested: list[DocumentCategory]
    model_in_scope: str  # Which GPAI model is under scrutiny
    # Procedural defects (grounds for challenge)
    missing_legal_basis: bool = False
    vague_purpose: bool = False
    disproportionate_scope: bool = False
    insufficient_deadline: bool = False  # < 10 working days
    # Response tracking
    extension_requested: bool = False
    extension_granted_until: Optional[date] = None
    response_submitted: Optional[date] = None
    confidentiality_request_submitted: bool = False

    def working_days_remaining(self, as_of: Optional[date] = None) -> int:
        """Count working days until deadline (simplified — excludes public holidays)."""
        target = as_of or date.today()
        effective_deadline = self.extension_granted_until or self.deadline
        count = 0
        current = target
        while current < effective_deadline:
            if current.weekday() < 5:  # Monday–Friday
                count += 1
            current += timedelta(days=1)
        return count

    def has_procedural_defects(self) -> list[str]:
        defects = []
        if self.missing_legal_basis:
            defects.append("Missing legal basis (Art.90(5)(a))")
        if self.vague_purpose:
            defects.append("Purpose insufficiently specified (Art.90(5)(b))")
        if self.disproportionate_scope:
            defects.append("Scope disproportionate to stated purpose (Art.90(4))")
        if self.insufficient_deadline and self.request_type == RequestType.FORMAL_DECISION:
            defects.append("Deadline below 10-working-day minimum for formal decision")
        return defects

    def alert_status(self) -> str:
        days = self.working_days_remaining()
        if self.response_submitted:
            return "COMPLETE"
        if days <= 3:
            return f"CRITICAL — {days} working days remaining"
        if days <= 7:
            return f"WARNING — {days} working days remaining"
        return f"OK — {days} working days remaining"


@dataclass
class Art90DocumentRegistry:
    """Central registry of documents responsive to Art.90 requests.

    Maintains checksums to detect tampering and tracks which documents
    have been provided to the AI Office in which request.
    """
    documents: dict[str, dict] = field(default_factory=dict)

    def register_document(
        self,
        doc_id: str,
        title: str,
        category: DocumentCategory,
        file_path: str,
        sensitivity: str,  # "public", "confidential", "trade_secret", "privileged"
        file_content_bytes: bytes,
        jurisdiction: str = "EU",  # "EU" or "US_CLOUD" — affects CLOUD Act risk
    ) -> str:
        checksum = hashlib.sha256(file_content_bytes).hexdigest()
        self.documents[doc_id] = {
            "title": title,
            "category": category.value,
            "file_path": file_path,
            "sensitivity": sensitivity,
            "checksum": checksum,
            "jurisdiction": jurisdiction,
            "provided_in_requests": [],
            "cloud_act_risk": jurisdiction == "US_CLOUD" and sensitivity != "public",
        }
        return checksum

    def mark_provided(self, doc_id: str, request_id: str) -> None:
        if doc_id in self.documents:
            self.documents[doc_id]["provided_in_requests"].append(request_id)

    def cloud_act_risk_documents(self) -> list[dict]:
        return [
            {"doc_id": k, **v}
            for k, v in self.documents.items()
            if v.get("cloud_act_risk", False)
        ]

    def gap_analysis(self, request: Art90Request) -> list[str]:
        """Identify categories requested but not yet registered in the registry."""
        registered_categories = {
            doc["category"] for doc in self.documents.values()
        }
        gaps = []
        for cat in request.categories_requested:
            if cat.value not in registered_categories:
                gaps.append(f"No documents registered for category: {cat.value}")
        return gaps


@dataclass
class Art90ResponsePackage:
    """Assembles a structured response to an Art.90 request."""
    request: Art90Request
    registry: Art90DocumentRegistry
    responding_entity: str
    legal_counsel_eu_admitted: bool

    def confidentiality_submissions(self) -> list[dict]:
        """Documents that require confidentiality designation in the response."""
        return [
            {
                "doc_id": k,
                "sensitivity": v["sensitivity"],
                "basis": self._confidentiality_basis(v["sensitivity"]),
            }
            for k, v in self.registry.documents.items()
            if v["sensitivity"] in ("confidential", "trade_secret", "privileged")
        ]

    def _confidentiality_basis(self, sensitivity: str) -> str:
        bases = {
            "confidential": "Art.90(4) — commercially sensitive business information",
            "trade_secret": "Art.90(4) + Trade Secrets Directive 2016/943",
            "privileged": "Legal professional privilege — EU Court of Justice case law (AM & S v Commission)",
        }
        return bases.get(sensitivity, "Art.90(4)")

    def validate_completeness(self) -> dict:
        gaps = self.registry.gap_analysis(self.request)
        cloud_act_risks = self.registry.cloud_act_risk_documents()
        defects = self.request.has_procedural_defects()
        return {
            "gaps": gaps,
            "cloud_act_risk_count": len(cloud_act_risks),
            "procedural_defects": defects,
            "confidential_docs": len(self.confidentiality_submissions()),
            "ready_to_submit": len(gaps) == 0,
            "warnings": [
                f"CLOUD ACT RISK: {len(cloud_act_risks)} document(s) on US-cloud infrastructure"
            ] if cloud_act_risks else [],
        }


def check_art90_response_readiness(
    request: Art90Request,
    registry: Art90DocumentRegistry,
) -> dict:
    """Quick readiness check for an Art.90 information request response.

    Call this on receipt of a request and again 3 days before the deadline.
    """
    defects = request.has_procedural_defects()
    gaps = registry.gap_analysis(request)
    days_remaining = request.working_days_remaining()
    cloud_risks = registry.cloud_act_risk_documents()
    return {
        "request_id": request.request_id,
        "deadline": str(request.deadline),
        "days_remaining": days_remaining,
        "alert": request.alert_status(),
        "procedural_defects_in_request": defects,
        "documentation_gaps": gaps,
        "cloud_act_risks": len(cloud_risks),
        "action_items": [
            f"Fill documentation gap: {gap}" for gap in gaps
        ] + [
            "Prepare confidentiality request for commercially sensitive material"
        ] + ([
            f"Challenge procedural defect: {d}" for d in defects
        ] if defects and request.request_type == RequestType.FORMAL_DECISION else []) + ([
            "Evaluate EU-hosted compliance silo for CLOUD Act risk mitigation"
        ] if cloud_risks else []),
    }
```
35-Item Art.90 Information-Readiness Checklist
Standing Documentation Posture (prepare before any request)
- 1. Annex XI technical documentation maintained, version-controlled, and current
- 2. Annex XII systemic risk documentation maintained if model meets Art.51 designation threshold
- 3. Training data sources documented with provenance chain (dataset name, license, opt-out compliance)
- 4. Training compute resources documented (FLOPs, hardware type, training duration, cloud provider)
- 5. Model architecture description sufficient to satisfy Art.90(2)(a) without exposing trade secrets
- 6. Evaluation results maintained: internal benchmarks, external audits, red-teaming outcomes
- 7. Model card current and accurate (capabilities, known limitations, safety measures)
- 8. Incident log maintained for all serious incidents under Art.55(1)(d)
- 9. Root cause analyses completed for each logged incident
- 10. Code of Practice implementation documented (which commitments, by when, verification evidence)
- 11. EU-hosted compliance data silo established for all Art.90-responsive documentation
- 12. Document sensitivity classifications applied (public / confidential / trade secret / privileged)
- 13. SHA-256 checksums maintained for all documentation to detect tampering
On Receipt of an Art.90 Request
- 14. Verify request contains: legal basis, purpose, information scope, deadline, consequences
- 15. Check for procedural defects (missing elements per Art.90(5))
- 16. Classify request: simple request vs. formal decision (penalty consequences differ)
- 17. Calculate working days remaining to deadline (excluding public holidays)
- 18. Assess whether deadline extension is needed and document grounds
- 19. Submit extension request in writing if deadline is insufficient
- 20. Map requested categories to your document registry
- 21. Perform gap analysis: which requested categories are not yet documented?
- 22. Identify documents on US-cloud infrastructure — assess CLOUD Act conflict risk
- 23. Engage EU-admitted legal counsel if this is a formal decision or systemic risk investigation
- 24. Assess whether legal professional privilege applies to any responsive material
- 25. Check whether a parallel US legal proceeding is underway (CLOUD Act conflict alert)
Preparing the Response
- 26. Draft response cover letter citing Art.90, your responding entity, and request reference
- 27. Prepare confidentiality request listing each sensitive section and its legal basis
- 28. Verify information provided is accurate, complete, and not misleading (Art.90(6))
- 29. Verify format matches request specification (or document why an alternative is necessary)
- 30. Mark all documents with document ID, version, date, and classification
- 31. Check SHA-256 checksums against registry to confirm no post-assembly changes
- 32. Legal counsel review of response before submission
Post-Submission
- 33. Confirm delivery and retain proof of submission (time-stamped acknowledgement)
- 34. Record which documents were provided in which request in your registry
- 35. Monitor for follow-up request or escalation to Art.91 inspection
Practical Takeaways
Three decisions that determine your Art.90 posture more than anything else:
1. Invest in standing documentation. The worst time to assemble Annex XI/XII documentation is during an active information request. Build it once, maintain it continuously, version-control it. When the request arrives, your response is a retrieval operation, not a construction project.
2. Separate EU-hosted compliance records from US-cloud primary systems. The CLOUD Act conflict is real for any GPAI provider using major US cloud infrastructure. Maintain a parallel compliance data silo on EU-jurisdictioned infrastructure (or a GDPR-adequate country) that is your authoritative response source for EU enforcement purposes.
3. Distinguish simple requests from formal decisions immediately. A simple request under Art.90(1) carries no immediate penalty for non-response. A formal decision does. Treating a simple request as binding (and rushing a poorly prepared response) and treating a formal decision as non-binding (and missing the deadline) are opposite errors, and each carries real cost.
See Also
- EU AI Act Art.89: Right to Be Heard in Enforcement Proceedings
- EU AI Act Art.91: AI Office Inspection Powers — On-Site and Remote
- EU AI Act Art.53: GPAI Provider Obligations — Documentation, Copyright, and Downstream Notification
- EU AI Act Art.55: Obligations for GPAI Models with Systemic Risk
- EU AI Act Art.88: Whistleblower Protection and Reporting Breaches
- EU AI Act Art.99: Penalties and Administrative Fines