EU AI Act Art.64: Access to Data and Documentation — Market Surveillance Authority Powers for High-Risk AI Enforcement (2026)
EU AI Act Article 64 is the enforcement enabler of the Chapter IX governance framework. Where Articles 57–63 establish the institutional architecture — national competent authorities (Art.57), their investigation powers (Art.58), the AI Board (Art.59), the EU AI database (Art.60), the Scientific Panel (Art.61), the AI Office (Art.62), and the Advisory Forum (Art.63) — Art.64 provides the foundational legal authority without which none of those bodies could conduct meaningful conformity oversight: the right to access the data, documentation, and source code that constitute a high-risk AI system.
Conformity assessment for high-risk AI is not a documentation exercise. It requires direct inspection of the training datasets that shape model behaviour, the validation processes that establish system limits, the technical documentation that records design decisions, and — in cases where surface-level inspection is insufficient — the source code that implements those decisions. Art.64 is the statutory basis that converts market surveillance authority investigation rights (established in Art.58) into concrete access powers against which providers and deployers have corresponding cooperation obligations.
For developers and providers of high-risk AI systems — and for EU-sovereign infrastructure providers whose platforms host such systems — Art.64 defines exactly what regulators can demand, under what conditions they can demand it, what procedural safeguards limit that demand, and what obligations you carry to make that access possible.
Art.64 became applicable on 2 August 2025 as part of the phased entry into force of Regulation (EU) 2024/1689.
Art.64 in the Chapter IX Enforcement Architecture
Art.64 sits at the transition point between governance establishment and governance operation. The Chapter IX progression is:
| Article | Function | Relationship to Art.64 |
|---|---|---|
| Art.57 | NCA designation and independence requirements | Art.64 access powers are exercised by the Art.57-designated authorities |
| Art.58 | Market surveillance investigation powers | Art.64 specifies the data/documentation access component of Art.58 powers |
| Art.59 | European AI Board | AI Board receives enforcement intelligence gathered under Art.64 |
| Art.60 | EU AI database | Art.64 access reveals whether database registrations are accurate |
| Art.61 | Scientific Panel | Scientific Panel uses Art.64-accessed data for technical expert evaluation |
| Art.62 | AI Office enforcement powers (GPAI) | Art.62 AI Office corrective measures use Art.64 access in GPAI enforcement |
| Art.63 | Advisory Forum | Advisory Forum input shapes how Art.64 access powers are exercised in practice |
| Art.64 | Data and documentation access — the enforcement substrate | Enables all preceding bodies to function |
The critical point: Art.64 access rights extend to high-risk AI systems under Chapter III. The AI Office's parallel powers under Art.62 cover GPAI models under Chapter V. Together they establish comprehensive data access for the full scope of regulated AI systems.
Art.64(1): Full Access to Training Data, Test Data, and Technical Documentation
Art.64(1) grants market surveillance authorities full access to:
- Training datasets used to develop the high-risk AI system
- Test datasets used during development
- Validation datasets used to assess system performance
- Technical documentation required under Art.11 and Annex IV
- Post-market monitoring plans under Art.72
- Logs automatically generated by the system under Art.12
The word *full* in Art.64(1) is not qualified by proportionality at this stage. Proportionality governs source code access (Art.64(2)) and the overall investigation framework (Art.58), but access to datasets, documentation, and logs is granted as a right, not a conditional power.
Why datasets are the central enforcement target: Conformity with Chapter III, Section 2 requirements — data governance (Art.10), technical robustness (Art.15), human oversight (Art.14) — cannot be verified from product-level testing alone. An NCA assessing whether a high-risk AI system for recruitment screening (Annex III, category 4) meets Art.10 data quality requirements must be able to inspect:
- The composition and representativeness of training data
- Pre-processing steps applied to address bias
- Whether protected-characteristic proxies were excluded
- The validation methodology used to test model fairness
None of this is visible from the deployed system's external behaviour. Art.64(1) access is what makes Art.10 compliance assessable.
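To make the access right operational, a provider can track whether each Art.64(1) material category is actually retrievable on request. The sketch below is illustrative only: the category names and the inventory structure are assumptions for this example, not an official schema.

```python
# Illustrative readiness inventory for the Art.64(1) material categories.
# Category names mirror the bullet list above; not an official taxonomy.

ART_64_1_CATEGORIES = [
    "training_data",
    "test_data",
    "validation_data",
    "technical_documentation",   # Art.11 + Annex IV
    "post_market_monitoring",    # Art.72
    "logs",                      # Art.12
]

def art_64_1_gaps(inventory: dict[str, bool]) -> list[str]:
    """Return the Art.64(1) categories not yet retrievable.

    `inventory` maps category name to a "retrievable on request" flag.
    """
    return [c for c in ART_64_1_CATEGORIES if not inventory.get(c, False)]

inventory = {
    "training_data": True,
    "test_data": True,
    "validation_data": False,    # gap: validation sets archived offline
    "technical_documentation": True,
    "post_market_monitoring": True,
    "logs": True,
}
print(art_64_1_gaps(inventory))  # -> ['validation_data']
```

Any non-empty result is a gap to close before market placement, since Art.64(1) access is a right rather than a negotiated request.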
Hosted infrastructure implications: For AI systems hosted on cloud platforms, Art.64(1) access extends to the provider's technical infrastructure to the extent that infrastructure holds regulated data. EU-native hosting arrangements simplify Art.64 compliance by maintaining training data, logs, and documentation within a single legal jurisdiction — eliminating the cross-border access complications that arise when data is distributed across non-EU infrastructure (see CLOUD Act section below).
Art.64(2): Source Code Access — Proportionality and Conditions
Art.64(2) grants market surveillance authorities access to source code of the high-risk AI system, subject to a proportionality constraint: source code access is available where necessary to assess conformity and where a reasoned request has been made.
The "where necessary" condition distinguishes Art.64(2) from Art.64(1). Source code access is not automatic in all enforcement proceedings. It requires the NCA to assess whether:
- The conformity question at issue cannot be answered by dataset inspection, documentation review, or output testing alone
- Source code inspection would materially advance the assessment
- The investigative burden is proportionate to the compliance risk identified
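The three conditions above can be read as a conjunctive test. The following sketch encodes that reading as a simple decision aid; it is an interpretive illustration, not the legal test itself, and the field names are hypothetical.

```python
# Sketch of the Art.64(2) necessity assessment as a conjunctive test.
# Each flag corresponds to one bullet condition above; illustrative only.
from dataclasses import dataclass

@dataclass
class SourceCodeNecessity:
    resolvable_without_code: bool    # answerable via datasets/docs/output tests?
    code_materially_advances: bool   # would code inspection materially advance it?
    burden_proportionate: bool       # burden proportionate to identified risk?

    def access_justified(self) -> bool:
        # Source code access is justified only if the question cannot be
        # resolved otherwise AND inspection helps AND the burden is proportionate.
        return (
            not self.resolvable_without_code
            and self.code_materially_advances
            and self.burden_proportionate
        )

# Contested Art.14 override implementation: documentation disputed, code decisive.
case = SourceCodeNecessity(False, True, True)
print(case.access_justified())  # -> True
```

If any single condition fails, the NCA falls back to Art.64(1) material, which matches the table that follows: most conformity questions never reach source code.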
When source code access is necessary:
| Conformity question | Dataset/doc sufficient? | Source code needed? |
|---|---|---|
| Data preprocessing pipeline meets Art.10(3) | Usually yes | Rarely |
| Human oversight override mechanism (Art.14) | Documentation may suffice | If implementation contested |
| Robustness against adversarial input (Art.15) | Test results + architecture docs often sufficient | If test methodology disputed |
| Logging completeness (Art.12) | Log inspection + architecture docs | If logging gaps suspected |
| Embedded bias in inference pipeline | Training data + validation | Yes — pipeline logic required |
Procedural requirements: The "reasoned request" requirement means the NCA must state in writing why source code access is necessary for the specific conformity question under investigation. This procedural requirement protects providers from speculative or overbroad source code demands while preserving NCA authority in genuinely complex cases.
Trade secrets: Source code is typically the most commercially sensitive component of an AI system. Art.64(2) must be read in conjunction with Art.64(5) (confidentiality) and Art.70 (trade secrets protection) — accessed source code carries the strongest confidentiality obligations among Art.64 material.
Art.64(3): Provider Cooperation Obligations
Art.64(3) establishes the corresponding obligation on the provider side: providers of high-risk AI systems and their authorised representatives under Art.22 shall cooperate with market surveillance authorities and take all necessary measures to ensure that Art.64 access is possible.
What cooperation requires:
- Maintaining datasets and documentation in accessible, retrievable formats
- Designating technical personnel able to explain system architecture to investigators
- Providing access to data storage infrastructure (including cloud environments)
- Not destroying, altering, or concealing Art.64-material after investigation notification
- Translating documentation into Member State language(s) if reasonably requested
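The non-destruction duty in the list above lends itself to a "legal hold" pattern: once an investigation notification is recorded for a system, deletions of Art.64 material for that system are refused. This is a minimal sketch under assumed names; the storage layer is a dict for brevity.

```python
# Minimal legal-hold sketch for the Art.64(3) non-destruction duty.
# All class and method names are illustrative, not from the Regulation.
class Art64MaterialStore:
    def __init__(self) -> None:
        self._store: dict[tuple[str, str], bytes] = {}  # (system_id, name) -> blob
        self._holds: set[str] = set()                   # system_ids under hold

    def put(self, system_id: str, name: str, blob: bytes) -> None:
        self._store[(system_id, name)] = blob

    def notify_investigation(self, system_id: str) -> None:
        # Investigation notification freezes all material for the system.
        self._holds.add(system_id)

    def delete(self, system_id: str, name: str) -> bool:
        if system_id in self._holds:
            return False  # refuse: Art.64(3) non-destruction duty
        self._store.pop((system_id, name), None)
        return True

store = Art64MaterialStore()
store.put("sys-001", "training_data_v3", b"<blob>")
store.notify_investigation("sys-001")
print(store.delete("sys-001", "training_data_v3"))  # -> False: hold in place
```

In a production system the hold would sit in the storage layer's access-control path rather than in application code, but the invariant is the same.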
Refusal consequences: Failure to cooperate with Art.64 access constitutes a standalone violation of the Regulation, separate from any underlying conformity issue. This means an NCA discovering a provider refused data access can initiate enforcement proceedings on two grounds simultaneously: suspected conformity failure and refusal to cooperate.
Notification triggers: Providers should treat the following as signals to initiate Art.64 compliance preparation:
- Formal NCA investigation notification
- Informal NCA inquiry about a specific system
- Serious incident notification submitted by a deployer (Art.73)
- AI system registration triggering NCA review
Third-country providers: Providers established outside the EU are subject to Art.64 cooperation obligations through their EU-authorised representative (Art.22). The authorised representative must have the authority to ensure cooperation, which may require contractual mechanisms with the non-EU provider giving the representative enforceable access rights to production data and systems.
Art.64(4): Deployer Obligations to Facilitate Access
Art.64(4) extends cooperation obligations to deployers of high-risk AI systems. Deployers using systems covered by Art.64 must:
- Provide market surveillance authorities with access to the AI system in the deployed environment
- Cooperate with NCA investigations relating to the system's conformity
- Share deployment-environment logs and monitoring data generated under Art.12 and Art.72
The deployer-facing Art.64(4) obligation reflects an enforcement reality: many conformity questions only become visible in the deployed environment. A high-risk AI system used for medical diagnosis may produce anomalous outputs in a specific clinical workflow that was not represented in the provider's validation data. NCA investigation of that anomaly requires access to the deployer's production environment, real-world logs, and deployment-specific configuration — data the provider does not hold.
Deployer practical implications:
- Post-market monitoring data collected under Art.72 must be retained in accessible form
- Deployment contracts should specify data retention periods consistent with potential NCA investigation timelines
- Third-party deployment platforms must allow NCA investigation access — this should be verified in service agreements before deployment
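A simple way to operationalise the retention point above is to compare each deployment's configured log retention against a policy minimum chosen for potential NCA investigation timelines. The 24-month figure below is an assumed policy choice for illustration, not a figure taken from Art.64.

```python
# Hedged sketch: retention gap check for deployer-held Art.12/Art.72 data.
# ASSUMED_MIN_RETENTION_DAYS is an illustrative policy choice, not a legal minimum.

ASSUMED_MIN_RETENTION_DAYS = 730  # roughly 24 months

def retention_gap_days(configured_days: int,
                       minimum_days: int = ASSUMED_MIN_RETENTION_DAYS) -> int:
    """Days of retention missing relative to the policy minimum (0 if none)."""
    return max(0, minimum_days - configured_days)

print(retention_gap_days(365))   # -> 365 (a 12-month config falls a year short)
print(retention_gap_days(1095))  # -> 0
```

Running this per deployment surfaces which service agreements need renegotiated retention clauses before an investigation arrives.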
Art.64(5): Confidentiality and Trade Secret Protections
Art.64(5) places confidentiality obligations on market surveillance authorities with respect to information obtained under Art.64. Accessed information may only be used for enforcement purposes and must be protected in accordance with Union law on confidentiality of proceedings.
Accessed information falls into four categories across three sensitivity tiers:
| Information type | Sensitivity level | Protections |
|---|---|---|
| Training data composition (structure, sources) | Medium | Confidential to investigation |
| Validation/test results | Medium | Confidential to investigation |
| Technical documentation (architecture, design choices) | High | Confidential + trade secret |
| Source code | Very high | Strongest protection — use/disclosure strictly limited |
Art.70 interaction: Art.70 of the Regulation establishes a general framework for protection of confidential information and trade secrets in AI Act enforcement proceedings. Art.64(5) is the Art.64-specific instantiation of those protections: information obtained specifically through data/documentation access carries the Art.70 safeguards without requiring the provider to separately assert them.
Provider rights: Providers may designate specific datasets or code components as trade secrets at the time of access, triggering enhanced confidentiality protections and a higher threshold for any subsequent disclosure. This designation should be made in writing at the time the NCA first accesses the material — retroactive designation creates procedural complications.
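Because retroactive designation creates procedural complications, the written designation is worth generating mechanically at the moment of first access. The record structure below is a sketch; the field names and notice format are assumptions, not a prescribed form.

```python
# Illustrative written trade secret designation record (Art.64(5) + Art.70).
# Structure and wording are hypothetical, not an official template.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TradeSecretDesignation:
    system_id: str
    items: list[str]
    designated_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    def as_written_notice(self) -> str:
        # Timestamped notice fixes the designation at first access,
        # avoiding retroactive disputes over protected status.
        lines = [
            f"Trade secret designation - system {self.system_id}",
            f"Designated at: {self.designated_at.isoformat()}",
        ]
        lines += [f"  [TRADE SECRET] {item}" for item in self.items]
        return "\n".join(lines)

designation = TradeSecretDesignation(
    "sys-001", ["inference pipeline source", "feature weighting tables"]
)
print(designation.as_written_notice())
```

The timestamp is the load-bearing field: it proves the designation preceded, rather than followed, any subsequent disclosure dispute.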
Art.64 vs Art.62: Data Access for High-Risk AI vs GPAI Models
Art.64 and Art.62 establish parallel data access regimes for different AI system categories:
| Dimension | Art.64 | Art.62 |
|---|---|---|
| Scope | High-risk AI systems (Chapter III, Annex III) | GPAI models (Chapter V) |
| Access authority | NCAs (national market surveillance authorities) | AI Office (Commission service) |
| Access triggers | NCA investigation of system conformity | AI Office investigation of model obligations |
| Source code access | Conditional — "where necessary" + reasoned request | Within Art.62's general investigation powers |
| Cooperation obligation | Provider + deployer (Art.64(3)-(4)) | GPAI model provider (Art.62(1)) |
| Confidentiality | Art.64(5) + Art.70 | Art.70 |
Dual-scope scenario: A GPAI model integrated into a high-risk AI system for credit scoring (Annex III, category 5b) may trigger both Art.64 (NCA access to credit-scoring system data) and Art.62 (AI Office access to underlying GPAI model). In this scenario:
- NCA investigates the deployed high-risk application
- AI Office investigates the GPAI model's base obligations
- Art.59 AI Board coordination mechanism prevents investigative duplication
- Provider must prepare for simultaneous cooperation obligations to two different regulators
EU-native infrastructure that maintains data within a single legal jurisdiction simplifies this dual-regulator scenario by ensuring both authorities can access data under EU legal framework — avoiding the jurisdictional complications of data distributed across US-headquartered cloud providers.
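The dual-scope decision above reduces to two independent questions: is the system high-risk under Chapter III, and does it embed a GPAI model? The sketch below encodes that decision; it is a deliberate simplification for illustration.

```python
# Sketch of the dual-scope regime determination (Art.64 vs Art.62).
# A simplification: real classification involves Annex III and Chapter V analysis.
def applicable_access_regimes(high_risk: bool, embeds_gpai: bool) -> list[str]:
    regimes = []
    if high_risk:
        regimes.append("Art.64 (NCA - high-risk system)")
    if embeds_gpai:
        regimes.append("Art.62 (AI Office - GPAI model)")
    return regimes

# Credit-scoring system built on a GPAI base: both regimes apply,
# so the provider faces two regulators simultaneously.
print(applicable_access_regimes(high_risk=True, embeds_gpai=True))
```

When both entries appear, the provider should expect the Art.59 coordination mechanism to allocate investigative scope, but must still be ready to cooperate with each authority separately.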
CLOUD Act Implications for Art.64 Data Access
The CLOUD Act (US Clarifying Lawful Overseas Use of Data Act) grants US government authorities the ability to compel US-incorporated cloud providers to produce data held overseas, including within the EU. This creates a structural tension with Art.64 enforcement integrity:
The dual-compellability risk:
| Data location | EU access (Art.64) | US compellability (CLOUD Act) |
|---|---|---|
| Training data on EU-incorporated cloud | Art.64 NCA access ✓ | CLOUD Act risk: NO (EU-incorporated provider not subject) |
| Training data on US-incorporated cloud (EU datacenter) | Art.64 NCA access ✓ | CLOUD Act risk: YES |
| Source code on US-incorporated repository | Art.64 NCA access ✓ | CLOUD Act risk: YES |
| Logs on EU-native infrastructure | Art.64 NCA access ✓ | CLOUD Act risk: NO |
Enforcement credibility: A high-risk AI system whose training data is accessible to US government agencies under CLOUD Act compulsion — potentially before, during, or after an EU NCA investigation — creates uncertainty about data integrity. Evidence obtained by an NCA under Art.64 could in principle be affected by prior US access. EU-incorporated infrastructure removes this exposure by placing the data outside the reach of CLOUD Act orders, which bind providers subject to US jurisdiction.
Art.64 compliance recommendation: Providers targeting EU markets with high-risk AI systems should assess whether their data supply chain — training data storage, model artefact hosting, log retention — is exposed to CLOUD Act compellability. Where it is, migration to EU-incorporated infrastructure eliminates the risk without requiring any change to the AI system itself.
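The recommended supply-chain assessment can be sketched as a scan over each component's hosting provider, flagging those whose provider is US-incorporated. Component names and the country-code convention below are hypothetical.

```python
# Illustrative CLOUD Act exposure scan over a data supply chain.
# Maps component name -> incorporation country of its hosting provider;
# treats "US" incorporation as compellable. A simplification for illustration.
def cloud_act_exposed(components: dict[str, str]) -> list[str]:
    """Return supply-chain components hosted by US-incorporated providers."""
    return [name for name, country in components.items() if country == "US"]

supply_chain = {
    "training_data_store": "DE",
    "model_artefact_registry": "US",
    "log_retention": "FR",
}
print(cloud_act_exposed(supply_chain))  # -> ['model_artefact_registry']
```

Each flagged component is a migration candidate; the AI system itself needs no modification, only its hosting arrangement.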
Python Implementation: DataAccessRequest and ProviderCooperationRecord
```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum
from typing import Optional


class AccessType(str, Enum):
    TRAINING_DATA = "training_data"
    TEST_DATA = "test_data"
    VALIDATION_DATA = "validation_data"
    TECHNICAL_DOCUMENTATION = "technical_documentation"
    LOGS = "logs"
    SOURCE_CODE = "source_code"  # Art.64(2) — conditional
    POST_MARKET_MONITORING = "post_market_monitoring"


class CooperationStatus(str, Enum):
    PENDING = "pending"
    IN_PROGRESS = "in_progress"
    FULFILLED = "fulfilled"
    PARTIALLY_FULFILLED = "partially_fulfilled"
    REFUSED = "refused"  # triggers enforcement escalation


@dataclass
class DataAccessRequest:
    """Represents an NCA Art.64 data/documentation access request."""

    request_id: str
    requesting_authority: str  # NCA name or "AI Office" (Art.62)
    system_id: str             # high-risk AI system identifier
    request_date: date
    access_types: list[AccessType]
    legal_basis: str = "Art.64 Regulation (EU) 2024/1689"
    source_code_reason: Optional[str] = None  # required if SOURCE_CODE requested
    confidentiality_designated_items: list[str] = field(default_factory=list)
    trade_secret_items: list[str] = field(default_factory=list)

    def requires_reasoned_request(self) -> bool:
        """Art.64(2): source code access requires explicit reasoning."""
        return AccessType.SOURCE_CODE in self.access_types

    def validate(self) -> list[str]:
        issues = []
        if AccessType.SOURCE_CODE in self.access_types and not self.source_code_reason:
            issues.append(
                "Art.64(2): source_code_reason required when requesting source code access"
            )
        return issues

    def confidentiality_designation_summary(self) -> str:
        if not self.confidentiality_designated_items and not self.trade_secret_items:
            return "No items designated confidential or trade secret."
        lines = ["Art.64(5) + Art.70 designations:"]
        for item in self.confidentiality_designated_items:
            lines.append(f"  [CONFIDENTIAL] {item}")
        for item in self.trade_secret_items:
            lines.append(f"  [TRADE SECRET] {item}")
        return "\n".join(lines)


@dataclass
class ProviderCooperationRecord:
    """Tracks provider compliance with the Art.64(3) cooperation obligation."""

    record_id: str
    request: DataAccessRequest
    provider_entity: str
    authorised_representative: Optional[str] = None  # Art.22 representative
    response_date: Optional[date] = None
    status: CooperationStatus = CooperationStatus.PENDING
    access_provided: list[AccessType] = field(default_factory=list)
    access_refused: list[AccessType] = field(default_factory=list)
    refusal_reasons: dict[AccessType, str] = field(default_factory=dict)
    notes: str = ""

    def is_compliant(self) -> bool:
        """Full compliance: all requested access types fulfilled without refusal."""
        return (
            self.status == CooperationStatus.FULFILLED
            and len(self.access_refused) == 0
        )

    def enforcement_risk(self) -> str:
        if self.status == CooperationStatus.REFUSED:
            return "HIGH — refusal of Art.64 access is a standalone violation"
        if self.access_refused:
            return f"MEDIUM — partial refusal: {[a.value for a in self.access_refused]}"
        if self.status == CooperationStatus.PENDING and self.request.request_date:
            days_open = (date.today() - self.request.request_date).days
            if days_open > 30:
                return f"MEDIUM — request open {days_open} days without response"
        return "LOW"

    def cooperation_report(self) -> str:
        lines = [
            f"Art.64 Cooperation Record — {self.record_id}",
            f"Authority: {self.request.requesting_authority}",
            f"System: {self.request.system_id}",
            f"Request date: {self.request.request_date}",
            f"Status: {self.status.value}",
            f"Enforcement risk: {self.enforcement_risk()}",
        ]
        if self.access_provided:
            lines.append(f"Access provided: {[a.value for a in self.access_provided]}")
        if self.access_refused:
            lines.append(f"Access refused: {[a.value for a in self.access_refused]}")
        return "\n".join(lines)
```
Art.64 Compliance Readiness Checklist
| # | Item | Who | Timing |
|---|---|---|---|
| 1 | Inventory all training, test, and validation datasets by high-risk system — confirm retrievability within 5 business days | Provider | Before market placement |
| 2 | Confirm technical documentation (Art.11 + Annex IV) is complete and retrievable in the deployment Member State's language | Provider | Before market placement |
| 3 | Designate a technical point of contact for NCA Art.64 access requests — include contact in conformity declaration | Provider | Before market placement |
| 4 | Assess source code CLOUD Act exposure — if source code is on US-incorporated repositories, evaluate EU-native migration | Provider | Before market placement |
| 5 | Confirm Art.22 authorised representative has contractual authority to ensure provider cooperation with NCA data access | Provider (third-country) | Before EU market access |
| 6 | Verify deployment service agreements include provisions for NCA investigation access to logs and deployment-environment data | Deployer | Before deployment |
| 7 | Implement trade secret designation process: prepare written designation templates for source code, proprietary datasets | Provider | Before first potential investigation |
| 8 | Confirm post-market monitoring data (Art.72) is retained for minimum period consistent with NCA investigation timelines | Deployer | Ongoing |
| 9 | Review NCA investigation notification procedures — confirm internal escalation path from notification to Art.64 cooperation team | Provider + deployer | Before market placement |
| 10 | Conduct annual Art.64 readiness drill: simulate NCA data request, time retrieval of all Art.64(1) materials, identify gaps | Provider | Annually |
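Checklist items 1 and 10 both turn on timed retrieval, so a readiness drill can score itself against the five-business-day target. The counter below treats business days as Monday to Friday and ignores public holidays, which is a simplification.

```python
# Sketch for the readiness drill: time a simulated retrieval against the
# five-business-day target in checklist item 1. Mon-Fri only; holidays ignored.
from datetime import date, timedelta

def business_days_between(start: date, end: date) -> int:
    days, d = 0, start
    while d < end:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Mon=0 .. Fri=4
            days += 1
    return days

def within_retrieval_target(requested: date, delivered: date,
                            target_days: int = 5) -> bool:
    return business_days_between(requested, delivered) <= target_days

# Monday request, same-week Friday delivery: 4 business days, within target.
print(within_retrieval_target(date(2025, 9, 1), date(2025, 9, 5)))   # -> True
# Delivery the following Friday: 9 business days, target missed.
print(within_retrieval_target(date(2025, 9, 1), date(2025, 9, 12)))  # -> False
```

Logging each drill's elapsed business days per Art.64(1) category gives a concrete gap list for item 10's annual review.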
Series Context: Chapter IX Governance Framework
| Article | Coverage | Post |
|---|---|---|
| Art.57 | National Competent Authorities — designation, tasks, independence | Art.57 guide |
| Art.58 | NCA enforcement powers — investigation, access, corrective measures | Art.58 guide |
| Art.59 | AI Board — composition, independence, NCA coordination | Art.59 guide |
| Art.60 | EU AI database — public registry, EUID governance, Commission management | Art.60 guide |
| Art.61 | Scientific Panel — independent experts, model evaluation, AI Office advisory | Art.61 guide |
| Art.62 | AI Office enforcement powers — corrective measures, market withdrawal, emergency action | Art.62 guide |
| Art.63 | Advisory Forum — multi-stakeholder consultation, composition, tasks, CoP input | Art.63 guide |
| Art.64 | Access to data and documentation — market surveillance authority enforcement powers | This guide |
| Art.65 | Reporting of serious incidents and malfunctioning of high-risk AI systems | Art.65 guide |
EU AI Act Art.64 analysis based on Regulation (EU) 2024/1689 as published in the Official Journal of the European Union. Applicable from 2 August 2025 per Art.113(3). NCA-specific procedures for exercising Art.64 access powers will be established through national implementing measures; Member States may impose additional procedural requirements consistent with the Regulation. This guide reflects the text of the Regulation as enacted.