EU AI Act Art.64-70: The EU AI Office & AI Governance Structure — Developer Guide (2026)
EU AI Act Articles 64-70 define who enforces the AI Act, how enforcement works, and what rights developers have when regulators come knocking. These governance articles are not abstract constitutional provisions — they directly determine which authority can request your technical documentation, which expert panel can classify your model as systemic risk, and what confidentiality protections apply to your source code and training data when submitted to regulators.
This guide covers the full governance chapter: the AI Office (Art.64-65), the Scientific Panel of Independent Experts (Art.66), the Advisory Forum (Art.67), National Competent Authorities (Art.68), Market Surveillance Authorities (Art.69), and Confidentiality (Art.70). For every article, we cover the developer-relevant mechanics, compliance obligations, and infrastructure jurisdiction implications under the CLOUD Act.
Art.64-70 became applicable on 2 August 2025 as part of the EU AI Act governance framework (Regulation (EU) 2024/1689). The AI Office was pre-established by Commission Decision C(2024) 390 of 24 January 2024 — it has been operational since before the Act formally took effect.
Why Governance Articles Matter for Developers
Most developer-facing EU AI Act guides focus on obligations (Art.9, Art.10, Art.12, Art.26) while skipping governance. This is a mistake. Art.64-70 determines:
- Who can investigate you — Art.65 AI Office tasks vs. Art.69 national MSA powers
- What evidence they can compel — training data, source code, model weights, audit records
- Who classifies your model as systemic risk — the Art.66 Scientific Panel, not just the Commission
- What your rights are during investigation — Art.70 confidentiality for trade secrets
- Which country's authority leads — your development jurisdiction vs. your deployment market
- How CLOUD Act interacts — US-hosted investigation records are dual-compellable
Understanding governance is understanding your adversarial environment. This is not hypothetical: the AI Office has already initiated qualification procedures for GPAI models and issued formal investigative requests to GPAI providers.
Art.64-70 at a Glance
| Article | Subject | Developer Relevance |
|---|---|---|
| Art.64 | EU AI Office establishment | Primary regulator for GPAI models — your main counterpart if you build GPAI products |
| Art.65 | AI Office tasks | Sets the scope of investigation + monitoring powers |
| Art.66 | Scientific Panel of Independent Experts | Can classify your model as systemic risk; can request your technical data |
| Art.67 | Advisory Forum | Where industry shapes CoP and implementation — engagement opportunity |
| Art.68 | National Competent Authorities (NCAs) | Your country-specific regulator for all non-GPAI AI systems |
| Art.69 | Market Surveillance Authorities (MSAs) | Enforcement for high-risk AI systems — inspection powers, market withdrawals |
| Art.70 | Confidentiality | Trade secret protection when submitting documentation |
Art.64: The EU AI Office
What Is the AI Office?
Art.64 establishes the EU AI Office within the European Commission as the primary EU-level body responsible for GPAI model oversight and cross-border AI Act enforcement coordination. The AI Office:
- Is embedded within the Commission's DG CNECT structure
- Has Union-wide jurisdiction for GPAI model matters (not limited to one member state)
- Operates with functional independence — decisions on GPAI qualification and investigation are not subject to day-to-day political instructions
- Publishes an annual report on its activities
Why "Within the Commission" Matters
The AI Office's Commission embedding has two practical implications:
1. GPAI regulation is centralized. Unlike high-risk AI systems (which are regulated by 27 member state MSAs), GPAI model oversight goes through one body. If you are a GPAI provider deploying across the EU, you have one primary regulatory counterpart: the AI Office in Brussels.
2. Commission investigative powers apply. The AI Office can invoke Commission powers under Art.65(4) including access to premises, documents, and personnel. This is a stronger investigative toolkit than most national MSAs have for domestic enforcement.
AI Office for Non-GPAI Developers
If you build high-risk AI systems (not GPAI models), your primary regulators are the national MSAs (Art.69), not the AI Office directly. The AI Office coordinates with national authorities but does not conduct day-to-day high-risk AI enforcement. However, if your high-risk AI system uses a GPAI model as a component, the Art.65 monitoring of that upstream GPAI provider is directly relevant to your supply chain compliance.
Art.65: Tasks of the AI Office
Art.65 defines what the AI Office actually does. For GPAI providers, these are the specific AI Office activities that create compliance obligations:
Core GPAI Oversight Tasks (Art.65(1))
Task 1: GPAI Model Monitoring
The AI Office continuously monitors GPAI models for compliance with Chapter V (Art.51-56). This includes:
- Reviewing technical documentation (Annex XI — model architecture, training data, capabilities, benchmarks)
- Assessing whether the 10^25 FLOP systemic risk threshold applies (Art.51(1)(a)), as in the estimation sketch after this list
- Monitoring known GPAI providers for capability updates that may cross the systemic risk threshold
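To make the threshold check concrete, here is a minimal estimation sketch. It relies on the widely used 6ND heuristic (training FLOPs roughly 6 x parameter count x training tokens), which is an industry approximation rather than the Act's prescribed measurement method; all names and example figures are illustrative.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # Art.51(1)(a)

def estimate_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Rough dense-transformer training compute estimate (6ND heuristic)."""
    return 6.0 * n_parameters * n_training_tokens

def crosses_systemic_risk_threshold(n_parameters: float, n_training_tokens: float) -> bool:
    """Compare the heuristic estimate against the Art.51(1)(a) threshold."""
    return estimate_training_flops(n_parameters, n_training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# Example: a 70B-parameter model trained on 15T tokens:
# 6 * 7e10 * 1.5e13 = 6.3e24 FLOPs, below the 1e25 threshold but close.
print(crosses_systemic_risk_threshold(7e10, 1.5e13))  # False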
Task 2: Code of Practice Facilitation (Art.56 Link)
The AI Office facilitates the development and implementation of Codes of Practice (Art.56). Specifically:
- Convenes stakeholder working groups (GPAI providers, downstream developers, civil society)
- Sets the agenda and timeline for CoP drafting
- Assesses CoP adequacy (Art.56(4)) → adequacy finding creates conformity presumption
- Monitors CoP adherence by signatory providers
Task 3: Adversarial Testing Coordination (Art.53(1)(a) Link)
For GPAI models with systemic risk, the AI Office coordinates adversarial testing. It can:
- Select qualified independent experts to conduct testing
- Define testing scope (CBRN/cyber/manipulation/autonomy domains)
- Review and publish testing methodologies (subject to Art.70 confidentiality)
- Use testing results as input for systemic risk designation under Art.51(2)
Task 4: Guidance and Recommendations
The AI Office issues:
- Interpretative guidelines on Art.51 systemic risk threshold
- Technical guidance on Annex XI documentation requirements
- Best-practice recommendations on Art.53 adversarial testing methodologies
- Implementation guidance for CoP development under Art.56
Task 5: Annual Reporting
The AI Office publishes an annual report covering:
- State of GPAI model compliance
- CoP development status and adequacy findings
- Systemic risk classification decisions
- Investigative activities and outcomes
Investigative Powers (Art.65(4))
This is the provision that matters most in an adversarial context. The AI Office can:
- Request technical documentation from GPAI providers without needing a prior complaint
- Access premises and personnel (on-site inspections) with advance notice
- Issue formal decisions requiring information provision within specified deadlines
- Impose penalties for non-cooperation (Art.101 fines: up to €15M or 3% of global turnover for GPAI violations)
- Order interim measures when systemic risk is imminent
For developers: if the AI Office issues a formal information request under Art.65(4), you have limited time to respond (typically 15-30 days). Having your Annex XI documentation in order before a request arrives is your only viable defense — scrambling to compile documentation after receiving an Art.65(4) request is a compliance failure mode.
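A minimal readiness sketch along those lines; the section labels below are paraphrased stand-ins for Annex XI headings, not the regulation's literal wording:

# Paraphrased Annex XI section labels (illustrative, not the official headings)
REQUIRED_ANNEX_XI_SECTIONS = [
    "model_architecture",
    "training_data_summary",
    "capability_evaluations",
    "benchmark_results",
    "energy_and_compute",
]

def annex_xi_gaps(available_sections: set[str]) -> list[str]:
    """Sections still missing, i.e. what to fix before a request arrives."""
    return [s for s in REQUIRED_ANNEX_XI_SECTIONS if s not in available_sections]

print(annex_xi_gaps({"model_architecture", "benchmark_results"}))
# ['training_data_summary', 'capability_evaluations', 'energy_and_compute']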
AI Office Powers vs. National MSA Powers
| Power | AI Office (Art.65) | National MSA (Art.69) |
|---|---|---|
| Jurisdiction | GPAI models (EU-wide) | High-risk AI in territory |
| Investigation trigger | Own initiative or complaint | Complaint or market surveillance |
| Document access | Yes (Art.65(4)) | Yes (Art.69(3)) |
| On-site inspection | Yes | Yes |
| Interim measures | Yes (imminent systemic risk) | Yes (serious risk) |
| Penalty authority | Art.101 (via Commission) | National fine proceedings |
| Cross-border coordination | Direct | Through AI Board |
Art.66: Scientific Panel of Independent Experts
What the Scientific Panel Does
The Scientific Panel of Independent Experts is the AI Office's technical advisory body with a specific and critical function: it provides independent expert opinions on whether a GPAI model qualifies as posing systemic risk under Art.51(2).
Art.66 creates a panel with:
- A minimum of two experts per EU member state (at least 54 experts across the 27 member states)
- Strict independence requirements — no financial or organizational ties to regulated entities
- Expertise covering: ML architecture, large-scale model evaluation, adversarial robustness, CBRN risk assessment, cybersecurity
- Fixed terms to prevent political rotation
The Art.51(2) Designation Pathway
The Scientific Panel's most consequential power is its role in Art.51(2) systemic risk designation:
Art.51(2) Designation Process:
1. GPAI provider triggers Art.51(1)(a) threshold: ≥10^25 FLOPs training compute
OR
AI Office identifies potential systemic risk through Art.65 monitoring
2. Scientific Panel (Art.66) conducts independent assessment:
- Requests Annex XI documentation from provider
- Reviews capability evaluations (benchmarks + adversarial tests)
- Assesses capability criteria: CBRN, cybersecurity, manipulation, autonomous replication
- Provides written opinion to Commission
3. Commission issues Art.51(2) designation decision:
- Based on Scientific Panel opinion
- Provider gets 15-day advance notice + right to respond
- Decision published in Official Journal
4. Provider now subject to Art.52-56 Chapter V obligations
Scientific Panel Data Access
For GPAI providers: The Scientific Panel can directly request:
- Full Annex XI technical documentation
- Capability evaluation results (including internal benchmarks not publicly disclosed)
- Adversarial testing reports
- Model architecture details (confidential under Art.70)
- Training data summaries
These requests bypass normal national authority channels — the Scientific Panel is an EU-level body that can request data directly from GPAI providers across member states.
The Voluntary Notification Option
Art.66 includes a voluntary notification mechanism: GPAI providers who believe their model may be approaching or exceeding the systemic risk threshold can voluntarily notify the AI Office before reaching the threshold. Benefits:
- Pre-designation dialogue with the Scientific Panel
- Ability to participate in CoP development from the start (Art.56(5) voluntary early adoption)
- Regulatory good-faith credit in potential enforcement proceedings
- Advance planning for Art.52-55 compliance obligations
For developers building large-scale foundation models: voluntary notification is strategically rational if you believe your next training run will cross 10^25 FLOPs.
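A sketch of that decision logic; the 80% margin is an internal policy assumption chosen for illustration, not anything the Act prescribes:

def should_voluntarily_notify(projected_run_flops: float, margin: float = 0.8) -> bool:
    """Flag a planned training run for voluntary AI Office notification once
    projected compute comes within `margin` of the 1e25 FLOP threshold.
    The 0.8 default is an illustrative internal policy choice, not a legal rule."""
    return projected_run_flops >= margin * 1e25

print(should_voluntarily_notify(9.0e24))  # True: within 80% of the threshold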
Art.67: Advisory Forum
Structure and Composition
The Advisory Forum provides stakeholder input into EU AI Act implementation, including:
- Rotating membership: 30 representatives from industry (AI providers and deployers), SMEs, civil society organizations, academic institutions, and European standards bodies (CEN-CENELEC)
- 2-year terms with staggered renewals
- Balanced representation: proportional mix of providers, deployers, and affected-sector groups
- Geographic balance across member states
Advisory Forum Tasks
- Implementation guidance: Provides non-binding opinions on EU AI Act interpretation for the Commission and AI Office
- Code of Practice input: Contributes to CoP content development under Art.56 — the Advisory Forum is one of the stakeholder channels for industry participation in CoP drafting
- Annual report input: Provides observations that inform the AI Office annual report (Art.65)
- Technical standards liaison: Coordinates with CEN-CENELEC and ISO/IEC JTC 1/SC 42 on AI standards referenced in Art.40
Developer Engagement Opportunity
The Advisory Forum is the formal channel for industry to shape AI regulation. For developers building GPAI-related products or high-risk AI systems, the Advisory Forum is worth monitoring because:
- CoP content (Art.56) is influenced by Advisory Forum input — your implementation costs depend on what gets into the CoP
- Art.40 harmonized standards that create conformity presumptions are influenced by standards body members on the Forum
- Interpretation guidance that emerges from Forum proceedings can clarify compliance pathways before enforcement begins
Practical action: Follow EU AI Office publications on Advisory Forum proceedings. Register for public consultation periods on CoP development. Industry associations (e.g., CCIA, DIGITALEUROPE) participate in the Forum and often publish consultation responses.
Art.68: National Competent Authorities (NCAs) and Single Points of Contact
The NCA Designation Requirement
Art.68 requires each EU member state to designate:
- One or more National Competent Authorities (NCAs) with overall responsibility for EU AI Act implementation at national level
- A single point of contact for communication with the Commission and AI Office
- Sufficient technical expertise, staff and resources — the NCA must be adequately resourced
Current NCA Landscape (as of 2026)
| Member State | Designated NCA | Notes |
|---|---|---|
| Germany 🇩🇪 | BNetzA / BMDV coordination | Multiple sector-specific bodies |
| France 🇫🇷 | ANSSI + Inria coordination | Cyber + technical expertise |
| Netherlands 🇳🇱 | ACM (Autoriteit Consument & Markt) | Competition authority expanded |
| Sweden 🇸🇪 | IMY (Integritetsskyddsmyndigheten) | Data protection overlap |
| Spain 🇪🇸 | AESIA (Agencia Española de Supervisión de IA) | Dedicated AI authority |
| Italy 🇮🇹 | AgID coordination | Government digital authority |
| Poland 🇵🇱 | UODO + ministerial coordination | DP authority + ministry |
Spain's AESIA is notable as the only EU member state to have established a dedicated AI authority before the August 2025 application date. Most member states have designated existing authorities (data protection authorities, competition authorities, sector-specific regulators) as NCAs.
Implications for Developers
If you operate in one member state: your NCA is your primary national regulatory contact for:
- Receiving and forwarding EU AI Act guidance from the Commission
- Coordinating with the AI Office on national enforcement matters
- Administering any national AI-specific legislation layered on top of the EU Act
If you operate cross-border: the NCA of your establishment location (Art.2 jurisdiction rules) is typically your lead authority for national coordination purposes, but the AI Office handles all GPAI matters centrally regardless of establishment location.
NCA as coordination hub: NCAs coordinate between the AI Office (for GPAI matters) and national MSAs (for high-risk AI matters). They are the interface, not the primary investigator for either regime.
Art.69: Market Surveillance Authorities (MSAs)
MSA Powers — The Enforcement Arm for High-Risk AI
Art.69 establishes the enforcement framework for high-risk AI systems (Annex III). MSAs are the national authorities responsible for market surveillance — the "regulatory police" that conduct inspections, request documentation, and impose corrective measures.
Core MSA powers under Art.69:
Power 1: Document and Data Access
MSAs can formally request the following (a readiness sketch follows this list):
- Full Annex IV technical documentation
- Access to training data (Art.10 datasets)
- Source code on a reasoned request (Art.69(3)) — this is rare but legally available
- Post-market monitoring data (Art.30 PMM plans and records)
- Quality management system documentation (Art.17 QMS)
- Conformity assessment records (Art.31 + Annex VI/VII)
- All logs generated under Art.12
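One way to stay ahead of such requests is to map each request category to the internal evidence store that answers it, so gaps surface before an MSA letter does. A minimal sketch; every store path here is hypothetical:

# Hypothetical mapping: MSA request category -> internal evidence store
MSA_REQUEST_EVIDENCE = {
    "annex_iv_docs": "docs/annex-iv/",
    "training_data_art10": "datasets/registry/",
    "source_code_art69_3": "git (EU-jurisdiction remote)",
    "pmm_records_art30": "monitoring/pmm/",
    "qms_art17": "qms/",
    "conformity_art31": "conformity/",
    "logs_art12": "audit-logs/",
}

def unreachable_evidence(reachable_stores: set[str]) -> list[str]:
    """Request categories whose evidence store is not currently retrievable."""
    return [req for req, store in MSA_REQUEST_EVIDENCE.items() if store not in reachable_stores]

print(unreachable_evidence({"docs/annex-iv/", "audit-logs/"}))
# every category except 'annex_iv_docs' and 'logs_art12'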
Power 2: On-Site Inspection
MSAs can conduct inspections with advance notice:
- Access to development facilities
- Access to testing environments
- Interviews with technical personnel
- Review of real-time system operation
Power 3: Serious Risk Interim Measures
If an MSA identifies a serious risk, it can:
- Order immediate operational suspension
- Order product recall from market
- Prohibit making the system available to new users
- Impose access restrictions while investigation proceeds
Power 4: Cross-Border Escalation
MSAs participate in the AI Board cross-border coordination mechanism. If a high-risk AI system operates in multiple member states:
- The MSA in the establishment country leads
- Other MSAs can request investigation
- The AI Office coordinates when GPAI components are involved
The "Source Code" Request Provision
Art.69(3) is the provision that gets developers' attention: MSAs can request source code access on a reasoned request. This provision has specific constraints:
- Reasoned request required — the MSA must document why source code access is necessary (not routine)
- Proportionality constraint — less intrusive means (documentation, logs) must be insufficient
- Confidentiality applies — Art.70 protections apply to source code provided to MSAs
- No public disclosure — source code provided to MSAs cannot be shared beyond the investigation
In practice, source code requests are reserved for cases where the MSA suspects conformity assessment fraud or where documentation alone cannot establish compliance. For standard market surveillance, documentation (Annex IV technical docs + Art.12 logs + Art.30 PMM records) is the normal evidentiary basis.
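An internal triage step can mirror those constraints before anything leaves the building. A minimal sketch; it produces discussion points for legal counsel, not a legal determination:

def triage_source_code_request(reasoned: bool,
                               less_intrusive_exhausted: bool,
                               art70_designated: bool) -> list[str]:
    """Pre-submission checks mirroring the Art.69(3) constraints above."""
    objections = []
    if not reasoned:
        objections.append("Request lacks documented reasoning (Art.69(3) requires a reasoned request)")
    if not less_intrusive_exhausted:
        objections.append("Not shown that documentation/logs are insufficient (proportionality)")
    if not art70_designated:
        objections.append("Apply Art.70 confidentiality designation before submission")
    return objections

print(triage_source_code_request(reasoned=True, less_intrusive_exhausted=False, art70_designated=True))
# ['Not shown that documentation/logs are insufficient (proportionality)']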
MSA Penalty Authority
MSAs administer national fine proceedings under the Art.99-101 framework:
| Violation Category | Maximum Fine |
|---|---|
| Art.5 prohibited practices | €35M or 7% of global turnover |
| High-risk AI Art.9-15 violations | €15M or 3% of global turnover |
| Incorrect information to authorities | €7.5M or 1.5% of global turnover |
| GPAI / Chapter V violations | €15M or 3% (Art.101) |
The actual fine is imposed through national MSA proceedings, not directly by the AI Office (except for GPAI matters, where the Commission can impose fines directly under Art.101).
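A quick exposure calculator matching the table, assuming the Act's general "whichever is higher" rule between the fixed cap and the turnover percentage (SME-specific modulations are ignored in this sketch):

def max_fine_exposure(global_turnover_eur: float, category: str) -> float:
    """Upper-bound fine exposure: the greater of the fixed cap and the
    turnover percentage ('whichever is higher'; SME rules not modeled)."""
    caps = {  # category -> (fixed cap in EUR, share of global turnover)
        "art5_prohibited": (35e6, 0.07),
        "high_risk_art9_15": (15e6, 0.03),
        "incorrect_information": (7.5e6, 0.015),
        "gpai_chapter_v": (15e6, 0.03),
    }
    fixed_cap, pct = caps[category]
    return max(fixed_cap, pct * global_turnover_eur)

print(max_fine_exposure(2e9, "high_risk_art9_15"))  # 60000000.0: 3% of 2B exceeds the 15M cap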
Art.70: Confidentiality — Your Rights When Submitting Technical Docs
The Confidentiality Framework
Art.70 establishes that all parties handling EU AI Act information — the Commission, AI Office, Scientific Panel, NCAs, MSAs — must maintain confidentiality for:
- Trade secrets and commercially sensitive information
- Technical documentation containing proprietary model architecture
- Training dataset composition details
- Source code provided under Art.69(3)
- Internal test results and benchmark data
- Business strategies contained in QMS documentation
What Art.70 Protects
Protected under Art.70:
- Annex XI/XII technical documentation submitted by GPAI providers
- Annex IV technical documentation submitted by high-risk AI providers
- Source code provided to MSAs under Art.69(3)
- Internal adversarial testing reports (Art.53(1)(a))
- Model weight configurations submitted for systemic risk assessment
- Business-sensitive information in QMS documentation (Art.17)
NOT fully protected under Art.70:
- Information that must be publicly accessible under Art.32 (EU AI Database registration fields — limited set)
- Essential safety information that MSAs must share under Art.21(4) (minimum necessary for public safety)
- General compliance status determinations (whether a system is compliant or non-compliant)
- Information the company has already publicly disclosed
Art.70 in Practice: Confidential Designation
When submitting technical documentation to any EU AI Act authority, you should:
- Designate confidential sections explicitly: Mark trade secrets with "Confidential under Art.70 EU AI Act (Regulation 2024/1689)"
- Separate public and private elements: Create a two-part documentation structure — a public summary (for Art.32 EU Database) and a confidential annex (for authority review); a minimal sketch follows this list
- Document your commercial sensitivity rationale: A brief memo explaining WHY each section is commercially sensitive strengthens Art.70 protection if challenged
- Request written confidentiality confirmation: When submitting to an MSA or the AI Office, request explicit acknowledgment that Art.70 applies
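A minimal sketch of the two-part structure from step 2, with illustrative field names:

from dataclasses import dataclass, field

@dataclass
class DocumentationPackage:
    """Two-part submission: public summary for the Art.32 EU Database,
    confidential annex under Art.70. Field names are illustrative."""
    public_summary: dict[str, str] = field(default_factory=dict)        # Art.32 registration fields
    confidential_annex: dict[str, str] = field(default_factory=dict)    # Art.70-designated content
    sensitivity_rationale: dict[str, str] = field(default_factory=dict)  # per-section trade secret memo

    def add_confidential(self, section: str, content: str, rationale: str) -> None:
        """Every confidential section gets a rationale, per step 3 above."""
        self.confidential_annex[section] = content
        self.sensitivity_rationale[section] = rationale

    def ready_to_submit(self) -> bool:
        """No confidential section without a documented sensitivity rationale."""
        return all(s in self.sensitivity_rationale for s in self.confidential_annex)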
Critical Limitation: Art.70 Does NOT Address CLOUD Act
Art.70 protects against EU authority disclosure — it does not protect against US law compulsion. If your Art.70-protected documentation is stored on US-controlled infrastructure (AWS/Azure/GCP), the US CLOUD Act can compel production to US federal law enforcement. Art.70 protections exist in EU law; the CLOUD Act operates under US law and is not bound by EU confidentiality provisions.
This creates a specific risk scenario:
- You submit Annex XI documentation to the AI Office (protected under Art.70 in EU proceedings)
- If that documentation is stored on US servers, a US grand jury subpoena can compel the same documents independently of EU proceedings
- The Art.70 protection from EU disclosure does not prevent parallel US compulsion
EU-native infrastructure eliminates this exposure. If your technical documentation, source code, and investigation correspondence are hosted on EU-sovereignty infrastructure (no US parent company with CLOUD Act reach), Art.70 protection is the only applicable access regime.
Developer Impact Matrix: Art.64-70 by Role
| Role | Primary Contact | Key Provisions | Documentation Required |
|---|---|---|---|
| GPAI Provider (non-systemic) | AI Office (Art.64) | Art.64-65, Art.70 | Annex XI (voluntary baseline) |
| GPAI Provider (systemic risk) | AI Office + Scientific Panel | Art.64-66, Art.70 | Annex XI+XII (mandatory, Art.52-55) |
| High-Risk AI Provider | National MSA (Art.69) | Art.68-70 | Annex IV, Art.12 logs, Art.30 PMM |
| High-Risk AI Deployer | National MSA (Art.69) | Art.68-70 | Art.26 monitoring records |
| General-Purpose AI Deployer | Depends on use case | Art.68, Art.70 | Context-dependent (Art.6(3) self-classification) |
| Foundation Model Fine-Tuner | AI Office or MSA | Art.64-65, Art.68-69 | Depends on output classification |
CLOUD Act × Art.64-70: The Dual Jurisdiction Problem
Art.64-70 creates a detailed governance framework for EU-side access to AI documentation. But it operates in parallel with US CLOUD Act obligations for providers using US infrastructure. The intersection creates dual-compellability risks:
Record Type → Dual Compellability Analysis
| Record Type | AI Act Authority Access | CLOUD Act Exposure | Risk Level |
|---|---|---|---|
| Annex XI GPAI technical docs | AI Office (Art.65) + Scientific Panel (Art.66) | YES — if on US infra | HIGH |
| Source code (Art.69(3) production) | National MSA investigation | YES — if on US infra | HIGH |
| Adversarial testing reports (Art.53) | AI Office + Scientific Panel | YES — if on US infra | HIGH |
| Correspondence with AI Office | Art.70 protected (EU side) | YES — if on US email | MEDIUM |
| Art.12 audit logs (high-risk AI) | National MSA (Art.69) | YES — if on US infra | MEDIUM |
| Art.30 PMM records | National MSA (Art.69) | YES — if on US infra | MEDIUM |
| QMS documentation (Art.17) | National MSA (Art.69) | YES — if on US infra | MEDIUM |
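The table reduces to a simple rule, sketched below with illustrative record-type keys: EU AI Act authorities can reach these records either way; US infrastructure adds the second, parallel compulsion track.

def dual_compellability(record_type: str, on_us_infrastructure: bool) -> str:
    """Risk rating mirroring the matrix above (keys are illustrative)."""
    high_sensitivity = {"annex_xi_docs", "source_code_art69_3", "adversarial_testing_art53"}
    if not on_us_infrastructure:
        return "SINGLE-REGIME (EU AI Act access only)"
    return "HIGH (dual-compellable)" if record_type in high_sensitivity else "MEDIUM (dual-compellable)"

print(dual_compellability("annex_xi_docs", on_us_infrastructure=True))   # HIGH (dual-compellable)
print(dual_compellability("annex_xi_docs", on_us_infrastructure=False))  # SINGLE-REGIME (EU AI Act access only)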
The Specific GPAI Problem
GPAI providers with systemic risk are required to maintain extensive technical documentation (Annex XI/XII) that includes model weights references, capability evaluations, adversarial testing results, and infrastructure details. This documentation:
- Is submitted to the AI Office (Art.65) under Art.70 confidentiality
- Is reviewed by the Scientific Panel (Art.66) under Art.70 confidentiality
- If stored on AWS/Azure/GCP, is CLOUD Act accessible by US authorities independently
A US authority investigating a GPAI provider's model for national security reasons could compel the exact same Annex XI documentation that the EU AI Office holds under Art.70 protection — through the CLOUD Act rather than EU channels.
EU-Native Infrastructure as Single-Regime Defense
EU-native PaaS infrastructure eliminates CLOUD Act exposure:
- Documentation storage: Technical docs, model cards, capability evaluations → EU-only server
- Investigation correspondence: AI Office/Scientific Panel submissions → EU-hosted email + document systems
- Source code repositories: CI/CD pipelines → EU-jurisdiction git infrastructure
- Audit logs: Art.12 event logs → EU storage with no US-law reachability
With EU-native hosting, Art.70 is the only applicable access regime for your submitted documentation. There is no parallel CLOUD Act track because there is no US infrastructure to compel.
Python Implementation
AIOfficeInvestigationRecord
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from enum import Enum

class InvestigationStatus(Enum):
    RECEIVED = "received"
    UNDER_REVIEW = "under_review"
    RESPONDED = "responded"
    ESCALATED = "escalated"
    CLOSED = "closed"

class RequestingAuthority(Enum):
    AI_OFFICE = "ai_office"
    SCIENTIFIC_PANEL = "scientific_panel"
    NATIONAL_MSA = "national_msa"
    NCA = "national_competent_authority"
    ADVISORY_FORUM = "advisory_forum"  # rare, for consultation purposes

@dataclass
class AIOfficeInvestigationRecord:
    """
    Tracks regulatory investigation requests under EU AI Act Art.64-70.
    Art.70 confidentiality applies to all submitted documentation.
    """
    record_id: str
    received_date: datetime
    requesting_authority: RequestingAuthority
    authority_reference: str  # official case/reference number
    request_type: str  # "annex_xi_docs", "source_code_art69_3", "adversarial_test_results", etc.
    response_deadline_days: int  # typically 15-30 days
    confidentiality_designation: bool = True  # Art.70 applies by default
    art70_designation_memo: str = ""
    submitted_documents: list[str] = field(default_factory=list)
    status: InvestigationStatus = InvestigationStatus.RECEIVED
    legal_counsel_notified: bool = False
    cloud_act_risk_assessed: bool = False

    @property
    def deadline(self) -> datetime:
        return self.received_date + timedelta(days=self.response_deadline_days)

    @property
    def days_remaining(self) -> int:
        return (self.deadline - datetime.utcnow()).days

    @property
    def is_urgent(self) -> bool:
        return self.days_remaining <= 5

    @property
    def requires_source_code(self) -> bool:
        return "source_code" in self.request_type

    def art70_submission_header(self) -> str:
        """Generate confidentiality header for document submissions."""
        return (
            f"CONFIDENTIAL — Art.70 EU AI Act (Regulation 2024/1689)\n"
            f"Reference: {self.authority_reference}\n"
            f"Submitted to: {self.requesting_authority.value.replace('_', ' ').title()}\n"
            f"Date: {datetime.utcnow().strftime('%Y-%m-%d')}\n"
            f"Submitted by: [Provider Name]\n\n"
            f"This document contains commercially sensitive technical information and "
            f"trade secrets submitted pursuant to Art.70 of Regulation (EU) 2024/1689 "
            f"(EU AI Act). The receiving authority is bound by Art.70 confidentiality "
            f"obligations. Disclosure beyond the investigation purpose is prohibited.\n"
        )

    def compliance_status(self) -> dict:
        issues = []
        if not self.legal_counsel_notified:
            issues.append("Legal counsel not yet notified of investigation request")
        if not self.cloud_act_risk_assessed:
            issues.append("CLOUD Act risk assessment pending — check if docs on US infra")
        if self.requires_source_code and not self.confidentiality_designation:
            issues.append("Source code request without Art.70 designation — add before submission")
        if self.is_urgent and self.status == InvestigationStatus.RECEIVED:
            issues.append(f"URGENT: {self.days_remaining} days remaining, response not started")
        return {
            "record_id": self.record_id,
            "authority": self.requesting_authority.value,
            "days_remaining": self.days_remaining,
            "status": self.status.value,
            "is_urgent": self.is_urgent,
            "compliance_issues": issues,
            "ready_to_submit": len(issues) == 0 and len(self.submitted_documents) > 0,
        }
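A brief usage sketch (the reference number and file name are invented):

# Log an incoming Art.65(4) request and check response readiness.
incoming = AIOfficeInvestigationRecord(
    record_id="INV-2026-001",
    received_date=datetime.utcnow(),
    requesting_authority=RequestingAuthority.AI_OFFICE,
    authority_reference="AIO/2026/0042",  # invented reference
    request_type="annex_xi_docs",
    response_deadline_days=30,
)
incoming.legal_counsel_notified = True
incoming.cloud_act_risk_assessed = True
incoming.submitted_documents.append("annex_xi_technical_documentation.pdf")
print(incoming.compliance_status()["ready_to_submit"])  # True once all issues are cleared
print(incoming.art70_submission_header())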
NationalCompetentAuthorityRegistry
from dataclasses import dataclass

@dataclass
class NCAEntry:
    member_state: str
    country_code: str
    authority_name: str
    authority_type: str  # "dedicated_ai", "dpa", "competition", "sector"
    msa_designated: bool  # same body also acts as MSA
    contact_url: str
    notes: str

class NationalCompetentAuthorityRegistry:
    """
    Registry of EU member state NCAs designated under Art.68 EU AI Act.
    Updated as of 2026. Check ec.europa.eu/digital-strategy for updates.
    """
    REGISTRY: dict[str, NCAEntry] = {
        "DE": NCAEntry(
            "Germany", "DE",
            "Federal Network Agency (BNetzA) + BMDV",
            "sector",
            msa_designated=True,
            contact_url="https://www.bundesnetzagentur.de/",
            notes="Multiple sector-specific bodies involved. BNetzA leads for telecom AI; financial AI → BaFin",
        ),
        "FR": NCAEntry(
            "France", "FR",
            "ANSSI + Inria AI coordination",
            "sector",
            msa_designated=True,
            contact_url="https://www.ssi.gouv.fr/",
            notes="CNIL for data aspects; ANSSI for cybersecurity AI; sector authorities for domain-specific",
        ),
        "ES": NCAEntry(
            "Spain", "ES",
            "AESIA — Agencia Española de Supervisión de IA",
            "dedicated_ai",
            msa_designated=True,
            contact_url="https://www.aesia.es/",
            notes="First dedicated EU AI authority. Operational since 2024. Strong enforcement mandate.",
        ),
        "NL": NCAEntry(
            "Netherlands", "NL",
            "ACM — Autoriteit Consument & Markt",
            "competition",
            msa_designated=True,
            contact_url="https://www.acm.nl/",
            notes="Competition authority expanded to AI Act. Human rights oversight via College voor de Rechten van de Mens",
        ),
        "SE": NCAEntry(
            "Sweden", "SE",
            "IMY — Integritetsskyddsmyndigheten",
            "dpa",
            msa_designated=False,
            contact_url="https://www.imy.se/",
            notes="DPA as NCA. Separate MSA designation for product safety aspects.",
        ),
        "IT": NCAEntry(
            "Italy", "IT",
            "AgID — Agenzia per l'Italia Digitale",
            "sector",
            msa_designated=True,
            contact_url="https://www.agid.gov.it/",
            notes="Government digital authority. AGCM for competition aspects of AI.",
        ),
        "PL": NCAEntry(
            "Poland", "PL",
            "UODO — Urząd Ochrony Danych Osobowych",
            "dpa",
            msa_designated=False,
            contact_url="https://uodo.gov.pl/",
            notes="DPA as lead NCA. Ministerial coordination for sector-specific AI MSA",
        ),
        "BE": NCAEntry(
            "Belgium", "BE",
            "GBA — Gegevensbeschermingsautoriteit",
            "dpa",
            msa_designated=False,
            contact_url="https://www.gegevensbeschermingsautoriteit.be/",
            notes="Home of EU institutions — coordination with AI Office proximity advantage",
        ),
    }

    @classmethod
    def get_nca(cls, country_code: str) -> NCAEntry | None:
        return cls.REGISTRY.get(country_code.upper())

    @classmethod
    def get_msa_countries(cls) -> list[str]:
        """Countries where the NCA also acts as MSA."""
        return [
            entry.country_code
            for entry in cls.REGISTRY.values()
            if entry.msa_designated
        ]

    @classmethod
    def find_by_authority_type(cls, authority_type: str) -> list[NCAEntry]:
        return [
            entry for entry in cls.REGISTRY.values()
            if entry.authority_type == authority_type
        ]

    @classmethod
    def compliance_summary_for_deployment(cls, target_countries: list[str]) -> dict:
        """
        For a given deployment scope, return the relevant NCAs and MSAs.
        Use this to plan regulatory engagement before cross-border launch.
        """
        result = {
            "covered": [],
            "not_in_registry": [],
            "dedicated_ai_authorities": [],
        }
        for cc in target_countries:
            entry = cls.get_nca(cc)
            if entry:
                result["covered"].append({
                    "country": entry.member_state,
                    "nca": entry.authority_name,
                    "msa_same_body": entry.msa_designated,
                    "type": entry.authority_type,
                })
                if entry.authority_type == "dedicated_ai":
                    result["dedicated_ai_authorities"].append(entry.authority_name)
            else:
                result["not_in_registry"].append(cc)
        return result
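A brief usage sketch; "PT" stands in for any country not yet captured in this partial registry:

summary = NationalCompetentAuthorityRegistry.compliance_summary_for_deployment(["ES", "SE", "PT"])
print(summary["dedicated_ai_authorities"])  # ['AESIA — Agencia Española de Supervisión de IA']
print(summary["not_in_registry"])           # ['PT'], check ec.europa.eu for newer designations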
40-Item Compliance Checklist: Art.64-70 Governance
Section 1: AI Office Readiness (Art.64-65) — Items 1-8
- 1. Identified whether you are a GPAI provider (AI Office jurisdiction) or high-risk AI provider (national MSA jurisdiction)
- 2. Reviewed AI Office published guidelines and recommendations relevant to your product category
- 3. Subscribed to AI Office publications channel (ec.europa.eu AI Office announcements)
- 4. Established internal point of contact for Art.65(4) information requests (someone authorized to respond to formal AI Office requests)
- 5. Ensured Annex XI documentation (GPAI) or Annex IV documentation (high-risk) is current and retrievable within 15 days
- 6. Mapped all AI Office CoP participation opportunities relevant to your model type (Art.56 × Art.65)
- 7. Reviewed AI Office annual reports for enforcement patterns relevant to your technical domain
- 8. Confirmed legal counsel is briefed on Art.65(4) investigative powers and response protocols
Section 2: Scientific Panel (Art.66) — Items 9-16
- 9. Assessed whether your model is approaching the 10^25 FLOP systemic risk threshold (Art.51(1)(a))
- 10. Monitored Scientific Panel qualification opinions for comparable models in your architecture class
- 11. Reviewed Art.51(2) designation criteria and capability assessment methodology (CBRN/cyber/manipulation/autonomy)
- 12. Considered voluntary notification to AI Office if approaching systemic risk threshold (Strategic value: early dialogue)
- 13. Prepared Art.66 information response package — the subset of Annex XI you can provide to Scientific Panel requests within 15 days
- 14. Verified Art.70 confidentiality designation is applied to all Scientific Panel submissions
- 15. Ensured adversarial testing methodology (Art.53(1)(a)) is documented in a form suitable for Scientific Panel review
- 16. Checked Scientific Panel opinion publication log for your technology domain
Section 3: Advisory Forum & NCA Engagement (Art.67-68) — Items 17-24
- 17. Identified your relevant national competent authority (NCA) under Art.68 for your EU establishment jurisdiction
- 18. Reviewed NCA-specific guidance and national AI Act implementation instruments for your member state
- 19. Identified Advisory Forum industry associations relevant to your product category
- 20. Monitored CoP public consultation periods (Advisory Forum channel for industry input into Art.56 CoP)
- 21. Verified NCA single point of contact for your establishment country (for filing, correspondence, notifications)
- 22. Checked whether your NCA also acts as MSA (Art.68 — some member states have separate authorities for these functions)
- 23. Reviewed NCA-specific enforcement priorities and published guidance for your Annex III sector
- 24. Established NCA contact for voluntary pre-market dialogue if deploying novel high-risk AI system
Section 4: Market Surveillance & Investigation (Art.69) — Items 25-32
- 25. Mapped which national MSA(s) have jurisdiction over your high-risk AI system deployment (by market, not just establishment)
- 26. Verified all Annex IV technical documentation is current, complete, and retrievable for MSA request within 15 days
- 27. Confirmed Art.12 logs are in MSA-readable format (structured, audit-trail verifiable)
- 28. Confirmed Art.30 post-market monitoring plan (PMM) is documented and operational
- 29. Reviewed Art.69(3) source code request provision — have internal protocol for responding if source code access requested
- 30. Ensured a serious-risk detection procedure is in place (understand the Art.79 "serious risk" threshold that triggers Art.69 interim measures)
- 31. Verified cross-border MSA coordination awareness — if operating in 5+ EU member states, identify lead MSA
- 32. Confirmed personnel authorized to receive MSA inspection teams are identified and briefed
Section 5: Confidentiality & CLOUD Act (Art.70) — Items 33-40
- 33. Applied Art.70 confidentiality designation to all technical documentation submitted to EU authorities
- 34. Created two-part documentation structure: public summary (for Art.32 EU Database) + confidential annex (for authority review)
- 35. Prepared Art.70 commercial sensitivity rationale memo for each confidential documentation section
- 36. Assessed CLOUD Act exposure for all documentation submitted to EU authorities (is it stored on US infrastructure?)
- 37. If CLOUD Act risk exists: documented mitigation options (EU-native storage for investigation records)
- 38. Verified Art.70 does not protect against CLOUD Act compulsion — briefed legal counsel on dual-jurisdiction risk
- 39. Ensured source code, model weights references, and adversarial test results submitted to authorities are stored on EU-jurisdiction systems
- 40. Reviewed public vs. confidential boundary for Art.32 EU Database registration (Art.70 does not shield required public registration fields)
Governance Timeline: Key Milestones
| Date | Event | Developer Action |
|---|---|---|
| Jan 2024 | AI Office pre-established by Commission Decision | AI Office operational — monitoring began immediately |
| Feb 2025 | First GPAI Code of Practice draft published | Review + participate in consultation if GPAI provider |
| Aug 2025 | Art.64-70 (+ full Chapter V) applicable | NCAs designated, Art.69 MSA powers fully operative |
| Aug 2026 | High-risk AI Art.9-15 fully applicable | Annex IV docs + conformity assessments due for all Annex III systems |
| 2026 ongoing | AI Office adversarial testing program active | GPAI systemic risk providers: expect coordination requests |
| 2027 | Annex I high-risk AI systems (regulated products) fully applicable | Machinery, medical devices, vehicles — sector conformity due |
What to Do Now: Developer Checklist by Role
If You're a GPAI Provider:
- Know your FLOP count: Determine whether you are at, near, or approaching 10^25 training FLOPs (Art.51(1)(a) threshold). If near: voluntary notification to AI Office is strategically rational.
- Follow CoP development: Subscribe to AI Office GPAI Code of Practice publications. The CoP determines your Art.56 compliance pathway.
- Prepare Annex XI documentation now: Don't wait for an Art.65(4) formal request. Have complete technical documentation for Scientific Panel requests.
- Apply Art.70 to all submissions: Every document you send to the AI Office should carry explicit Art.70 confidentiality designation.
- Store on EU infrastructure: Eliminate CLOUD Act exposure for documentation that Art.70 protects in EU proceedings.
If You're a High-Risk AI System Provider:
- Identify your lead MSA: Know which national authority has primary jurisdiction before market launch.
- Prepare Annex IV documentation: Have complete, MSA-audit-ready technical documentation in place by August 2026.
- Establish Art.12 logs: MSA inspections will request audit logs — ensure they're in place and retrievable.
- Apply Art.70 to source code submissions: If MSA requests source code under Art.69(3), apply Art.70 designation and request written confidentiality confirmation.
If You're a Deployer:
- Know your NCA: Your Art.26 monitoring obligations are supervised by your national NCA/MSA.
- Check MSA contact: For Art.26(8) FRIA and Art.26(4) monitoring reports, you may need to interact with national MSA.
- Upstream documentation: Ensure your provider's Art.32 EU Database registration is complete — MSAs may check this as part of deployment audits.
See Also
- EU AI Act Art.56: Code of Practice for GPAI Models — The Systemic Risk Compliance Pathway — CoP development process facilitated by the Art.65 AI Office
- EU AI Act Art.51: GPAI Model Classification & Systemic Risk Designation — Art.66 Scientific Panel's role in Art.51(2) designation
- EU AI Act Art.9: Risk Management System — Formal Verification & Developer Guide — Art.69 MSA inspections will review Art.9 documentation
- EU AI Act Art.12: Logging & Record-Keeping for High-Risk AI Systems Developer Guide — Art.69 MSA primary evidentiary request target
- EU NIS2 Directive + EU AI Act: The Double Compliance Burden for Critical Infrastructure Developers — NIS2 MSA × AI Act MSA dual-authority landscape