2026-04-16 · 12 min read

EU AI Act Art.64-70: The EU AI Office & AI Governance Structure — Developer Guide (2026)

EU AI Act Articles 64-70 define who enforces the AI Act, how enforcement works, and what rights developers have when regulators come knocking. These governance articles are not abstract constitutional provisions — they directly determine which authority can request your technical documentation, which expert panel can classify your model as systemic risk, and what confidentiality protections apply to your source code and training data when submitted to regulators.

This guide covers the full governance chapter: the AI Office (Art.64-65), the Scientific Panel of Independent Experts (Art.66), the Advisory Forum (Art.67), National Competent Authorities (Art.68), Market Surveillance Authorities (Art.69), and Confidentiality (Art.70). For every article, we cover the developer-relevant mechanics, compliance obligations, and infrastructure jurisdiction implications under the CLOUD Act.

Art.64-70 became applicable on 2 August 2025 as part of the EU AI Act governance framework (Regulation (EU) 2024/1689). The AI Office was established in advance by Commission Decision C(2024) 390 of 24 January 2024 — it has been operational since before the governance chapter became applicable.


Why Governance Articles Matter for Developers

Most developer-facing EU AI Act guides focus on obligations (Art.9, Art.10, Art.12, Art.26) while skipping governance. This is a mistake. Art.64-70 determines:

  1. Who can investigate you — Art.65 AI Office tasks vs. Art.69 national MSA powers
  2. What evidence they can compel — training data, source code, model weights, audit records
  3. Who classifies your model as systemic risk — the Art.66 Scientific Panel, not just the Commission
  4. What your rights are during investigation — Art.70 confidentiality for trade secrets
  5. Which country's authority leads — your development jurisdiction vs. your deployment market
  6. How CLOUD Act interacts — US-hosted investigation records are dual-compellable

Understanding governance is understanding your adversarial environment. This is not hypothetical: the AI Office has already initiated qualification procedures for GPAI models and issued formal investigative requests to GPAI providers.


Art.64-70 at a Glance

| Article | Subject | Developer Relevance |
| --- | --- | --- |
| Art.64 | EU AI Office establishment | Primary regulator for GPAI models — your main counterpart if you build GPAI products |
| Art.65 | AI Office tasks | Sets the scope of investigation + monitoring powers |
| Art.66 | Scientific Panel of Independent Experts | Can classify your model as systemic risk; can request your technical data |
| Art.67 | Advisory Forum | Where industry shapes CoP and implementation — engagement opportunity |
| Art.68 | National Competent Authorities (NCAs) | Your country-specific regulator for all non-GPAI AI systems |
| Art.69 | Market Surveillance Authorities (MSAs) | Enforcement for high-risk AI systems — inspection powers, market withdrawals |
| Art.70 | Confidentiality | Trade secret protection when submitting documentation |

Art.64: The EU AI Office

What Is the AI Office?

Art.64 establishes the EU AI Office within the European Commission as the primary EU-level body responsible for GPAI model oversight and for coordinating cross-border AI Act enforcement.

Why "Within the Commission" Matters

The AI Office's Commission embedding has two practical implications:

1. GPAI regulation is centralized. Unlike high-risk AI systems (which are regulated by 27 member state MSAs), GPAI model oversight goes through one body. If you are a GPAI provider deploying across the EU, you have one primary regulatory counterpart: the AI Office in Brussels.

2. Commission investigative powers apply. The AI Office can invoke Commission powers under Art.65(4) including access to premises, documents, and personnel. This is a stronger investigative toolkit than most national MSAs have for domestic enforcement.

AI Office for Non-GPAI Developers

If you build high-risk AI systems (not GPAI models), your primary regulators are the national MSAs (Art.69), not the AI Office directly. The AI Office coordinates with national authorities but does not conduct day-to-day high-risk AI enforcement. However, if your high-risk AI system uses a GPAI model as a component, the Art.65 monitoring of that upstream GPAI provider is directly relevant to your supply chain compliance.


Art.65: Tasks of the AI Office

Art.65 defines what the AI Office actually does. For GPAI providers, these are the specific AI Office activities that create compliance obligations:

Core GPAI Oversight Tasks (Art.65(1))

Task 1: GPAI Model Monitoring

The AI Office continuously monitors GPAI models for compliance with Chapter V (Art.51-56).

Task 2: Code of Practice Facilitation (Art.56 Link)

The AI Office facilitates the development and implementation of Codes of Practice under Art.56.

Task 3: Adversarial Testing Coordination (Art.53(1)(a) Link)

For GPAI models with systemic risk, the AI Office coordinates adversarial testing of model capabilities and safeguards.

Task 4: Guidance and Recommendations

The AI Office issues guidance documents and recommendations on Chapter V compliance.

Task 5: Annual Reporting

The AI Office publishes an annual report on its GPAI monitoring and enforcement activities.

Investigative Powers (Art.65(4))

This is the provision that matters most in an adversarial context. The AI Office can:

For developers: if the AI Office issues a formal information request under Art.65(4), you have limited time to respond (typically 15-30 days). Having your Annex XI documentation in order before a request arrives is your only viable defense — scrambling to compile documentation after receiving an Art.65(4) request is a compliance failure mode.
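As a sketch, the response window can be tracked with a couple of date helpers. The 15-day default window and 5-day urgency buffer below are illustrative assumptions; the actual deadline is whatever the request itself specifies.

```python
from datetime import date, timedelta

# Illustrative deadline tracking for an Art.65(4) information request.
# window_days and buffer_days are assumptions, not values from the Act.


def response_deadline(received: date, window_days: int = 15) -> date:
    """Last day to respond, counted from the date the request was received."""
    return received + timedelta(days=window_days)


def is_urgent(received: date, today: date,
              window_days: int = 15, buffer_days: int = 5) -> bool:
    """True when fewer than `buffer_days` remain to respond."""
    return (response_deadline(received, window_days) - today).days <= buffer_days
```

The same logic appears in fuller form in the `AIOfficeInvestigationRecord` class later in this guide.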

AI Office Powers vs. National MSA Powers

| Power | AI Office (Art.65) | National MSA (Art.69) |
| --- | --- | --- |
| Jurisdiction | GPAI models (EU-wide) | High-risk AI in territory |
| Investigation trigger | Own initiative or complaint | Complaint or market surveillance |
| Document access | Yes (Art.65(4)) | Yes (Art.69(3)) |
| On-site inspection | Yes | Yes |
| Interim measures | Yes (imminent systemic risk) | Yes (serious risk) |
| Penalty authority | Art.101 (via Commission) | National fine proceedings |
| Cross-border coordination | Direct | Through AI Board |

Art.66: Scientific Panel of Independent Experts

What the Scientific Panel Does

The Scientific Panel of Independent Experts is the AI Office's technical advisory body with a specific and critical function: it provides independent expert opinions on whether a GPAI model qualifies as posing systemic risk under Art.51(2).

Art.66 creates a panel of independent experts, selected for up-to-date technical expertise in AI and for independence from any GPAI provider.

The Art.51(2) Designation Pathway

The Scientific Panel's most consequential power is its role in Art.51(2) systemic risk designation:

Art.51(2) Designation Process:
1. GPAI provider triggers Art.51(1)(a) threshold: ≥10^25 FLOPs training compute
   OR
   AI Office identifies potential systemic risk through Art.65 monitoring
   
2. Scientific Panel (Art.66) conducts independent assessment:
   - Requests Annex XI documentation from provider
   - Reviews capability evaluations (benchmarks + adversarial tests)
   - Assesses capability criteria: CBRN, cybersecurity, manipulation, autonomous replication
   - Provides written opinion to Commission
   
3. Commission issues Art.51(2) designation decision:
   - Based on Scientific Panel opinion
   - Provider gets 15-day advance notice + right to respond
   - Decision published in Official Journal
   
4. Provider now subject to Art.52-56 Chapter V obligations
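The pathway above can be sketched as a small state model. The stage names and the `initial_stage` helper below are illustrative, not terminology from the Act:

```python
from enum import Enum, auto


class DesignationStage(Enum):
    """Illustrative stages of the Art.51(2) designation pathway."""
    NOT_TRIGGERED = auto()
    THRESHOLD_MET = auto()      # step 1: >= 10^25 FLOPs or AI Office referral
    PANEL_ASSESSMENT = auto()   # step 2: Scientific Panel review (Art.66)
    PROVIDER_RESPONSE = auto()  # step 3: 15-day notice + right to respond
    DESIGNATED = auto()         # step 4: Art.51(2) decision published


FLOP_THRESHOLD = 1e25  # Art.51(1)(a) training-compute trigger


def initial_stage(training_flops: float, office_referral: bool = False) -> DesignationStage:
    """Determine where a model enters the pathway: compute threshold or referral."""
    if training_flops >= FLOP_THRESHOLD or office_referral:
        return DesignationStage.THRESHOLD_MET
    return DesignationStage.NOT_TRIGGERED
```

Note that either entry route — the compute threshold or an AI Office referral from Art.65 monitoring — lands the model in the same assessment track.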

Scientific Panel Data Access

For GPAI providers: the Scientific Panel can directly request your Annex XI technical documentation, capability evaluations, and adversarial testing results.

These requests bypass normal national authority channels — the Scientific Panel is an EU-level body that can request data directly from GPAI providers across member states.

The Voluntary Notification Option

Art.66 includes a voluntary notification mechanism: GPAI providers who believe their model may be approaching or exceeding the systemic risk threshold can voluntarily notify the AI Office before reaching it. Early notification lets you engage the AI Office on your own timetable and present your risk assessment before any formal designation procedure begins.

For developers building large-scale foundation models: voluntary notification is strategically rational if you believe your next training run will cross 10^25 FLOPs.
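As a rough planning aid, training compute can be estimated with the common 6 × parameters × tokens approximation — a rule of thumb for dense transformer training, not an Act-defined formula. The 50% notification margin below is an arbitrary illustrative choice:

```python
# Rule-of-thumb compute estimate: ~6 FLOPs per parameter per training token.
ART_51_THRESHOLD_FLOPS = 1e25  # Art.51(1)(a) trigger


def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute for a dense transformer run."""
    return 6.0 * n_params * n_tokens


def consider_voluntary_notification(n_params: float, n_tokens: float,
                                    margin: float = 0.5) -> bool:
    """True if the planned run lands within `margin` of the threshold."""
    return estimated_training_flops(n_params, n_tokens) >= ART_51_THRESHOLD_FLOPS * margin
```

For example, a 70B-parameter model trained on 15T tokens lands around 6.3 × 10^24 FLOPs — below the threshold, but close enough that early engagement is worth weighing.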


Art.67: Advisory Forum

Structure and Composition

The Advisory Forum provides stakeholder input into EU AI Act implementation, drawing on a balanced selection of stakeholders including industry, start-ups and SMEs, civil society, and academia.

Advisory Forum Tasks

  1. Implementation guidance: Provides non-binding opinions on EU AI Act interpretation for the Commission and AI Office
  2. Code of Practice input: Contributes to CoP content development under Art.56 — the Advisory Forum is one of the stakeholder channels for industry participation in CoP drafting
  3. Annual report input: Provides observations that inform the AI Office annual report (Art.65)
  4. Technical standards liaison: Coordinates with CEN-CENELEC and ISO/IEC JTC 1/SC 42 on AI standards referenced in Art.40

Developer Engagement Opportunity

The Advisory Forum is the formal channel for industry to shape AI regulation. For developers building GPAI-related products or high-risk AI systems, it is worth monitoring: its input feeds into the Codes of Practice and implementation guidance that define your compliance pathway.

Practical action: Follow EU AI Office publications on Advisory Forum proceedings. Register for public consultation periods on CoP development. Industry associations (e.g., CCIA, DIGITALEUROPE) participate in the Forum and often publish consultation responses.


Art.68: National Competent Authorities (NCAs) and Single Points of Contact

The NCA Designation Requirement

Art.68 requires each EU member state to designate:

  1. One or more National Competent Authorities (NCAs) with overall responsibility for EU AI Act implementation at national level
  2. A single point of contact for communication with the Commission and AI Office
  3. Adequate resources — each NCA must be provided with sufficient technical expertise, staff, and funding

Current NCA Landscape (as of 2026)

| Member State | Designated NCA | Notes |
| --- | --- | --- |
| Germany 🇩🇪 | BNetzA / BMDV coordination | Multiple sector-specific bodies |
| France 🇫🇷 | ANSSI + Inria coordination | Cyber + technical expertise |
| Netherlands 🇳🇱 | ACM (Autoriteit Consument & Markt) | Competition authority expanded |
| Sweden 🇸🇪 | IMY (formerly Datainspektionen) | Data protection overlap |
| Spain 🇪🇸 | AESIA (Agencia Española de Supervisión de IA) | Dedicated AI authority |
| Italy 🇮🇹 | AgID coordination | Government digital authority |
| Poland 🇵🇱 | UODO + ministerial coordination | DP authority + ministry |

Spain is notable as the only EU member state to have established a dedicated AI authority (AESIA) before the August 2025 application date. Most member states have designated existing authorities (data protection authorities, competition authorities, sector-specific regulators) as NCAs.

Implications for Developers

If you operate in one member state: your NCA is your primary national regulatory contact for AI Act matters at national level.

If you operate cross-border: the NCA of your establishment location (Art.2 jurisdiction rules) is typically your lead authority for national coordination purposes, but the AI Office handles all GPAI matters centrally regardless of establishment location.

NCA as coordination hub: NCAs coordinate between the AI Office (for GPAI matters) and national MSAs (for high-risk AI matters). They are the interface, not the primary investigator for either regime.


Art.69: Market Surveillance Authorities (MSAs)

MSA Powers — The Enforcement Arm for High-Risk AI

Art.69 establishes the enforcement framework for high-risk AI systems (Annex III). MSAs are the national authorities responsible for market surveillance — the "regulatory police" that conduct inspections, request documentation, and impose corrective measures.

Core MSA powers under Art.69:

Power 1: Document and Data Access

MSAs can formally request technical documentation (Annex IV), audit logs (Art.12), and post-market monitoring records (Art.30).

Power 2: On-Site Inspection

MSAs can conduct on-site inspections, normally with advance notice.

Power 3: Serious Risk Interim Measures

If an MSA identifies a serious risk, it can impose interim measures, up to suspending or withdrawing the system from the market.

Power 4: Cross-Border Escalation

MSAs participate in the AI Board's cross-border coordination mechanism. If a high-risk AI system operates in multiple member states, the MSAs involved coordinate through the AI Board.

The "Source Code" Request Provision

Art.69(3) is the provision that gets developers' attention: MSAs can request source code access upon reasoned request. This provision has specific constraints:

  1. Reasoned request required — the MSA must document why source code access is necessary (not routine)
  2. Proportionality constraint — less intrusive means (documentation, logs) must be insufficient
  3. Confidentiality applies — Art.70 protections apply to source code provided to MSAs
  4. No public disclosure — source code provided to MSAs cannot be shared beyond the investigation

In practice, source code requests are reserved for cases where the MSA suspects conformity assessment fraud or where documentation alone cannot establish compliance. For standard market surveillance, documentation (Annex IV technical docs + Art.12 logs + Art.30 PMM records) is the normal evidentiary basis.
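A minimal intake checklist for such a request might look like this — the field names and issue strings are illustrative, not prescribed by the Act:

```python
from dataclasses import dataclass


@dataclass
class SourceCodeRequest:
    """Illustrative intake record for an Art.69(3) source code request."""
    reasoned_justification: str           # the MSA's documented reasoning
    less_intrusive_means_exhausted: bool  # docs + logs reviewed, found insufficient
    art70_designation_applied: bool       # confidentiality marking ready


def intake_issues(req: SourceCodeRequest) -> list[str]:
    """Flag gaps to raise with counsel before producing any source code."""
    issues = []
    if not req.reasoned_justification.strip():
        issues.append("No documented reasoning — Art.69(3) requires a reasoned request")
    if not req.less_intrusive_means_exhausted:
        issues.append("Proportionality: documentation and logs not exhausted first")
    if not req.art70_designation_applied:
        issues.append("Apply Art.70 confidentiality designation before submission")
    return issues
```

An empty issue list means the request is at least formally in order; it does not replace a legal review.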

MSA Penalty Authority

MSAs administer national fine proceedings under the Art.99-101 framework:

| Violation Category | Maximum Fine (whichever is higher) |
| --- | --- |
| Art.5 prohibited practices | €35M or 7% of global turnover |
| High-risk AI Art.9-15 violations | €15M or 3% of global turnover |
| Incorrect information to authorities | €7.5M or 1.5% of global turnover |
| GPAI / Chapter V violations | €15M or 3% (Art.101) |

The actual fine is imposed by national MSA proceedings, not directly by the AI Office (except for GPAI matters where the Commission can act directly under Art.65(4)).
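A sketch of the maximum-exposure arithmetic, assuming the usual "whichever is higher" rule between the fixed cap and the turnover percentage (the category keys are illustrative labels, not Act terminology):

```python
# Illustrative maximum-exposure calculator: fine = max(fixed cap, pct of turnover).
FINE_SCHEDULE = {
    "art5_prohibited": (35_000_000, 0.07),
    "high_risk_obligations": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.015),
    "gpai_chapter_v": (15_000_000, 0.03),
}


def max_fine_eur(category: str, global_turnover_eur: float) -> float:
    """Worst-case fine exposure for a given violation category and turnover."""
    fixed_cap, turnover_pct = FINE_SCHEDULE[category]
    return max(fixed_cap, turnover_pct * global_turnover_eur)
```

For small providers the fixed cap dominates; above roughly €500M turnover the percentage branch takes over for the Art.5 category.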


Art.70: Confidentiality — Your Rights When Submitting Technical Docs

The Confidentiality Framework

Art.70 establishes that all parties handling EU AI Act information — the Commission, AI Office, Scientific Panel, NCAs, MSAs — must maintain the confidentiality of information obtained in carrying out their tasks, in particular intellectual property and trade secrets.

What Art.70 Protects

Protected under Art.70: trade secrets, source code, and commercially sensitive technical documentation submitted to authorities during investigations and conformity procedures.

NOT fully protected under Art.70: information that authorities must publish or exchange to perform their tasks — for example, the public summary entries in the Art.32 EU Database.

Art.70 in Practice: Confidential Designation

When submitting technical documentation to any EU AI Act authority, you should:

  1. Designate confidential sections explicitly: Mark trade secrets with "Confidential under Art.70 EU AI Act (Regulation 2024/1689)"
  2. Separate public and private elements: Create a two-part documentation structure — a public summary (for Art.32 EU Database) and a confidential annex (for authority review)
  3. Document your commercial sensitivity rationale: A brief memo explaining why each section is commercially sensitive strengthens Art.70 protection if challenged
  4. Request written confidentiality confirmation: When submitting to an MSA or the AI Office, request explicit acknowledgment that Art.70 applies
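Steps 1-2 above can be sketched as a simple partition of documentation sections. The section names and the `confidential`/`rationale` fields below are illustrative assumptions:

```python
# Sketch of the two-part structure: a public summary set and an Art.70-marked
# confidential annex.
CONFIDENTIAL_MARK = "Confidential under Art.70 EU AI Act (Regulation 2024/1689)"


def split_submission(sections: dict[str, dict]) -> tuple[dict, dict]:
    """Partition documentation sections into (public, confidential_annex)."""
    public, annex = {}, {}
    for name, section in sections.items():
        if section.get("confidential"):
            # Attach the explicit designation; keep the sensitivity rationale.
            annex[name] = {**section, "marking": CONFIDENTIAL_MARK}
        else:
            public[name] = section
    return public, annex
```

The public part feeds the Art.32 EU Database summary; the annex goes only to the reviewing authority, with the marking applied to every section.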

Critical Limitation: Art.70 Does NOT Address CLOUD Act

Art.70 protects against EU authority disclosure — it does not protect against US law compulsion. If your Art.70-protected documentation is stored on US-controlled infrastructure (AWS/Azure/GCP), the US CLOUD Act can compel production to US federal law enforcement. Art.70 protections exist in EU law; the CLOUD Act operates under US law and is not bound by EU confidentiality provisions.

This creates a specific risk scenario: the same Art.70-designated document can be lawfully compelled through two unrelated legal systems at once.

EU-native infrastructure eliminates this exposure. If your technical documentation, source code, and investigation correspondence are hosted on EU-sovereignty infrastructure (no US parent company with CLOUD Act reach), Art.70 protection is the only applicable access regime.
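A minimal sketch of this single-regime vs. dual-regime distinction — the provider-to-parent-jurisdiction mapping below is a simplified assumption, not a legal determination:

```python
# Simplified exposure check: a hosting provider with a US parent entity is
# reachable under the CLOUD Act regardless of data-center location.
# The provider set is an illustrative assumption.
US_PARENT_PROVIDERS = {"aws", "azure", "gcp"}


def cloud_act_exposed(hosting_provider: str) -> bool:
    """True if the provider is assumed to have a US parent with CLOUD Act reach."""
    return hosting_provider.lower() in US_PARENT_PROVIDERS


def applicable_access_regimes(hosting_provider: str) -> list[str]:
    """List the legal regimes that can reach documents hosted with this provider."""
    regimes = ["EU AI Act Art.70 (EU authorities)"]
    if cloud_act_exposed(hosting_provider):
        regimes.append("US CLOUD Act (US federal law enforcement)")
    return regimes
```

Running this for an EU-native provider returns a single-entry list — the point of the paragraph above in code form.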


Developer Impact Matrix: Art.64-70 by Role

| Role | Primary Contact | Key Provisions | Documentation Required |
| --- | --- | --- | --- |
| GPAI Provider (non-systemic) | AI Office (Art.64) | Art.64-65, Art.70 | Annex XI (voluntary baseline) |
| GPAI Provider (systemic risk) | AI Office + Scientific Panel | Art.64-66, Art.70 | Annex XI+XII (mandatory, Art.52-55) |
| High-Risk AI Provider | National MSA (Art.69) | Art.68-70 | Annex IV, Art.12 logs, Art.30 PMM |
| High-Risk AI Deployer | National MSA (Art.69) | Art.68-70 | Art.26 monitoring records |
| General-Purpose AI Deployer | Depends on use case | Art.68, Art.70 | Context-dependent (Art.6(3) self-classification) |
| Foundation Model Fine-Tuner | AI Office or MSA | Art.64-65, Art.68-69 | Depends on output classification |

CLOUD Act × Art.64-70: The Dual Jurisdiction Problem

Art.64-70 creates a detailed governance framework for EU-side access to AI documentation. But it operates in parallel with US CLOUD Act obligations for providers using US infrastructure. The intersection creates dual-compellability risks:

Record Type → Dual Compellability Analysis

| Record Type | AI Act Authority Access | CLOUD Act Exposure | Risk Level |
| --- | --- | --- | --- |
| Annex XI GPAI technical docs | AI Office (Art.65) + Scientific Panel (Art.66) | YES — if on US infra | HIGH |
| Source code (Art.69(3) production) | National MSA investigation | YES — if on US infra | HIGH |
| Adversarial testing reports (Art.53) | AI Office + Scientific Panel | YES — if on US infra | HIGH |
| Correspondence with AI Office | Art.70 protected (EU side) | YES — if on US email | MEDIUM |
| Art.12 audit logs (high-risk AI) | National MSA (Art.69) | YES — if on US infra | MEDIUM |
| Art.30 PMM records | National MSA (Art.69) | YES — if on US infra | MEDIUM |
| QMS documentation (Art.17) | National MSA (Art.69) | YES — if on US infra | MEDIUM |

The Specific GPAI Problem

GPAI providers with systemic risk are required to maintain extensive technical documentation (Annex XI/XII) that includes model weights references, capability evaluations, adversarial testing results, and infrastructure details. If that documentation sits on US-controlled infrastructure, it is reachable through both the EU and US regimes simultaneously.

A US authority investigating a GPAI provider's model for national security reasons could compel the exact same Annex XI documentation that the EU AI Office holds under Art.70 protection — through the CLOUD Act rather than EU channels.

EU-Native Infrastructure as Single-Regime Defense

EU-native PaaS infrastructure eliminates CLOUD Act exposure because there is no US parent entity on which a CLOUD Act order can be served.

With EU-native hosting, Art.70 is the only applicable access regime for your submitted documentation. There is no parallel CLOUD Act track because there is no US infrastructure to compel.


Python Implementation

AIOfficeInvestigationRecord

from dataclasses import dataclass, field
from datetime import datetime, timedelta
from enum import Enum
from typing import Optional


class InvestigationStatus(Enum):
    RECEIVED = "received"
    UNDER_REVIEW = "under_review"
    RESPONDED = "responded"
    ESCALATED = "escalated"
    CLOSED = "closed"


class RequestingAuthority(Enum):
    AI_OFFICE = "ai_office"
    SCIENTIFIC_PANEL = "scientific_panel"
    NATIONAL_MSA = "national_msa"
    NCA = "national_competent_authority"
    ADVISORY_FORUM = "advisory_forum"  # rare, for consultation purposes


@dataclass
class AIOfficeInvestigationRecord:
    """
    Tracks regulatory investigation requests under EU AI Act Art.64-70.
    Art.70 confidentiality applies to all submitted documentation.
    """

    record_id: str
    received_date: datetime
    requesting_authority: RequestingAuthority
    authority_reference: str  # official case/reference number
    request_type: str  # "annex_xi_docs", "source_code_art69_3", "adversarial_test_results", etc.
    response_deadline_days: int  # typically 15-30 days
    confidentiality_designation: bool = True  # Art.70 applies by default
    art70_designation_memo: str = ""
    submitted_documents: list[str] = field(default_factory=list)
    status: InvestigationStatus = InvestigationStatus.RECEIVED
    legal_counsel_notified: bool = False
    cloud_act_risk_assessed: bool = False

    @property
    def deadline(self) -> datetime:
        return self.received_date + timedelta(days=self.response_deadline_days)

    @property
    def days_remaining(self) -> int:
        return (self.deadline - datetime.utcnow()).days

    @property
    def is_urgent(self) -> bool:
        return self.days_remaining <= 5

    @property
    def requires_source_code(self) -> bool:
        return "source_code" in self.request_type

    def art70_submission_header(self) -> str:
        """Generate confidentiality header for document submissions."""
        return (
            f"CONFIDENTIAL — Art.70 EU AI Act (Regulation 2024/1689)\n"
            f"Reference: {self.authority_reference}\n"
            f"Submitted to: {self.requesting_authority.value.replace('_', ' ').title()}\n"
            f"Date: {datetime.utcnow().strftime('%Y-%m-%d')}\n"
            f"Submitted by: [Provider Name]\n\n"
            f"This document contains commercially sensitive technical information and "
            f"trade secrets submitted pursuant to Art.70 of Regulation (EU) 2024/1689 "
            f"(EU AI Act). The receiving authority is bound by Art.70 confidentiality "
            f"obligations. Disclosure beyond the investigation purpose is prohibited.\n"
        )

    def compliance_status(self) -> dict:
        issues = []
        if not self.legal_counsel_notified:
            issues.append("Legal counsel not yet notified of investigation request")
        if not self.cloud_act_risk_assessed:
            issues.append("CLOUD Act risk assessment pending — check if docs on US infra")
        if self.requires_source_code and not self.confidentiality_designation:
            issues.append("Source code request without Art.70 designation — add before submission")
        if self.is_urgent and self.status == InvestigationStatus.RECEIVED:
            issues.append(f"URGENT: {self.days_remaining} days remaining, response not started")

        return {
            "record_id": self.record_id,
            "authority": self.requesting_authority.value,
            "days_remaining": self.days_remaining,
            "status": self.status.value,
            "is_urgent": self.is_urgent,
            "compliance_issues": issues,
            "ready_to_submit": len(issues) == 0 and len(self.submitted_documents) > 0,
        }

NationalCompetentAuthorityRegistry

from dataclasses import dataclass


@dataclass
class NCAEntry:
    member_state: str
    country_code: str
    authority_name: str
    authority_type: str  # "dedicated_ai", "dpa", "competition", "sector"
    msa_designated: bool  # same body also acts as MSA
    contact_url: str
    notes: str


class NationalCompetentAuthorityRegistry:
    """
    Registry of EU member state NCAs designated under Art.68 EU AI Act.
    Updated as of 2026. Check ec.europa.eu/digital-strategy for updates.
    """

    REGISTRY: dict[str, NCAEntry] = {
        "DE": NCAEntry(
            "Germany", "DE",
            "Federal Network Agency (BNetzA) + BMDV",
            "sector",
            msa_designated=True,
            contact_url="https://www.bundesnetzagentur.de/",
            notes="Multiple sector-specific bodies involved. BNetzA leads for telecom AI; financial AI → BaFin"
        ),
        "FR": NCAEntry(
            "France", "FR",
            "ANSSI + Inria AI coordination",
            "sector",
            msa_designated=True,
            contact_url="https://www.ssi.gouv.fr/",
            notes="CNIL for data aspects; ANSSI for cybersecurity AI; sector authorities for domain-specific"
        ),
        "ES": NCAEntry(
            "Spain", "ES",
            "AESIA — Agencia Española de Supervisión de IA",
            "dedicated_ai",
            msa_designated=True,
            contact_url="https://www.aesia.es/",
            notes="First dedicated EU AI authority. Operational since 2024. Strong enforcement mandate."
        ),
        "NL": NCAEntry(
            "Netherlands", "NL",
            "ACM — Autoriteit Consument & Markt",
            "competition",
            msa_designated=True,
            contact_url="https://www.acm.nl/",
            notes="Competition authority expanded to AI Act. Human rights oversight via College voor de Rechten van de Mens"
        ),
        "SE": NCAEntry(
            "Sweden", "SE",
            "IMY — Integritetsskyddsmyndigheten",
            "dpa",
            msa_designated=False,
            contact_url="https://www.imy.se/",
            notes="DPA as NCA. Separate MSA designation for product safety aspects."
        ),
        "IT": NCAEntry(
            "Italy", "IT",
            "AgID — Agenzia per l'Italia Digitale",
            "sector",
            msa_designated=True,
            contact_url="https://www.agid.gov.it/",
            notes="Government digital authority. AGCM for competition aspects of AI."
        ),
        "PL": NCAEntry(
            "Poland", "PL",
            "UODO — Urząd Ochrony Danych Osobowych",
            "dpa",
            msa_designated=False,
            contact_url="https://uodo.gov.pl/",
            notes="DPA as lead NCA. Ministerial coordination for sector-specific AI MSA"
        ),
        "BE": NCAEntry(
            "Belgium", "BE",
            "GBA — Gegevensbeschermingsautoriteit",
            "dpa",
            msa_designated=False,
            contact_url="https://www.gegevensbeschermingsautoriteit.be/",
            notes="Home of EU institutions — coordination with AI Office proximity advantage"
        ),
    }

    @classmethod
    def get_nca(cls, country_code: str) -> NCAEntry | None:
        return cls.REGISTRY.get(country_code.upper())

    @classmethod
    def get_msa_countries(cls) -> list[str]:
        """Countries where the NCA also acts as MSA."""
        return [
            entry.country_code
            for entry in cls.REGISTRY.values()
            if entry.msa_designated
        ]

    @classmethod
    def find_by_authority_type(cls, authority_type: str) -> list[NCAEntry]:
        return [
            entry for entry in cls.REGISTRY.values()
            if entry.authority_type == authority_type
        ]

    @classmethod
    def compliance_summary_for_deployment(cls, target_countries: list[str]) -> dict:
        """
        For a given deployment scope, return the relevant NCAs and MSAs.
        Use this to plan regulatory engagement before cross-border launch.
        """
        result = {
            "covered": [],
            "not_in_registry": [],
            "dedicated_ai_authorities": [],
        }

        for cc in target_countries:
            entry = cls.get_nca(cc)
            if entry:
                result["covered"].append({
                    "country": entry.member_state,
                    "nca": entry.authority_name,
                    "msa_same_body": entry.msa_designated,
                    "type": entry.authority_type,
                })
                if entry.authority_type == "dedicated_ai":
                    result["dedicated_ai_authorities"].append(entry.authority_name)
            else:
                result["not_in_registry"].append(cc)

        return result

40-Item Compliance Checklist: Art.64-70 Governance

Section 1: AI Office Readiness (Art.64-65) — Items 1-8

Section 2: Scientific Panel (Art.66) — Items 9-16

Section 3: Advisory Forum & NCA Engagement (Art.67-68) — Items 17-24

Section 4: Market Surveillance & Investigation (Art.69) — Items 25-32

Section 5: Confidentiality & CLOUD Act (Art.70) — Items 33-40


Governance Timeline: Key Milestones

| Date | Event | Developer Action |
| --- | --- | --- |
| Jan 2024 | AI Office pre-established by Commission Decision | AI Office operational — monitoring began immediately |
| Feb 2025 | First GPAI Code of Practice draft published | Review + participate in consultation if GPAI provider |
| Aug 2025 | Art.64-70 (+ full Chapter V) applicable | NCAs designated, Art.69 MSA powers fully operative |
| Aug 2026 | High-risk AI Art.9-15 fully applicable | Annex IV docs + conformity assessments due for all Annex III systems |
| 2026 ongoing | AI Office adversarial testing program active | GPAI systemic risk providers: expect coordination requests |
| 2027 | Annex I high-risk AI systems (regulated products) fully applicable | Machinery, medical devices, vehicles — sector conformity due |

What to Do Now: Developer Checklist by Role

If You're a GPAI Provider:

  1. Know your FLOP count: Determine whether you are at, near, or approaching 10^25 training FLOPs (Art.51(1)(a) threshold). If near: voluntary notification to AI Office is strategically rational.
  2. Follow CoP development: Subscribe to AI Office GPAI Code of Practice publications. The CoP determines your Art.56 compliance pathway.
  3. Prepare Annex XI documentation now: Don't wait for an Art.65(4) formal request. Have complete technical documentation for Scientific Panel requests.
  4. Apply Art.70 to all submissions: Every document you send to the AI Office should carry explicit Art.70 confidentiality designation.
  5. Store on EU infrastructure: Eliminate CLOUD Act exposure for documentation that Art.70 protects in EU proceedings.

If You're a High-Risk AI System Provider:

  1. Identify your lead MSA: Know which national authority has primary jurisdiction before market launch.
  2. Prepare Annex IV documentation: Have complete, MSA-audit-ready technical documentation in place by August 2026.
  3. Establish Art.12 logs: MSA inspections will request audit logs — ensure they're in place and retrievable.
  4. Apply Art.70 to source code submissions: If MSA requests source code under Art.69(3), apply Art.70 designation and request written confidentiality confirmation.

If You're a Deployer:

  1. Know your NCA: Your Art.26 monitoring obligations are supervised by your national NCA/MSA.
  2. Check MSA contact: For Art.26(8) FRIA and Art.26(4) monitoring reports, you may need to interact with national MSA.
  3. Upstream documentation: Ensure your provider's Art.32 EU Database registration is complete — MSAs may check this as part of deployment audits.
