2026-04-24·12 min read·sota.io team

EU AI Act Art.68: AI Regulatory Sandboxes — National Establishment Obligations, Provider Exemptions, and Compliance Pathway (2026)

EU AI Act Article 68 is the legislative bridge between the governance and enforcement framework established in Art.57–67 and the market reality that AI systems — particularly novel, high-risk, and general-purpose systems — often cannot be meaningfully assessed for compliance before they have been tested in conditions approaching real-world deployment. Art.68 creates the AI regulatory sandbox: a structured, NCA-supervised testing environment in which providers can develop and test AI systems under regulatory guidance, with partial exemptions from certain AI Act obligations, before the system is placed on the market or put into service.

The regulatory sandbox concept is not new to EU product regulation. Pharmaceutical clinical trials, financial services regulatory sandboxes under FinTech frameworks, and GDPR sandbox arrangements for data-intensive services have established the precedent: controlled experimentation under regulatory supervision, with transparent accountability, produces better compliance outcomes than forcing providers to navigate requirements in isolation. Art.68 applies this logic to AI systems specifically, creating a framework that serves innovation while preserving the risk-proportionate oversight that the AI Act's high-risk classification system demands.

For developers, Art.68 is relevant in two directions. If your system is genuinely novel — a high-risk AI system for which the conformity assessment pathway is unclear, or a GPAI model that raises new questions about systemic risk thresholds — the sandbox offers a structured regulatory dialogue that can resolve compliance questions before market placement. If your system is destined for a Member State that has established a national sandbox, understanding the eligibility criteria and the scope of exemptions available during participation is material to your deployment timeline planning.

Art.68 became applicable on 2 August 2025 as part of the phased entry into force of Regulation (EU) 2024/1689.


Art.68 in the Chapter IX Enforcement Architecture

Art.68 marks a transition within Chapter IX from the reactive enforcement framework (market surveillance, safeguard procedures, incident reporting) to a forward-looking compliance support mechanism:

| Article | Function | Relationship to Art.68 |
| --- | --- | --- |
| Art.57 | NCA designation — the authority that operates the sandbox | NCAs designated under Art.57 are responsible for establishing and operating Art.68 sandboxes |
| Art.58 | NCA investigation powers — the enforcement toolkit | Suspended or modified for participating providers during the sandbox period |
| Art.60 | EU AI database — pre-market registration obligation | Sandbox participation may defer certain registration obligations pending post-sandbox conformity confirmation |
| Art.64 | Market surveillance access to data and documentation | Art.68 creates a cooperative alternative to adversarial Art.64 access — sandbox NCA supervision replaces post-market access demands |
| Art.65 | Serious incident reporting obligations | Continues to apply during the sandbox for actual incidents; NCA involvement is direct rather than post-hoc notification |
| Art.66 | Market surveillance and enforcement | Does not apply to sandbox participants for issues arising from tested functionality — protection exists only within the scope of the sandbox plan |
| Art.67 | Union safeguard procedure — escalation for conflicting NCA measures | Cross-border sandbox arrangements under Art.68(4) reduce the risk of divergent national enforcement by aligning regulatory dialogue upfront |
| Art.68 | AI regulatory sandbox — controlled testing under NCA supervision | This guide |

Art.68(1): Member State Obligation to Establish AI Regulatory Sandboxes

Art.68(1) imposes a mandatory establishment obligation on Member States: each Member State shall ensure that its competent authority establishes at least one AI regulatory sandbox at national level. This is not discretionary — every Member State must have at least one operational sandbox within twelve months of the date on which the relevant Chapter IX provisions become applicable; with the primary applicability date of 2 August 2025, that means by 2 August 2026.

The sandbox must be operational — accepting applications and actively supervising participants, not merely announced.

Member States that already operate pre-AI Act innovation sandboxes (for example, financial services regulatory sandbox frameworks in Ireland, the Netherlands, and Germany) are not exempt from the Art.68 obligation — they must either adapt existing frameworks to comply with Art.68's requirements or establish dedicated AI sandboxes.

The AI Board (Art.59) monitors sandbox establishment across Member States and publishes annual reports on sandbox activity. This creates accountability: Member States that fail to meet the establishment timeline face reputational and institutional pressure through the AI Board reporting mechanism.
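The twelve-month establishment clock can be sketched in Python. The applicability date and deadline arithmetic follow the dates cited in this guide; the helper names are illustrative.

```python
from datetime import date
from typing import Optional

# Applicability date cited in this guide; the Art.68(1) twelve-month
# establishment clock is assumed to run from this date.
CHAPTER_IX_APPLICABLE = date(2025, 8, 2)

def establishment_deadline(applicable: date = CHAPTER_IX_APPLICABLE) -> date:
    """Twelve months after the applicability date (same calendar day, next year)."""
    return applicable.replace(year=applicable.year + 1)

def sandbox_overdue(operational_since: Optional[date], today: date) -> bool:
    """True if a Member State had no operational sandbox by the deadline."""
    deadline = establishment_deadline()
    if today <= deadline:
        return False  # clock still running
    return operational_since is None or operational_since > deadline
```

On these dates, `establishment_deadline()` yields 2 August 2026, which is the deadline the AI Board's annual reporting would measure against.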


Art.68(2): Sandbox Participation Criteria and Selection

Art.68(2) establishes the criteria that NCAs must apply when evaluating provider applications to participate in the sandbox. The criteria are designed to ensure that sandbox resources — which are finite, given NCA capacity constraints — are allocated to providers and AI systems where sandbox participation delivers genuine regulatory value.

Eligibility criteria under Art.68(2):

| Criterion | Assessment | Notes |
| --- | --- | --- |
| Novelty of AI system | Does the system raise compliance questions not adequately addressed by existing guidance or conformity assessment standards? | Generic applications of well-understood technology do not qualify — the sandbox is for genuinely novel cases |
| Meaningful safety/performance assessment | Can the NCA meaningfully assess the system's safety and performance in controlled conditions? | If safety can only be assessed at full deployment scale, the sandbox may be inappropriate |
| Regulatory question identification | Has the provider articulated specific regulatory questions it seeks to resolve through sandbox participation? | Applications must include a structured regulatory question inventory |
| Proportionate scope | Is the testing scope defined and limited — a bounded set of use cases, users, and data types? | Open-ended testing without defined scope is not sandbox participation |
| Provider accountability infrastructure | Does the provider have internal governance structures capable of maintaining sandbox commitments? | Financial resources, technical competency, and incident response capability |

SME and start-up priority: Art.68(2) explicitly requires NCAs to prioritise applications from small and medium-sized enterprises and start-ups, provided they meet the substantive eligibility criteria. This is a deliberate policy choice: SMEs and start-ups face disproportionate compliance costs relative to large providers, and the sandbox is intended to reduce the barrier to regulated market entry for innovation-stage companies that cannot afford lengthy conformity assessment processes.

Application timeline: NCAs must respond to sandbox applications within a defined period (Member States set specific timelines; the AI Board publishes guidance on reasonable application processing windows). Silence is not acceptance — providers must receive explicit acceptance or a reasoned rejection.

Rejection and appeal: Rejected applications must be accompanied by a reasoned decision. Art.68(2) does not create a formal appeal procedure at the EU level, but Member State administrative law applies — rejected providers may challenge NCA rejection decisions through national administrative review or judicial proceedings.


Art.68(3): Scope of Provider Exemptions During Sandbox Participation

The regulatory value of sandbox participation derives substantially from the exemptions from certain AI Act obligations that apply while a provider is engaged in NCA-supervised testing. Without meaningful exemptions, sandbox participation would impose compliance costs without the learning benefits that justify the framework.

Art.68(3) defines the scope of these exemptions. They are partial and bounded — not a blanket suspension of the AI Act for sandbox participants.

Exemptions that apply during sandbox participation:

- Art.43 conformity assessment: deferred until after the sandbox period
- Art.11 technical documentation: iterative completion permitted during testing
- Art.49 EU AI database registration: required only before market placement
- Art.47 CE marking: cannot be affixed while the system is in the sandbox

Obligations that continue during sandbox participation:

| Obligation | Continues | Rationale |
| --- | --- | --- |
| Fundamental rights safeguards (Art.5) | Yes | Prohibited practices cannot be tested in sandbox — no exemption for biometric categorisation, subliminal manipulation, social scoring |
| Serious incident reporting (Art.65) | Yes, to NCA | Direct NCA oversight replaces post-hoc notification — incidents during sandbox must be reported to the supervising NCA immediately |
| GDPR compliance | Yes (with sandbox-specific provisions) | Personal data processed during sandbox remains subject to GDPR — Art.68(6) provides special conditions but does not suspend GDPR |
| Product liability (national law) | Yes | Provider liability for harms caused during testing is not suspended |
| Basic transparency to test subjects (Art.50) | Yes | Individuals interacting with sandboxed AI systems must be informed of the experimental nature of the system |
| NCA cooperation | Yes (intensified) | Sandbox participation requires active cooperation with NCA supervision — documentation, reporting, access obligations |

The sandbox plan: Art.68(3) requires that the scope of available exemptions be specified in a sandbox plan agreed between the provider and the NCA before sandbox commencement. The sandbox plan defines: the AI system under testing, the use cases covered, the testing timeline, the regulatory questions to be addressed, the data types to be processed, the exemptions that apply, and the conditions for NCA oversight. Exemptions outside the sandbox plan's scope do not apply.
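The scope rule in the preceding paragraph — exemptions outside the sandbox plan do not apply — can be expressed as a small gate. The class and field names here are illustrative, not taken from any NCA template.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SandboxPlan:
    """Minimal sketch of an agreed Art.68(3) sandbox plan (illustrative fields)."""
    ai_system: str
    use_cases: frozenset      # bounded set of use cases covered by the plan
    data_types: frozenset     # data categories the NCA has authorised
    exemptions: frozenset     # exempted obligations, by article reference

    def exemption_applies(self, article: str, use_case: str) -> bool:
        # An exemption holds only inside the plan's agreed scope: the obligation
        # must be listed AND the activity must be a covered use case.
        return article in self.exemptions and use_case in self.use_cases

plan = SandboxPlan(
    ai_system="CV screening tool",
    use_cases=frozenset({"cv_screening"}),
    data_types=frozenset({"employment_history"}),
    exemptions=frozenset({"Art.43", "Art.11"}),
)
```

A check like this makes the boundary explicit: testing the same system on an uncovered use case, or relying on an unlisted exemption, falls back to full AI Act obligations.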


Art.68(4): Cross-Border Sandbox Arrangements

Art.68(4) enables two or more Member States to establish joint cross-border sandbox arrangements — a single sandbox framework operating across multiple national jurisdictions simultaneously. This is particularly valuable for providers whose intended deployment spans several Member States and for AI systems that operate across borders by nature.

Cross-border sandbox arrangements under Art.68(4) require the participating NCAs to agree a joint supervision framework covering all jurisdictions involved.

Relationship to Art.67: Cross-border sandboxes under Art.68(4) reduce the likelihood of Art.67 Union safeguard procedure triggers — by aligning regulatory dialogue across Member States during development and testing, providers and NCAs avoid the scenarios where one Member State's enforcement measure is contested by another.


Art.68(5): Liability and Responsibility Framework During Sandbox Testing

Art.68(5) addresses the liability allocation question that arises uniquely in sandbox contexts: when a provider is operating under NCA supervision with reduced compliance obligations, who bears responsibility for harms caused during testing?

The AI Act's answer is clear: provider liability continues in full during sandbox participation. The sandbox does not transfer liability to the NCA. The NCA's supervisory role is not a guarantee of safety — it is a structured dialogue to identify and mitigate risks. If a sandboxed AI system causes harm to a test subject or third party, the provider answers for that harm under national liability law exactly as it would outside the sandbox.

NCA liability limitation: Art.68(5) also limits NCA liability. An NCA that supervises a sandbox participant is not jointly liable for harms caused by the sandboxed system, provided the NCA operated the sandbox in good faith and within the scope of Art.68's framework. This limitation is necessary to prevent sandbox oversight from becoming an implicit guarantee of safety — NCAs would refuse to operate sandboxes if doing so exposed them to co-liability for provider harms.

Practical liability framework for sandbox participants:

| Risk Type | Responsibility | Mitigation |
| --- | --- | --- |
| Personal injury from AI system error | Provider (full) | Sandbox insurance, limited test subject pool, monitored test conditions |
| Data breach during sandbox testing | Provider (full, under GDPR) | Sandbox-specific data processing agreement, minimised data sets, encryption |
| Discrimination or unfair outcome | Provider (full) | Bias testing protocols as part of sandbox plan, diverse test subject selection |
| NCA guidance that proves incorrect | NCA (limited, good faith) | Document NCA guidance received; demonstrate reliance on formal sandbox plan provisions |
| Third-party harm outside test scope | Provider (full) | Strictly limit real-world exposure to defined sandbox conditions |

Art.68(6): Personal Data Processing in AI Regulatory Sandboxes

Art.68(6) is one of the most technically significant provisions for AI developers, because it directly addresses the tension between GDPR's data minimisation and purpose limitation principles and the training and testing data requirements of AI systems in development.

Training high-quality AI systems requires large, representative datasets. For high-risk AI systems — healthcare diagnostics, creditworthiness assessment, employment screening tools — the most relevant training and testing data is often personal data that would not ordinarily be processable for AI development purposes under GDPR. Art.68(6) creates a controlled mechanism to address this tension within the sandbox.

Art.68(6) special conditions for personal data processing include:

- prior NCA authorisation before any personal data is processed in the sandbox
- GDPR Arts.13-14 disclosure to data subjects about the nature of the sandbox processing
- destruction or anonymisation of the personal data on sandbox completion

Sandbox data vs. production data: Art.68(6) does not authorise use of live production data from deployed systems for sandbox testing. The provision covers controlled testing data under NCA supervision — not harvesting production data retroactively into a sandbox.
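A minimal admissibility check capturing the two conditions just described — controlled test data only, processed under an NCA authorisation — might look like this (the enum and field names are our own):

```python
from dataclasses import dataclass
from enum import Enum

class DataOrigin(Enum):
    CONTROLLED_TEST = "controlled_test"    # collected for sandbox testing
    LIVE_PRODUCTION = "live_production"    # harvested from a deployed system

@dataclass
class SandboxDataset:
    name: str
    origin: DataOrigin
    nca_authorised: bool  # Art.68(6) authorisation on record

def admissible_under_art_68_6(ds: SandboxDataset) -> bool:
    """Production data cannot be pulled retroactively into the sandbox, and
    even controlled test data needs the NCA authorisation first."""
    return ds.origin is DataOrigin.CONTROLLED_TEST and ds.nca_authorised
```

Running every candidate dataset through a gate like this before ingestion keeps the sandbox plan's data inventory honest.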


Art.68(7): Post-Sandbox Compliance Pathway

Art.68(7) establishes the post-sandbox pathway — the mechanism by which sandbox participation connects to the conformity assessment and market placement process.

Upon conclusion of the sandbox period, the supervising NCA issues a sandbox completion report documenting the testing conducted, the regulatory guidance given during the sandbox, the risks identified, and any material compliance gaps remaining at completion.

The sandbox completion report is not a market approval or a conformity declaration. It does not replace the conformity assessment under Art.43. However, the report serves several functions in the post-sandbox compliance process:

Functions of the sandbox completion report:

| Function | How Used | Limitation |
| --- | --- | --- |
| Conformity assessment evidence | Submitted to notified body (where required under Art.43) as evidence of NCA-supervised testing | Notified body makes independent assessment — report is evidence, not approval |
| Technical documentation input | Testing data, NCA feedback, and sandbox period documentation contribute to the Art.11 technical documentation package | Documentation must be completed and finalised post-sandbox |
| Risk management system input | Sandbox-identified risks feed the Art.9 risk management system for the post-market system | Risk management continues; sandbox reduces unknown unknowns |
| NCA relationship foundation | Establishes working relationship with supervising NCA for post-market monitoring under Art.72 | NCAs that supervised sandbox have context for proportionate oversight post-market |
| GPAI systemic risk assessment | For GPAI model providers, sandbox testing with AI Office involvement can inform Art.52 threshold assessment | AI Office involvement in sandbox not guaranteed — depends on model characteristics |

Post-sandbox timeline: there is no mandatory waiting period between sandbox completion and market placement application. Providers can submit conformity assessment documentation immediately following sandbox completion. However, if the sandbox completion report identifies material compliance gaps, providers must address those gaps before proceeding — NCA follow-up assessment may be required.
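The post-sandbox decision logic above — no waiting period, but material gaps block immediate progression — reduces to a short routing function. This is a sketch; the status strings are ours.

```python
from typing import List

def post_sandbox_next_step(report_issued: bool, material_gaps: List[str]) -> str:
    """Route the provider to the next step after the sandbox period ends."""
    if not report_issued:
        return "await NCA sandbox completion report"
    if material_gaps:
        # Gaps must be addressed first; NCA follow-up assessment may be required.
        return f"remediate {len(material_gaps)} material gap(s) before conformity assessment"
    # No mandatory waiting period: proceed immediately to the Art.43 pathway.
    return "submit conformity assessment documentation"
```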


Art.68(8): Commission Coordination and Harmonised Guidelines

Art.68(8) assigns the Commission a coordination role in the AI regulatory sandbox framework to prevent fragmentation — 27 Member States operating 27 different sandbox frameworks with different criteria, timelines, and exemption scopes would undermine the AI Act's internal market harmonisation objective.

Under Art.68(8), the Commission issues harmonised guidelines on sandbox design, covering eligibility criteria, application timelines, and exemption scopes, and coordinates Member State practice through the AI Board.

AI Board sandbox function: Art.68(8) gives the AI Board a specific mandate to support sandbox harmonisation — it reviews Member State sandbox frameworks, identifies divergences, and recommends alignment measures. This makes the AI Board a de facto regulator of the regulatory sandboxes themselves.


Art.68(9): Sandbox Framework for GPAI Models and Systemic Risk Assessment

Art.68(9) extends the sandbox framework to general-purpose AI models — specifically addressing the intersection between Art.68 sandboxes and the systemic risk assessment obligations under Art.52 and Art.55.

For GPAI model providers who are uncertain whether their model meets the Art.52 systemic risk thresholds (10^25 FLOPs, or other criteria established by Commission delegated acts), a sandbox arrangement with AI Office involvement can serve as a structured methodology for conducting the systemic risk assessment.
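For a first-pass check against the 10^25 FLOPs figure cited above, providers often estimate training compute with the rough 6·N·D rule of thumb for dense transformers. That heuristic is our assumption here, not an Art.52 methodology; a sandbox arrangement exists precisely to replace such back-of-envelope figures with a calculation agreed with the AI Office.

```python
# Rough training-compute estimate: ~6 FLOPs per parameter per training token
# for a dense transformer. Illustrative assumption, not a regulatory method.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # Art.52 threshold cited in this guide

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    return 6.0 * n_params * n_tokens

def likely_crosses_threshold(n_params: float, n_tokens: float) -> bool:
    return estimated_training_flops(n_params, n_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS
```

By this estimate, a 70B-parameter model trained on 15T tokens lands around 6.3e24 FLOPs, under the threshold, while a 200B-parameter model on 10T tokens lands around 1.2e25, over it.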

Key differences from high-risk AI system sandboxes under Art.68(1)-(8):

| Dimension | High-Risk AI System Sandbox | GPAI Model Sandbox (Art.68(9)) |
| --- | --- | --- |
| Supervising authority | National competent authority (Art.57) | AI Office (Art.55 enforcement link) |
| Scope of exemptions | Conformity assessment, CE marking, documentation pre-completion | Art.52 threshold determination, AI Office access obligations |
| Cross-border dimension | Available under Art.68(4) | Inherent — GPAI models are Union-wide by nature |
| Post-sandbox outcome | Sandbox completion report + conformity pathway | Systemic risk determination + Art.53 obligation applicability |
| Sandbox plan content | Use case, data types, regulatory questions | Model architecture, training data, capabilities, FLOPs calculation, benchmark results |

CLOUD Act Implications for Sandbox Operations

For AI system providers incorporated in or operationally dependent on US-based infrastructure, Art.68 sandbox operations raise specific CLOUD Act conflict considerations that do not arise for purely EU-based providers.

The core tension: Art.68(6) authorises personal data processing under NCA supervision and GDPR protections. Simultaneously, the US Clarifying Lawful Overseas Use of Data (CLOUD) Act enables US government authorities to compel US-incorporated cloud providers to disclose data held on EU servers if the provider meets the jurisdictional trigger. Sandbox participants storing personal data — authorised under Art.68(6) — on US-controlled infrastructure face dual exposure.

Specific CLOUD Act risks in sandbox context:

| Scenario | Risk Level | Mitigation |
| --- | --- | --- |
| Training data on US-controlled cloud during sandbox | High — CLOUD Act compellability could expose data subjects whose GDPR rights apply | Use EU-incorporated cloud providers for sandbox data; document data residency in the sandbox plan |
| NCA documentation submitted through US-controlled platforms | Medium — sandbox plan and NCA correspondence exposed to potential CLOUD Act demands | Use encrypted, EU-sovereign communication channels for NCA correspondence |
| Sandbox results and model weights on US infrastructure | Medium — model weights derived from EU personal data arguably include derived personal data | EU-based model weight storage; clear data residency plan in the sandbox agreement |
| US government demand during sandbox period | Critical — GDPR Art.48 bars disclosure based on a third-country order unless an international agreement (such as an MLAT) provides a legal basis | Maintain legal analysis; notify the supervising NCA if a CLOUD Act demand is received |

Practical guidance: document the infrastructure landscape for all sandbox data in the sandbox plan. NCAs increasingly require sandbox applicants to declare data residency and cloud provider jurisdictions. Early declaration enables NCA to condition sandbox authorisation on EU-sovereign storage requirements.
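A provider can pre-screen its own residency declaration with a check like the following. The jurisdiction strings and field names are illustrative; the "US-incorporated provider" trigger mirrors the jurisdictional analysis above, under which EU server location alone does not remove exposure.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CloudAsset:
    name: str
    provider_jurisdiction: str   # incorporation of the cloud provider, e.g. "US", "EU"
    data_region: str             # physical residency, e.g. "eu-central-1"
    holds_personal_data: bool

def cloud_act_exposed(assets: List[CloudAsset]) -> List[str]:
    """Names of assets where CLOUD Act compellability could reach sandbox
    personal data: a US-incorporated provider triggers exposure even when
    the servers themselves sit in the EU."""
    return [a.name for a in assets
            if a.provider_jurisdiction == "US" and a.holds_personal_data]
```

Feed the full inventory from the sandbox plan through this check; a non-empty result flags assets to migrate to EU-sovereign storage, or to justify to the NCA, before authorisation.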


Python Implementation: SandboxParticipation Tracker

from dataclasses import dataclass, field
from datetime import date, timedelta
from enum import Enum
from typing import Optional

class SandboxStatus(Enum):
    PLANNING = "planning"
    APPLICATION_SUBMITTED = "application_submitted"
    ACCEPTED = "accepted"
    ACTIVE = "active"
    COMPLETED = "completed"
    REJECTED = "rejected"

class SandboxType(Enum):
    NATIONAL = "national"
    CROSS_BORDER = "cross_border"
    GPAI_AI_OFFICE = "gpai_ai_office"

@dataclass
class SandboxParticipation:
    provider_name: str
    ai_system_description: str
    member_state: str
    nca_authority: str
    sandbox_type: SandboxType
    application_date: date
    start_date: Optional[date] = None
    planned_duration_months: int = 12
    status: SandboxStatus = SandboxStatus.PLANNING
    regulatory_questions: list[str] = field(default_factory=list)
    personal_data_processing: bool = False
    cross_border_states: list[str] = field(default_factory=list)
    us_cloud_infrastructure: bool = False

    def expected_completion_date(self) -> Optional[date]:
        if self.start_date is None:
            return None
        # Approximate a month as 30 days for planning purposes.
        return self.start_date + timedelta(days=self.planned_duration_months * 30)

    def exemptions_available(self) -> list[str]:
        exemptions = [
            "Art.43 conformity assessment (deferred to post-sandbox)",
            "Art.11 technical documentation (iterative completion permitted)",
            "Art.49 EU AI database registration (pre-placement only)",
            "Art.47 CE marking (cannot be affixed during sandbox)",
        ]
        if self.sandbox_type == SandboxType.GPAI_AI_OFFICE:
            exemptions.append("Art.52 systemic risk threshold determination (assessment in progress)")
        return exemptions

    def continuing_obligations(self) -> list[str]:
        obligations = [
            "Art.5 prohibited practices — no exemption under any circumstances",
            "Art.65 serious incident reporting to supervising NCA",
            "GDPR compliance for all personal data processing",
            "National product liability law — provider liability continues in full",
            "Art.50 transparency to test subjects about experimental AI system",
            "Active cooperation with NCA supervision obligations",
        ]
        if self.personal_data_processing:
            obligations.append("Art.68(6) NCA authorisation required before personal data processing")
            obligations.append("GDPR Arts.13-14 disclosure to data subjects about sandbox processing")
            obligations.append("Data destruction/anonymisation obligation on sandbox completion")
        return obligations

    def cloud_act_risk_level(self) -> str:
        if not self.us_cloud_infrastructure:
            return "LOW — no US-controlled infrastructure identified"
        if self.personal_data_processing:
            return "HIGH — personal data on US infrastructure: CLOUD Act compellability risk. Document residency in sandbox plan."
        return "MEDIUM — model weights/sandbox documentation on US infrastructure. Assess derived personal data exposure."

    def compliance_readiness_score(self) -> int:
        """Illustrative readiness heuristic (0-100); the weights are arbitrary."""
        score = 0
        if self.regulatory_questions:
            score += 20  # regulatory questions articulated (Art.68(2) criterion)
        if self.start_date is not None:
            score += 15  # sandbox timeline agreed
        if self.status in (SandboxStatus.ACCEPTED, SandboxStatus.ACTIVE):
            score += 25  # NCA has admitted the provider
        if not (self.us_cloud_infrastructure and self.personal_data_processing):
            score += 20  # avoids the highest-risk CLOUD Act combination
        if self.cross_border_states or self.sandbox_type == SandboxType.NATIONAL:
            score += 20  # jurisdictional scope settled
        return min(score, 100)

# Usage example
sandbox = SandboxParticipation(
    provider_name="Example AI Provider GmbH",
    ai_system_description="High-risk AI system: automated CV screening tool (Annex III cat. 4)",
    member_state="Germany",
    nca_authority="Bundesnetzagentur",
    sandbox_type=SandboxType.NATIONAL,
    application_date=date(2026, 3, 1),
    start_date=date(2026, 5, 1),
    planned_duration_months=12,
    status=SandboxStatus.ACTIVE,
    regulatory_questions=[
        "Does automated CV screening constitute 'emotional recognition' under Art.3(34)?",
        "What constitutes 'meaningful human oversight' for Art.14(4) compliance in HR screening?",
        "Is training data representativeness sufficient to meet Art.10(2)(f) data quality requirements?",
    ],
    personal_data_processing=True,
    us_cloud_infrastructure=False,
)

print(f"Completion date: {sandbox.expected_completion_date()}")
print(f"CLOUD Act risk: {sandbox.cloud_act_risk_level()}")
print(f"Compliance readiness: {sandbox.compliance_readiness_score()}/100")
for exemption in sandbox.exemptions_available():
    print(f"  ✓ Exemption: {exemption}")

Art.68 Compliance Checklist

| # | Item | Who | Timing |
| --- | --- | --- | --- |
| 1 | Assess whether your AI system qualifies for sandbox participation: identify the specific regulatory questions your system raises that cannot be resolved through existing guidance or published standards — articulate these questions before applying | Provider | Before application |
| 2 | Identify the competent NCA for your Member State of intended deployment and review its published sandbox eligibility criteria and application process — do not apply to a sandbox that does not cover your AI system type | Provider | Before application |
| 3 | Prepare the sandbox plan: define the AI system under testing, the use cases covered, the testing timeline, the data types to be processed, the regulatory questions to be addressed, and the exemptions sought — the sandbox plan is the foundation of the NCA relationship | Provider | Before application |
| 4 | Assess personal data processing requirements: if sandbox testing requires personal data, prepare an Art.68(6) NCA authorisation request — identify the data categories, source, purpose, legal basis, and subject information approach before submitting to NCA | Provider | Before application |
| 5 | Map your cloud infrastructure for sandbox data: if any US-controlled cloud provider will be used for sandbox data storage or processing, conduct a CLOUD Act conflict assessment and document data residency — NCAs increasingly require this disclosure in sandbox applications | Provider | Before application |
| 6 | Establish a sandbox-specific incident response protocol: Art.65 serious incident reporting continues during sandbox — designate an incident response lead with direct NCA communication authority for immediate reporting of testing incidents | Provider | Before sandbox commencement |
| 7 | Train technical and legal teams on the continuing obligations that apply during sandbox: Art.5 prohibited practices cannot be tested, liability is not suspended, GDPR applies, test subjects must be informed — the sandbox is not a compliance-free zone | Provider | Before sandbox commencement |
| 8 | Evaluate cross-border sandbox arrangements under Art.68(4) if your intended deployment spans multiple Member States — a joint sandbox aligned across relevant NCAs reduces divergent enforcement risk post-market and the Art.67 escalation exposure | Provider | Before application |
| 9 | Plan the post-sandbox conformity pathway before sandbox commencement: understand what conformity assessment steps remain post-sandbox, which notified body (if required) you will engage, and how the sandbox completion report will be incorporated into your technical documentation | Provider | Before sandbox commencement |
| 10 | Document NCA guidance received during the sandbox period contemporaneously: formal written documentation of NCA instructions, recommendations, and approvals within the sandbox is your best protection in any subsequent enforcement proceeding or conformity assessment challenge | Provider | Throughout sandbox period |

Series Context: Chapter IX Governance and Enforcement Framework

| Article | Coverage | Post |
| --- | --- | --- |
| Art.57 | National Competent Authorities — designation, tasks, independence | Art.57 guide |
| Art.58 | NCA enforcement powers — investigation, access, corrective measures | Art.58 guide |
| Art.59 | AI Board — composition, independence, NCA coordination | Art.59 guide |
| Art.60 | EU AI database — public registry, EUID governance, Commission management | Art.60 guide |
| Art.61 | Scientific Panel — independent experts, model evaluation, AI Office advisory | Art.61 guide |
| Art.62 | AI Office enforcement powers — corrective measures, market withdrawal, emergency action | Art.62 guide |
| Art.63 | Advisory Forum — multi-stakeholder consultation, composition, tasks, CoP input | Art.63 guide |
| Art.64 | Access to data and documentation — market surveillance authority enforcement powers | Art.64 guide |
| Art.65 | Reporting of serious incidents — provider NCA notification obligations | Art.65 guide |
| Art.66 | Market surveillance, information exchange, enforcement coordination | Art.66 guide |
| Art.67 | Union safeguard procedure — Commission review of conflicting NCA enforcement | Art.67 guide |
| Art.68 | AI regulatory sandboxes — national establishment, provider exemptions, compliance pathway | This guide |
| Art.69 | Codes of conduct — voluntary application of specific requirements beyond mandatory obligations | Art.69 guide |

EU AI Act Art.68 analysis based on Regulation (EU) 2024/1689 as published in the Official Journal of the European Union. Applicable from 2 August 2025 per Art.113(3). The AI regulatory sandbox framework described follows the general principles of EU regulatory sandbox design established across financial services, pharmaceutical, and data regulation sectors. Personal data processing provisions under Art.68(6) operate alongside and do not displace GDPR obligations; DPA coordination requirements reflect the principle that no EU regulatory framework creates carve-outs from fundamental rights protections. CLOUD Act risk analysis reflects the state of EU-US data transfer agreements as of 2025; providers should seek current legal advice on applicable transfer mechanisms. This guide reflects the text of the Regulation as enacted and does not constitute legal advice.