EU AI Act Art.68: AI Regulatory Sandboxes — National Establishment Obligations, Provider Exemptions, and Compliance Pathway (2026)
EU AI Act Article 68 is the legislative bridge between the governance and enforcement framework established in Art.57–67 and the market reality that AI systems, particularly novel, high-risk, and general-purpose systems, often cannot be meaningfully assessed for compliance before they have been tested in conditions approaching real-world deployment. Art.68 creates the AI regulatory sandbox: a structured testing environment, supervised by the national competent authority (NCA), in which providers can develop and test AI systems under regulatory guidance, with partial exemptions from certain AI Act obligations, before the system is placed on the market or put into service.
The regulatory sandbox concept is not new to EU product regulation. Pharmaceutical clinical trials, financial services regulatory sandboxes under FinTech frameworks, and GDPR sandbox arrangements for data-intensive services have established the precedent: controlled experimentation under regulatory supervision, with transparent accountability, produces better compliance outcomes than forcing providers to navigate requirements in isolation. Art.68 applies this logic to AI systems specifically, creating a framework that serves innovation while preserving the risk-proportionate oversight that the AI Act's high-risk classification system demands.
For developers, Art.68 is relevant in two directions. If your system is genuinely novel — a high-risk AI system for which the conformity assessment pathway is unclear, or a GPAI model that raises new questions about systemic risk thresholds — the sandbox offers a structured regulatory dialogue that can resolve compliance questions before market placement. If your system is destined for a Member State that has established a national sandbox, understanding the eligibility criteria and the scope of exemptions available during participation is material to your deployment timeline planning.
Art.68 became applicable on 2 August 2025 as part of the phased entry into force of Regulation (EU) 2024/1689.
Art.68 in the Chapter IX Enforcement Architecture
Art.68 marks a transition within Chapter IX from the reactive enforcement framework (market surveillance, safeguard procedures, incident reporting) to a forward-looking compliance support mechanism:
| Article | Function | Relationship to Art.68 |
|---|---|---|
| Art.57 | NCA designation — the authority that operates the sandbox | NCAs designated under Art.57 are responsible for establishing and operating Art.68 sandboxes |
| Art.58 | NCA investigation powers — the enforcement toolkit | Suspended or modified for participating providers during sandbox period |
| Art.60 | EU AI database — pre-market registration obligation | Art.68 sandbox participation may defer certain registration obligations pending post-sandbox conformity confirmation |
| Art.64 | Market surveillance access to data and documentation | Art.68 creates a cooperative alternative to adversarial Art.64 access — sandbox NCA supervision replaces post-market access demands |
| Art.65 | Serious incident reporting obligations | Continues to apply during sandbox for actual incidents; NCA involvement is direct rather than post-hoc notification |
| Art.66 | Market surveillance and enforcement | Does not apply to sandbox participants for issues arising from tested functionality — protection exists only within scope of sandbox plan |
| Art.67 | Union safeguard procedure — escalation for conflicting NCA measures | Cross-border sandbox arrangements under Art.68(3) reduce the risk of divergent national enforcement by aligning regulatory dialogue upfront |
| Art.68 | AI regulatory sandbox — controlled testing under NCA supervision | This guide |
Art.68(1): Member State Obligation to Establish AI Regulatory Sandboxes
Art.68(1) imposes a mandatory establishment obligation on Member States: each Member State shall ensure that its competent authority establishes at least one AI regulatory sandbox at national level. This is not discretionary: every Member State must have at least one operational sandbox within twelve months of 2 August 2025, the date on which the relevant Chapter IX provisions became applicable.
The sandbox must be operational — not merely announced. An operational sandbox means:
- Designated NCA operator: the national competent authority designated under Art.57 is responsible for operating the sandbox, potentially in cooperation with other regulatory bodies (data protection authorities, competition authorities, consumer protection agencies) where the AI system under testing engages their areas of competence
- Published eligibility criteria: criteria for provider participation must be publicly available, enabling providers to self-assess eligibility before applying
- Established testing protocols: structured framework for what testing is supervised, what records are maintained, how testing periods are scoped, and how completion is assessed
- Functioning application process: providers must be able to apply and receive acceptance or rejection decisions within defined timelines
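An "operational" sandbox can be modelled as the conjunction of these four elements. The sketch below is illustrative only; the class and field names (`NationalSandbox`, `is_operational`) are our own shorthand, not terms from the Regulation:

```python
from dataclasses import dataclass


@dataclass
class NationalSandbox:
    """Hypothetical model of a Member State sandbox's operational status."""
    nca_designated: bool                  # Art.57 NCA assigned as operator
    eligibility_criteria_published: bool  # public criteria for self-assessment
    testing_protocols_established: bool   # supervision, records, scoping, completion
    application_process_live: bool        # decisions issued within defined timelines

    def is_operational(self) -> bool:
        # A sandbox is operational only when all four elements are in place;
        # an announced-but-unstaffed sandbox does not satisfy Art.68(1).
        return all([
            self.nca_designated,
            self.eligibility_criteria_published,
            self.testing_protocols_established,
            self.application_process_live,
        ])
```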
Member States that already operate pre-AI Act innovation sandboxes (for example, financial services regulatory sandbox frameworks in Ireland, the Netherlands, and Germany) are not exempt from the Art.68 obligation — they must either adapt existing frameworks to comply with Art.68's requirements or establish dedicated AI sandboxes.
The AI Board (Art.59) monitors sandbox establishment across Member States and publishes annual reports on sandbox activity. This creates accountability: Member States that fail to meet the establishment timeline face reputational and institutional pressure through the AI Board reporting mechanism.
Art.68(2): Sandbox Participation Criteria and Selection
Art.68(2) establishes the criteria that NCAs must apply when evaluating provider applications to participate in the sandbox. The criteria are designed to ensure that sandbox resources — which are finite, given NCA capacity constraints — are allocated to providers and AI systems where sandbox participation delivers genuine regulatory value.
Eligibility criteria under Art.68(2):
| Criterion | Assessment | Notes |
|---|---|---|
| Novelty of AI system | Does the system raise compliance questions not adequately addressed by existing guidance or conformity assessment standards? | Generic applications of well-understood technology do not qualify — sandbox is for genuinely novel cases |
| Meaningful safety/performance assessment | Can the NCA meaningfully assess the system's safety and performance in controlled conditions? | If safety can only be assessed at full deployment scale, sandbox may be inappropriate |
| Regulatory question identification | Has the provider articulated specific regulatory questions it seeks to resolve through sandbox participation? | Applications must include a structured regulatory question inventory |
| Proportionate scope | Is the testing scope defined and limited — a bounded set of use cases, users, and data types? | Open-ended testing without defined scope is not sandbox participation |
| Provider accountability infrastructure | Does the provider have internal governance structures capable of maintaining sandbox commitments? | Financial resources, technical competency, and incident response capability |
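These criteria lend themselves to a pre-application self-screen. The following sketch is a hypothetical aid, with field names of our own choosing; it encodes the rule that SME status affects prioritisation but never substitutes for substantive eligibility:

```python
from dataclasses import dataclass


@dataclass
class SandboxApplication:
    """Hypothetical self-assessment against the Art.68(2) criteria."""
    raises_novel_compliance_questions: bool   # not generic, well-understood technology
    assessable_in_controlled_conditions: bool # safety can be tested pre-deployment
    regulatory_questions: list[str]           # structured question inventory
    scope_bounded: bool                       # defined use cases, users, data types
    has_accountability_infrastructure: bool   # governance, resources, incident response
    is_sme_or_startup: bool = False           # triggers priority, not eligibility

    def eligible(self) -> bool:
        # All substantive criteria must be met; SME/start-up status only
        # affects queue priority among otherwise-eligible applicants.
        return (
            self.raises_novel_compliance_questions
            and self.assessable_in_controlled_conditions
            and len(self.regulatory_questions) > 0
            and self.scope_bounded
            and self.has_accountability_infrastructure
        )
```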
SME and start-up priority: Art.68(2) explicitly requires NCAs to prioritise applications from small and medium-sized enterprises and start-ups, provided they meet the substantive eligibility criteria. This is a deliberate policy choice: SMEs and start-ups face disproportionate compliance costs relative to large providers, and the sandbox is intended to reduce the barrier to regulated market entry for innovation-stage companies that cannot afford lengthy conformity assessment processes.
Application timeline: NCAs must respond to sandbox applications within a defined period (Member States set specific timelines; the AI Board publishes guidance on reasonable application processing windows). Silence is not acceptance — providers must receive explicit acceptance or a reasoned rejection.
Rejection and appeal: Rejected applications must be accompanied by a reasoned decision. Art.68(2) does not create a formal appeal procedure at the EU level, but MS administrative law applies — rejected providers may challenge NCA rejection decisions through national administrative review or judicial proceedings.
Art.68(3): Scope of Provider Exemptions During Sandbox Participation
The regulatory value of sandbox participation derives substantially from the exemptions from certain AI Act obligations that apply while a provider is engaged in NCA-supervised testing. Without meaningful exemptions, sandbox participation would impose compliance costs without the learning benefits that justify the framework.
Art.68(3) defines the scope of these exemptions. They are partial and bounded — not a blanket suspension of the AI Act for sandbox participants.
Exemptions that apply during sandbox participation:
- Conformity assessment obligation (Art.43): providers are not required to complete the full conformity assessment procedure for the AI system under testing during the sandbox period. The sandbox period itself serves as a supervised precursor to conformity assessment, generating the documentation and evidence that feeds the post-sandbox assessment.
- Technical documentation pre-completion (Art.11): technical documentation does not need to be finalised before sandbox commencement. A preliminary documentation framework is required — NCA must be able to assess what the provider is building — but the documentation is completed iteratively during the sandbox period in coordination with NCA feedback.
- EU AI database registration pre-placement (Art.49): the registration obligation applies upon placement on the market. Sandbox testing that does not constitute market placement does not trigger Art.49 registration. The NCA maintains its own sandbox participant register.
- CE marking (Art.47): CE marking cannot be affixed during sandbox testing. The sandbox is explicitly pre-market — CE marking signals conformity for placed products.
Obligations that continue during sandbox participation:
| Obligation | Continues | Rationale |
|---|---|---|
| Fundamental rights safeguards (Art.5) | Yes | Prohibited practices cannot be tested in sandbox — no exemption for biometric categorisation, subliminal manipulation, social scoring |
| Serious incident reporting (Art.65) | Yes, to NCA | Direct NCA oversight replaces post-hoc notification — incidents during sandbox must be reported to supervising NCA immediately |
| GDPR compliance | Yes (with sandbox-specific provisions) | Personal data processed during sandbox remains subject to GDPR — Art.68(6) provides special conditions but does not suspend GDPR |
| Product liability (national law) | Yes | Provider liability for harms caused during testing is not suspended |
| Basic transparency to test subjects (Art.50) | Yes | Individuals interacting with sandboxed AI systems must be informed of the experimental nature of the system |
| NCA cooperation | Yes (intensified) | Sandbox participation requires active cooperation with NCA supervision — documentation, reporting, access obligations |
The sandbox plan: Art.68(3) requires that the scope of available exemptions be specified in a sandbox plan agreed between the provider and the NCA before sandbox commencement. The sandbox plan defines: the AI system under testing, the use cases covered, the testing timeline, the regulatory questions to be addressed, the data types to be processed, the exemptions that apply, and the conditions for NCA oversight. Exemptions outside the sandbox plan's scope do not apply.
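Because exemptions outside the plan's scope do not apply, it is useful to treat the agreed plan as the single source of truth for what is and is not exempted. A minimal sketch, with assumed field names:

```python
from dataclasses import dataclass, field


@dataclass
class SandboxPlan:
    """Hypothetical representation of an agreed Art.68(3) sandbox plan."""
    ai_system: str
    use_cases: list[str]
    testing_start: str
    testing_end: str
    regulatory_questions: list[str]
    data_types: list[str]
    agreed_exemptions: set[str] = field(default_factory=set)

    def exemption_applies(self, obligation: str) -> bool:
        # Exemptions are scoped to the plan: anything not expressly
        # agreed with the NCA remains a binding obligation.
        return obligation in self.agreed_exemptions


plan = SandboxPlan(
    ai_system="CV screening tool",
    use_cases=["pre-selection ranking"],
    testing_start="2026-05-01",
    testing_end="2027-05-01",
    regulatory_questions=["Art.14(4) human oversight design"],
    data_types=["pseudonymised CVs"],
    agreed_exemptions={"Art.43 conformity assessment", "Art.47 CE marking"},
)
```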
Art.68(4): Cross-Border Sandbox Arrangements
Art.68(4) enables two or more Member States to establish joint cross-border sandbox arrangements — a single sandbox framework operating across multiple national jurisdictions simultaneously. This is particularly valuable for:
- AI systems intended for pan-European deployment where different NCAs have different interpretive positions on compliance requirements
- Providers based in one Member State who need to test their system in the context of a different Member State's regulatory environment (e.g., healthcare AI systems where relevant regulations differ by MS)
- GPAI model providers whose systemic risk assessment under Art.52 and Art.55 has EU-level implications beyond any single NCA's jurisdiction
Cross-border sandbox arrangements under Art.68(4) require:
- Bilateral or multilateral NCA agreement: the participating NCAs must agree on which NCA leads the sandbox supervision, how disagreements between NCAs are resolved, and how the sandbox plan is administered
- AI Board notification: cross-border sandboxes must be notified to the AI Board, which uses this information to develop harmonised sandbox guidance
- Commission guidelines: the Commission issues guidelines for cross-border sandbox arrangements to prevent divergent national frameworks from defeating the purpose of EU-level AI Act harmonisation
Relationship to Art.67: Cross-border sandboxes under Art.68(4) reduce the likelihood of Art.67 Union safeguard procedure triggers — by aligning regulatory dialogue across Member States during development and testing, providers and NCAs avoid the scenarios where one Member State's enforcement measure is contested by another.
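The Art.68(4) preconditions can be captured as a simple validity check. This is an illustrative model (class and field names are ours); it encodes the requirements listed above: at least two participating states, an agreed lead NCA drawn from among them, a dispute-resolution mechanism, and AI Board notification:

```python
from dataclasses import dataclass


@dataclass
class CrossBorderSandbox:
    """Hypothetical Art.68(4) joint sandbox arrangement."""
    participating_states: list[str]  # e.g. ["DE", "NL"]
    lead_nca_state: str              # which NCA leads supervision
    dispute_resolution_agreed: bool  # how NCA disagreements are resolved
    ai_board_notified: bool          # feeds harmonised sandbox guidance

    def validly_constituted(self) -> bool:
        # The lead NCA must be one of the participants, and all the
        # coordination elements must be agreed before testing begins.
        return (
            len(self.participating_states) >= 2
            and self.lead_nca_state in self.participating_states
            and self.dispute_resolution_agreed
            and self.ai_board_notified
        )
```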
Art.68(5): Liability and Responsibility Framework During Sandbox Testing
Art.68(5) addresses the liability allocation question that arises uniquely in sandbox contexts: when a provider is operating under NCA supervision with reduced compliance obligations, who bears responsibility for harms caused during testing?
The AI Act's answer is clear: provider liability continues in full during sandbox participation. The sandbox does not transfer liability to the NCA. The NCA's supervisory role is not a guarantee of safety — it is a structured dialogue to identify and mitigate risks. If a sandboxed AI system causes harm to a test subject or third party:
- The provider remains responsible under applicable product liability law (and under the EU AI Liability Directive when in force)
- The NCA's oversight does not constitute endorsement or approval of the system's safety
- The sandbox plan's scope does not limit liability — harms outside the intended test scope are still the provider's responsibility
NCA liability limitation: Art.68(5) also limits NCA liability. An NCA that supervises a sandbox participant is not jointly liable for harms caused by the sandboxed system, provided the NCA operated the sandbox in good faith and within the scope of Art.68's framework. This limitation is necessary to prevent sandbox oversight from becoming an implicit guarantee of safety — NCAs would refuse to operate sandboxes if doing so exposed them to co-liability for provider harms.
Practical liability framework for sandbox participants:
| Risk Type | Responsibility | Mitigation |
|---|---|---|
| Personal injury from AI system error | Provider (full) | Sandbox insurance, limited test subject pool, monitored test conditions |
| Data breach during sandbox testing | Provider (full, under GDPR) | Sandbox-specific data processing agreement, minimised data sets, encryption |
| Discrimination or unfair outcome | Provider (full) | Bias testing protocols as part of sandbox plan, diverse test subject selection |
| NCA guidance that proves incorrect | NCA (limited, good faith) | Document NCA guidance received; demonstrate reliance on formal sandbox plan provisions |
| Third-party harm outside test scope | Provider (full) | Strictly limit real-world exposure to defined sandbox conditions |
Art.68(6): Personal Data Processing in AI Regulatory Sandboxes
Art.68(6) is one of the most technically significant provisions for AI developers, because it directly addresses the tension between GDPR's data minimisation and purpose limitation principles and the training and testing data requirements of AI systems in development.
Training high-quality AI systems requires large, representative datasets. For high-risk AI systems — healthcare diagnostics, creditworthiness assessment, employment screening tools — the most relevant training and testing data is often personal data that would not ordinarily be processable for AI development purposes under GDPR. Art.68(6) creates a controlled mechanism to address this tension within the sandbox.
Art.68(6) special conditions for personal data processing:
- Explicit NCA authorisation: the sandbox plan must explicitly authorise the categories of personal data to be processed, the source of that data, the purpose of processing, and the data subjects whose data will be used. NCA authorisation is required before processing begins — providers cannot self-authorise personal data processing in the sandbox.
- Data minimisation within sandbox: even with NCA authorisation, providers must use the minimum personal data necessary to address the regulatory questions defined in the sandbox plan. This is not a blanket licence — it is a bounded authorisation.
- Purpose limitation preservation: personal data processed under Art.68(6) authorisation cannot be used for any purpose outside the sandbox plan. Data collected for a diagnostic AI sandbox cannot be repurposed for marketing or fed into other AI systems.
- Subject information rights: data subjects whose personal data is used in sandbox testing must be informed in accordance with GDPR Arts. 13-14, with specific disclosure of sandbox processing. Their GDPR rights (access, erasure, restriction) continue to apply.
- Data destruction post-sandbox: personal data processed under Art.68(6) authorisation must be deleted or anonymised upon sandbox completion unless the sandbox plan provides for continued processing under a separate legal basis.
- DPA coordination: where the supervising NCA is not the national data protection authority, Art.68(6) requires the NCA to coordinate with the national DPA before authorising personal data processing in the sandbox. This prevents the sandbox framework from being used as an end-run around GDPR enforcement.
Sandbox data vs. production data: Art.68(6) does not authorise use of live production data from deployed systems for sandbox testing. The provision covers controlled testing data under NCA supervision — not harvesting production data retroactively into a sandbox.
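The Art.68(6) conditions can be enforced programmatically as a processing gate: no processing without prior NCA authorisation and DPA coordination, in-plan categories and purposes only, and deletion or anonymisation at completion. A hypothetical sketch, with names of our own choosing:

```python
from dataclasses import dataclass


@dataclass
class SandboxDataAuthorisation:
    """Hypothetical record of an Art.68(6) NCA authorisation."""
    nca_authorised: bool            # providers cannot self-authorise
    authorised_categories: set[str] # e.g. {"CVs", "health records"}
    authorised_purposes: set[str]   # purposes named in the sandbox plan
    dpa_coordinated: bool           # required when the NCA is not the DPA

    def may_process(self, category: str, purpose: str) -> bool:
        # Processing is lawful only within the explicit authorisation;
        # out-of-plan purposes (e.g. marketing) are never covered.
        return (
            self.nca_authorised
            and self.dpa_coordinated
            and category in self.authorised_categories
            and purpose in self.authorised_purposes
        )

    def on_completion(self, plan_extends_processing: bool) -> str:
        # Default outcome at sandbox completion per Art.68(6).
        if plan_extends_processing:
            return "continue under separate legal basis named in plan"
        return "delete or anonymise"
```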
Art.68(7): Post-Sandbox Compliance Pathway
Art.68(7) establishes the post-sandbox pathway — the mechanism by which sandbox participation connects to the conformity assessment and market placement process.
Upon conclusion of the sandbox period, the supervising NCA issues a sandbox completion report documenting:
- The regulatory questions addressed during the sandbox
- The testing methodology and scope
- The NCA's assessment of the provider's conformity progress
- Identified residual compliance gaps requiring resolution before market placement
- Any conditions the NCA recommends for market placement
The sandbox completion report is not a market approval or a conformity declaration. It does not replace the conformity assessment under Art.43. However, the report serves several functions in the post-sandbox compliance process:
Functions of the sandbox completion report:
| Function | How Used | Limitation |
|---|---|---|
| Conformity assessment evidence | Submitted to notified body (where required under Art.43) as evidence of NCA-supervised testing | Notified body makes independent assessment — report is evidence, not approval |
| Technical documentation input | Testing data, NCA feedback, and sandbox period documentation contribute to the Art.11 technical documentation package | Documentation must be completed and finalised post-sandbox |
| Risk management system input | Sandbox-identified risks feed the Art.9 risk management system for the post-market system | Risk management continues; sandbox reduces unknown unknowns |
| NCA relationship foundation | Establishes working relationship with supervising NCA for post-market monitoring under Art.72 | NCAs that supervised sandbox have context for proportionate oversight post-market |
| GPAI systemic risk assessment | For GPAI model providers, sandbox testing with AI Office involvement can inform Art.52 threshold assessment | AI Office involvement in sandbox not guaranteed — depends on model characteristics |
Post-sandbox timeline: there is no mandatory waiting period between sandbox completion and market placement application. Providers can submit conformity assessment documentation immediately following sandbox completion. However, if the sandbox completion report identifies material compliance gaps, providers must address those gaps before proceeding — NCA follow-up assessment may be required.
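The gating logic of Art.68(7) can be sketched as follows: the completion report is evidence rather than approval, and material residual gaps block progression to conformity submission (class and field names are our own):

```python
from dataclasses import dataclass


@dataclass
class SandboxCompletionReport:
    """Hypothetical Art.68(7) completion report: evidence, not approval."""
    regulatory_questions_addressed: list[str]
    residual_gaps: list[str]   # compliance gaps flagged by the NCA
    nca_conditions: list[str]  # recommended conditions for market placement

    def ready_for_conformity_submission(self) -> bool:
        # No mandatory waiting period applies, but gaps identified by
        # the NCA must be resolved before conformity assessment proceeds.
        return len(self.residual_gaps) == 0
```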
Art.68(8): Commission Coordination and Harmonised Guidelines
Art.68(8) assigns the Commission a coordination role in the AI regulatory sandbox framework to prevent fragmentation — 27 Member States operating 27 different sandbox frameworks with different criteria, timelines, and exemption scopes would undermine the AI Act's internal market harmonisation objective.
Under Art.68(8), the Commission:
- Issues harmonised guidelines for sandbox criteria, procedures, and exemption scopes — ensuring that a provider accepted into a German sandbox faces criteria broadly comparable to a French or Dutch sandbox
- Coordinates with the AI Board to develop and update sandbox best practices based on operational experience
- Publishes annual reports on sandbox activity across Member States: number of participants, types of AI systems tested, regulatory questions addressed, post-sandbox market placement rates
- Reviews the sandbox framework three years after application date and proposes legislative amendments if the framework requires adjustment
AI Board sandbox function: Art.68(8) gives the AI Board a specific mandate to support sandbox harmonisation — it reviews Member State sandbox frameworks, identifies divergences, and recommends alignment measures. This makes the AI Board a de facto regulator of the regulatory sandboxes themselves.
Art.68(9): Sandbox Framework for GPAI Models and Systemic Risk Assessment
Art.68(9) extends the sandbox framework to general-purpose AI models — specifically addressing the intersection between Art.68 sandboxes and the systemic risk assessment obligations under Art.52 and Art.55.
For GPAI model providers who are uncertain whether their model meets the Art.52 systemic risk thresholds (10^25 FLOPs, or other criteria established by Commission delegated acts), a sandbox arrangement with AI Office involvement can serve as a structured methodology for conducting the systemic risk assessment.
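For a provider unsure which side of the compute presumption its model falls on, the threshold check itself is trivial; the value of the sandbox lies in resolving the borderline and non-FLOPs criteria. A back-of-envelope sketch using the 10^25 FLOPs figure cited above:

```python
SYSTEMIC_RISK_FLOPS = 10**25  # presumption threshold cited in the text


def presumed_systemic_risk(training_flops: float) -> bool:
    """True when cumulative training compute meets the FLOPs presumption.
    Other criteria set by Commission delegated acts may apply independently."""
    return training_flops >= SYSTEMIC_RISK_FLOPS


# A model trained with ~9e24 FLOPs sits just below the presumption,
# exactly the borderline case an Art.68(9) sandbox can help resolve.
borderline = presumed_systemic_risk(9e24)
```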
Key differences from high-risk AI system sandboxes under Art.68(1)-(8):
| Dimension | High-Risk AI System Sandbox | GPAI Model Sandbox (Art.68(9)) |
|---|---|---|
| Supervising authority | National competent authority (Art.57) | AI Office (Art.55 enforcement link) |
| Scope of exemptions | Conformity assessment, CE marking, documentation pre-completion | Art.52 threshold determination, AI Office access obligations |
| Cross-border dimension | Available under Art.68(4) | Inherent — GPAI models are Union-wide by nature |
| Post-sandbox outcome | Sandbox completion report + conformity pathway | Systemic risk determination + Art.53 obligation applicability |
| Sandbox plan content | Use case, data types, regulatory questions | Model architecture, training data, capabilities, FLOPs calculation, benchmark results |
CLOUD Act Implications for Sandbox Operations
For AI system providers incorporated in or operationally dependent on US-based infrastructure, Art.68 sandbox operations raise specific CLOUD Act conflict considerations that do not arise for purely EU-based providers.
The core tension: Art.68(6) authorises personal data processing under NCA supervision and GDPR protections. Simultaneously, the US Clarifying Lawful Overseas Use of Data (CLOUD) Act enables US government authorities to compel cloud providers subject to US jurisdiction to disclose data held on EU servers, regardless of where the data physically resides. Sandbox participants storing personal data authorised under Art.68(6) on US-controlled infrastructure therefore face dual exposure.
Specific CLOUD Act risks in sandbox context:
| Scenario | Risk Level | Mitigation |
|---|---|---|
| Training data on US-controlled cloud during sandbox | High — CLOUD Act compellability could expose data subjects whose GDPR rights apply | Use EU-incorporated cloud providers for sandbox data; document data residency in sandbox plan |
| NCA documentation submitted through US-controlled platforms | Medium — sandbox plan, NCA correspondence exposed to potential CLOUD Act demand | Use encrypted, EU-sovereign communication channels for NCA correspondence |
| Sandbox results and model weights on US infrastructure | Medium — model weights derived from EU personal data arguably include derived personal data | EU-based model weight storage; clear data residency plan in sandbox agreement |
| US government demand during sandbox period | Critical — GDPR Art.48 bars transfers ordered by a third-country authority absent an international agreement such as an MLAT | Maintain current legal analysis; notify the supervising NCA if a CLOUD Act demand is received |
Practical guidance: document the infrastructure landscape for all sandbox data in the sandbox plan. NCAs increasingly require sandbox applicants to declare data residency and cloud provider jurisdictions. Early declaration enables NCA to condition sandbox authorisation on EU-sovereign storage requirements.
Python SandboxParticipation Implementation
```python
from dataclasses import dataclass, field
from datetime import date, timedelta
from enum import Enum
from typing import Optional


class SandboxStatus(Enum):
    PLANNING = "planning"
    APPLICATION_SUBMITTED = "application_submitted"
    ACCEPTED = "accepted"
    ACTIVE = "active"
    COMPLETED = "completed"
    REJECTED = "rejected"


class SandboxType(Enum):
    NATIONAL = "national"
    CROSS_BORDER = "cross_border"
    GPAI_AI_OFFICE = "gpai_ai_office"


@dataclass
class SandboxParticipation:
    provider_name: str
    ai_system_description: str
    member_state: str
    nca_authority: str
    sandbox_type: SandboxType
    application_date: date
    start_date: Optional[date] = None
    planned_duration_months: int = 12
    status: SandboxStatus = SandboxStatus.PLANNING
    regulatory_questions: list[str] = field(default_factory=list)
    personal_data_processing: bool = False
    cross_border_states: list[str] = field(default_factory=list)
    us_cloud_infrastructure: bool = False

    def expected_completion_date(self) -> Optional[date]:
        """Approximate completion date (months counted as 30 days)."""
        if self.start_date is None:
            return None
        return self.start_date + timedelta(days=self.planned_duration_months * 30)

    def exemptions_available(self) -> list[str]:
        """Obligations deferred or suspended while the sandbox plan is active."""
        exemptions = [
            "Art.43 conformity assessment (deferred to post-sandbox)",
            "Art.11 technical documentation (iterative completion permitted)",
            "Art.49 EU AI database registration (pre-placement only)",
            "Art.47 CE marking (cannot be affixed during sandbox)",
        ]
        if self.sandbox_type == SandboxType.GPAI_AI_OFFICE:
            exemptions.append(
                "Art.52 systemic risk threshold determination (assessment in progress)"
            )
        return exemptions

    def continuing_obligations(self) -> list[str]:
        """Obligations that the sandbox never suspends."""
        obligations = [
            "Art.5 prohibited practices — no exemption under any circumstances",
            "Art.65 serious incident reporting to supervising NCA",
            "GDPR compliance for all personal data processing",
            "National product liability law — provider liability continues in full",
            "Art.50 transparency to test subjects about experimental AI system",
            "Active cooperation with NCA supervision obligations",
        ]
        if self.personal_data_processing:
            obligations.append("Art.68(6) NCA authorisation required before personal data processing")
            obligations.append("GDPR Arts.13-14 disclosure to data subjects about sandbox processing")
            obligations.append("Data destruction/anonymisation obligation on sandbox completion")
        return obligations

    def cloud_act_risk_level(self) -> str:
        """Rough triage of CLOUD Act exposure based on infrastructure and data."""
        if not self.us_cloud_infrastructure:
            return "LOW — no US-controlled infrastructure identified"
        if self.personal_data_processing:
            return (
                "HIGH — personal data on US infrastructure: CLOUD Act compellability risk. "
                "Document residency in sandbox plan."
            )
        return (
            "MEDIUM — model weights/sandbox documentation on US infrastructure. "
            "Assess derived personal data exposure."
        )

    def compliance_readiness_score(self) -> int:
        """Illustrative 0-100 heuristic, not a regulatory metric."""
        score = 0
        if self.regulatory_questions:
            score += 20  # structured question inventory prepared
        if self.start_date is not None:
            score += 15  # testing timeline agreed
        if self.status in (SandboxStatus.ACCEPTED, SandboxStatus.ACTIVE):
            score += 25  # NCA has accepted the application
        if not self.us_cloud_infrastructure or not self.personal_data_processing:
            score += 20  # no combined CLOUD Act / personal data exposure
        if self.cross_border_states or self.sandbox_type == SandboxType.NATIONAL:
            score += 20  # jurisdictional coverage matches deployment plan
        return min(score, 100)


# Usage example
sandbox = SandboxParticipation(
    provider_name="Example AI Provider GmbH",
    ai_system_description="High-risk AI system: automated CV screening tool (Annex III cat. 4)",
    member_state="Germany",
    nca_authority="Bundesnetzagentur",
    sandbox_type=SandboxType.NATIONAL,
    application_date=date(2026, 3, 1),
    start_date=date(2026, 5, 1),
    planned_duration_months=12,
    status=SandboxStatus.ACTIVE,
    regulatory_questions=[
        "Does automated CV screening constitute 'emotional recognition' under Art.3(34)?",
        "What constitutes 'meaningful human oversight' for Art.14(4) compliance in HR screening?",
        "Is training data representativeness sufficient to meet Art.10(2)(f) data quality requirements?",
    ],
    personal_data_processing=True,
    us_cloud_infrastructure=False,
)

print(f"Completion date: {sandbox.expected_completion_date()}")
print(f"CLOUD Act risk: {sandbox.cloud_act_risk_level()}")
print(f"Compliance readiness: {sandbox.compliance_readiness_score()}/100")
for exemption in sandbox.exemptions_available():
    print(f"  ✓ Exemption: {exemption}")
```
Art.68 Compliance Checklist
| # | Item | Who | Timing |
|---|---|---|---|
| 1 | Assess whether your AI system qualifies for sandbox participation: identify the specific regulatory questions your system raises that cannot be resolved through existing guidance or published standards — articulate these questions before applying | Provider | Before application |
| 2 | Identify the competent NCA for your Member State of intended deployment and review its published sandbox eligibility criteria and application process — do not apply to a sandbox that does not cover your AI system type | Provider | Before application |
| 3 | Prepare the sandbox plan: define the AI system under testing, the use cases covered, the testing timeline, the data types to be processed, the regulatory questions to be addressed, and the exemptions sought — the sandbox plan is the foundation of the NCA relationship | Provider | Before application |
| 4 | Assess personal data processing requirements: if sandbox testing requires personal data, prepare an Art.68(6) NCA authorisation request — identify the data categories, source, purpose, legal basis, and subject information approach before submitting to NCA | Provider | Before application |
| 5 | Map your cloud infrastructure for sandbox data: if any US-controlled cloud provider will be used for sandbox data storage or processing, conduct CLOUD Act conflict assessment and document data residency — NCAs increasingly require this disclosure in sandbox applications | Provider | Before application |
| 6 | Establish sandbox-specific incident response protocol: Art.65 serious incident reporting continues during sandbox — designate an incident response lead with direct NCA communication authority for immediate reporting of testing incidents | Provider | Before sandbox commencement |
| 7 | Train technical and legal teams on the continuing obligations that apply during sandbox: Art.5 prohibited practices cannot be tested, liability is not suspended, GDPR applies, test subjects must be informed — sandbox is not a compliance-free zone | Provider | Before sandbox commencement |
| 8 | Evaluate cross-border sandbox arrangements under Art.68(4) if your intended deployment spans multiple Member States — a joint sandbox aligned across relevant NCAs reduces divergent enforcement risk post-market and the Art.67 escalation exposure | Provider | Before application |
| 9 | Plan the post-sandbox conformity pathway before sandbox commencement: understand what conformity assessment steps remain post-sandbox, which notified body (if required) you will engage, and how the sandbox completion report will be incorporated into your technical documentation | Provider | Before sandbox commencement |
| 10 | Document NCA guidance received during the sandbox period contemporaneously: formal written documentation of NCA instructions, recommendations, and approvals within the sandbox is your best protection in any subsequent enforcement proceeding or conformity assessment challenge | Provider | Throughout sandbox period |
Series Context: Chapter IX Governance and Enforcement Framework
| Article | Coverage | Post |
|---|---|---|
| Art.57 | National Competent Authorities — designation, tasks, independence | Art.57 guide |
| Art.58 | NCA enforcement powers — investigation, access, corrective measures | Art.58 guide |
| Art.59 | AI Board — composition, independence, NCA coordination | Art.59 guide |
| Art.60 | EU AI database — public registry, EUID governance, Commission management | Art.60 guide |
| Art.61 | Scientific Panel — independent experts, model evaluation, AI Office advisory | Art.61 guide |
| Art.62 | AI Office enforcement powers — corrective measures, market withdrawal, emergency action | Art.62 guide |
| Art.63 | Advisory Forum — multi-stakeholder consultation, composition, tasks, CoP input | Art.63 guide |
| Art.64 | Access to data and documentation — market surveillance authority enforcement powers | Art.64 guide |
| Art.65 | Reporting of serious incidents — provider NCA notification obligations | Art.65 guide |
| Art.66 | Market surveillance, information exchange, enforcement coordination | Art.66 guide |
| Art.67 | Union safeguard procedure — Commission review of conflicting NCA enforcement | Art.67 guide |
| Art.68 | AI regulatory sandboxes — national establishment, provider exemptions, compliance pathway | This guide |
| Art.69 | Codes of conduct — voluntary application of specific requirements beyond mandatory obligations | Art.69 guide |
EU AI Act Art.68 analysis based on Regulation (EU) 2024/1689 as published in the Official Journal of the European Union. Applicable from 2 August 2025 per Art.113(3). The AI regulatory sandbox framework described follows the general principles of EU regulatory sandbox design established across financial services, pharmaceutical, and data regulation sectors. Personal data processing provisions under Art.68(6) operate alongside and do not displace GDPR obligations; DPA coordination requirements reflect the principle that no EU regulatory framework creates carve-outs from fundamental rights protections. CLOUD Act risk analysis reflects the state of EU-US data transfer agreements as of 2025; providers should seek current legal advice on applicable transfer mechanisms. This guide reflects the text of the Regulation as enacted and does not constitute legal advice.