EU AI Act Art.57 AI Regulatory Sandboxes: Innovation-Safe Testing Framework — Developer Guide (2026)
EU AI Act Article 57 creates the AI regulatory sandbox regime — the EU's primary mechanism for letting AI system providers test and develop innovative AI under supervised conditions before market placement, without being subject to the full compliance burden that applies to deployed systems.
Art.57 answers the question every startup building AI faces: can you legally test a high-risk AI system in Europe before it's compliant? The answer under Art.57 is yes — within a structured sandbox framework that provides regulatory safe harbour during development, while preserving supervisory oversight and full liability exposure.
Art.57 sits in Chapter VI (Measures in Support of Innovation) of the EU AI Act (Regulation (EU) 2024/1689), which entered into force on 1 August 2024. Member States have until 2 August 2026 to have at least one operational sandbox. Understanding Art.57 is essential for:
- AI startups and SMEs seeking innovation support mechanisms that reduce compliance costs during development
- Compliance teams at larger providers who want to test new AI capabilities in a regulated environment before launch
- Deployers who want to verify AI system behaviour under supervisory guidance
- Platform providers whose customers may qualify for or be participating in sandboxes
This article covers Art.57(1)–(14) in full, the sandbox plan architecture, personal data rules in sandboxes, liability during sandbox testing, the good-faith obligation, CLOUD Act jurisdiction risk for sandbox test data, and Python implementation for sandbox eligibility and plan management.
Art.57 in the EU AI Act Chapter Structure
| Chapter | Title | Key Articles |
|---|---|---|
| Chapter I | General Provisions | Art.1–4 |
| Chapter II | Prohibited AI Practices | Art.5 |
| Chapter III | High-Risk AI Systems | Art.6–49 |
| Chapter IV | Transparency (Certain AI) | Art.50 |
| Chapter V | General-Purpose AI Models | Art.51–56 |
| Chapter VI | Measures in Support of Innovation | Art.57–63 |
| Chapter VII | Governance | Art.64–70 |
| Chapter VIII | EU Database | Art.71 |
| Chapter IX | Post-Market Monitoring | Art.72–83 |
| Chapter X | Codes of Conduct | Art.95–97 |
Art.57 is the cornerstone of Chapter VI. Art.58 covers real-world testing outside sandboxes. Art.59 addresses personal data processing for AI innovation. Art.60–63 cover further innovation support measures. Together, Chapter VI creates a regulatory toolkit for reducing innovation friction without compromising safety.
Art.57(1)–(2): Member State Obligation to Establish Sandboxes
Art.57(1): National Sandbox Requirement
Art.57(1) imposes a mandatory obligation on each Member State: their competent authorities must establish at least one AI regulatory sandbox at the national level, which shall be operational by 2 August 2026.
Key structural features:
- Mandatory, not optional: "shall ensure" — not "may establish"
- Deadline: operational by 2 August 2026
- Competent authority obligation: the designated national AI supervisory authority is responsible
- At least one: baseline requirement; more can be established
The competent authorities for sandbox operation are typically the same authorities designated for AI Act enforcement under Art.70 — in practice, national data protection authorities, product safety authorities, or newly designated AI supervisory bodies depending on the sector.
Art.57(2): Additional Sandboxes
Art.57(2) allows:
- Regional or local sandboxes: below national level (city, region, sector)
- Joint sandboxes: established jointly by multiple Member States
- Sector-specific sandboxes: targeting health, finance, transport, or other regulated sectors
For developers, joint sandboxes are particularly valuable: a single sandbox application can provide regulatory safe harbour across multiple EU jurisdictions simultaneously.
| Sandbox Type | Geographic Scope | Legal Basis |
|---|---|---|
| National | One Member State | Art.57(1) mandatory |
| Regional/Local | Sub-national | Art.57(2) permissive |
| Joint | Multiple Member States | Art.57(2) permissive |
| Sector-specific | Any geography, one sector | Art.57(2) permissive |
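Because a joint sandbox can cover several jurisdictions with one application, a practical first question is how many Member States the planned testing needs to span. The following is a minimal illustrative heuristic; the function name and return labels are ours, and the real choice depends on which sandboxes the relevant Member States have actually established under Art.57(1)–(2):

```python
from typing import Optional

def suggest_sandbox_type(target_member_states: list[str], sector: Optional[str] = None) -> str:
    """Illustrative heuristic only: pick which sandbox type to investigate first."""
    if len(target_member_states) > 1:
        return "joint_cross_border"   # one application, several jurisdictions (Art.57(2))
    if sector is not None:
        return "sector_specific"      # if the Member State offers one for that sector
    return "national"                 # Art.57(1) baseline sandbox

# Example: testing planned in Germany and France
print(suggest_sandbox_type(["DE", "FR"]))  # -> joint_cross_border
```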
Art.57(3)–(4): What a Regulatory Sandbox Provides
Art.57(3): The Controlled Environment Framework
Art.57(3) defines what an AI regulatory sandbox must provide: a controlled environment that facilitates the development, training, testing and validation of innovative AI systems for a limited time before they are placed on the market or put into service.
Four key constraints define the sandbox scope:
1. Controlled environment: not unrestricted testing — the environment is supervised and bounded by the sandbox plan
2. Lifecycle coverage: development, training, testing, and validation are all in scope — the sandbox can be used at any pre-market stage of AI development
3. Innovative AI systems: the eligibility criterion is innovation, not specific technical characteristics. AI systems that would qualify as high-risk under Art.6 and Annex III are the primary sandbox target, but Art.57(3) does not restrict to high-risk systems only
4. Limited time: sandboxes operate under a time-bounded sandbox plan — there is no permanent or indefinite sandbox participation
Sandbox plan requirement: Art.57(3) establishes that sandbox participation operates pursuant to a specific sandbox plan agreed between the prospective provider and the relevant competent authority. The sandbox plan is the central legal document of the sandbox relationship.
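To make these constraints operational, a prospective provider can run a simple pre-submission check before approaching the authority. The sketch below is illustrative; the field names are our own and the actual plan content is whatever the competent authority agrees to:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ProposedEngagement:
    supervising_authority: str      # controlled environment: who supervises
    lifecycle_stage: str            # development | training | testing | validation
    innovation_rationale: str       # why the system is innovative
    end_date: Optional[date]        # limited time: participation must be bounded
    already_on_market: bool         # sandbox is pre-market only

def art_57_3_gaps(p: ProposedEngagement) -> list[str]:
    """Return unmet Art.57(3)-style constraints (empty list means none identified)."""
    gaps = []
    if not p.supervising_authority:
        gaps.append("No competent authority named: environment is not 'controlled'.")
    if p.lifecycle_stage not in {"development", "training", "testing", "validation"}:
        gaps.append("Stage must be development, training, testing or validation.")
    if not p.innovation_rationale.strip():
        gaps.append("Innovation rationale missing: eligibility turns on innovation.")
    if p.end_date is None:
        gaps.append("No end date: sandbox participation must be time-limited.")
    if p.already_on_market:
        gaps.append("System already on the market: the sandbox is pre-market only.")
    return gaps
```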
Art.57(4): Real-World Testing Within Sandboxes
Art.57(4) expressly permits testing in real-world conditions supervised within the AI regulatory sandbox. This is a significant provision: it allows sandbox participants to move beyond pure laboratory testing and interact with actual users, data, or systems — but under supervisory oversight within the sandbox framework.
Real-world testing under Art.57(4) is distinct from Art.58, which covers real-world testing outside sandboxes in public spaces. Art.57(4) testing remains within the sandbox framework; Art.58 testing operates under a different set of conditions that includes additional safeguards for affected persons.
Art.57(5): Good-Faith Obligation of Prospective Providers
Art.57(5) imposes a good-faith obligation on sandbox participants. Prospective providers must:
- Conduct testing subject to the conditions and limitations set out in the sandbox plan
- Interact in good faith with the competent authority operating the sandbox
What Good Faith Requires in Practice
The good-faith obligation has concrete operational implications:
| Obligation | Implementation |
|---|---|
| Follow sandbox plan conditions | Do not exceed agreed testing scope, user population, or data volumes |
| Report unexpected risks | Proactively notify supervisor of identified risks, not just scheduled reports |
| Respond to supervisory queries | Cooperate with all authority requests; no obstruction |
| No gaming the sandbox | Sandbox is not a mechanism to avoid compliance — it is a supervised path toward it |
| Correct failures | Implement mitigation when risks are identified; do not suppress negative results |
Enforcement consequence: if the competent authority determines a prospective provider is not acting in good faith, the sandbox can be terminated. Termination eliminates the regulatory safe harbour for any testing conducted after the good-faith obligation was breached.
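In practice the good-faith record is easiest to defend if every interaction with the authority is logged as it happens. The sketch below is a hypothetical tracker; the interaction kinds and the 72-hour internal response window are assumptions chosen for illustration, not figures from Art.57(5) or any published sandbox rules.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class AuthorityInteraction:
    timestamp: datetime
    kind: str                                 # "risk_notification" | "authority_query" | "scheduled_report"
    summary: str
    responded_at: Optional[datetime] = None

@dataclass
class GoodFaithLog:
    interactions: list[AuthorityInteraction] = field(default_factory=list)
    response_window: timedelta = timedelta(hours=72)   # assumed internal SLA, not a legal deadline

    def record(self, kind: str, summary: str) -> AuthorityInteraction:
        entry = AuthorityInteraction(datetime.now(), kind, summary)
        self.interactions.append(entry)
        return entry

    def overdue_queries(self, now: datetime) -> list[AuthorityInteraction]:
        """Authority queries still unanswered after the internal window: escalate these."""
        return [
            i for i in self.interactions
            if i.kind == "authority_query"
            and i.responded_at is None
            and now - i.timestamp > self.response_window
        ]
```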
Art.57(6): Supervisory Guidance and Support
Art.57(6) defines the competent authority's obligations within the sandbox — not just oversight powers, but active support functions:
- Guidance: help prospective providers understand regulatory requirements
- Supervision: monitor testing for risk identification
- Support: assist with identifying failures, effective risk mitigation measures, and understanding obligations under the AI Act
This support function is particularly valuable for startups and SMEs that lack dedicated legal and compliance teams. The competent authority in a well-functioning sandbox operates as a regulatory partner during development, not just a post-market enforcement body.
Risk Identification Mandate
Art.57(6) specifically lists fundamental rights, health and safety as risk dimensions the authority must monitor. The competent authority must identify:
- Fundamental rights risks (non-discrimination, privacy, due process)
- Health and safety risks
- Testing failures and their root causes
- Effective mitigation measures for identified risks
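A lightweight finding record keeps the evidence trail the authority will expect during supervision. The structure below is our own illustration; none of the field names are mandated by Art.57(6):

```python
from dataclasses import dataclass
from datetime import datetime
from enum import Enum

class RiskDimension(Enum):
    FUNDAMENTAL_RIGHTS = "fundamental_rights"   # non-discrimination, privacy, due process
    HEALTH = "health"
    SAFETY = "safety"
    TESTING_FAILURE = "testing_failure"

@dataclass
class RiskFinding:
    identified_at: datetime
    dimension: RiskDimension
    description: str
    root_cause: str = ""                 # testing failures and their root causes
    mitigation: str = ""                 # effective mitigation measure, once identified
    reported_to_authority: bool = False  # proactive reporting supports Art.57(5) good faith
```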
Art.57(7)–(8): Rules Publication and SME Priority Access
Art.57(7): Sandbox Rules Publication Deadline
Art.57(7) requires competent authorities to publish AI regulatory sandbox rules by 2 August 2026. The rules must specify:
- Eligibility criteria for sandbox participation
- Application procedure
- Duration of sandbox participation
- Conditions and limitations during testing
- Reporting requirements for sandbox participants
- Grounds for sandbox termination
Authorities must also inform the AI Office and the Board (Art.65-66) about sandbox establishment and outcomes. This creates a Union-level coordination and learning mechanism.
Art.57(8): SME and Start-Up Priority Access
Art.57(8) creates a mandatory priority access right for SMEs, including start-ups. This is not a best-efforts provision — "shall have priority access" is a binding obligation on competent authorities.
Priority access means:
- Fast-track application processing for eligible SMEs/startups
- Dedicated sandbox capacity reserved for smaller innovators
- Reduced or zero fees (Member States may determine this in their rules)
- Simplified sandbox plan requirements proportionate to the scale of the SME
SME definition: the EU AI Act uses the standard EU definition (Commission Recommendation 2003/361/EC): fewer than 250 employees and either annual turnover ≤ EUR 50 million or balance sheet total ≤ EUR 43 million.
| Eligibility Category | Priority | Application Track |
|---|---|---|
| Start-ups (< 3 years, any size) | Highest | Fast-track |
| Micro enterprises (< 10 FTE, ≤ EUR 2M) | Highest | Fast-track |
| Small enterprises (< 50 FTE, ≤ EUR 10M) | High | Expedited |
| Medium enterprises (< 250 FTE, ≤ EUR 50M) | Standard priority | Standard |
| Large enterprises | No priority | Standard |
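The category table above keys on headcount and turnover; under Recommendation 2003/361/EC the headcount ceiling is decisive and either the turnover ceiling or the balance-sheet ceiling may be satisfied. Below is a hedged sketch that includes the balance-sheet alternative (an illustrative helper; the simplified eligibility class later in this guide checks turnover only):

```python
def sme_category(fte: int, turnover_eur: float, balance_sheet_eur: float) -> str:
    """Classify per Commission Recommendation 2003/361/EC: headcount plus
    either the turnover ceiling or the balance-sheet ceiling."""
    if fte < 10 and (turnover_eur <= 2e6 or balance_sheet_eur <= 2e6):
        return "micro"
    if fte < 50 and (turnover_eur <= 10e6 or balance_sheet_eur <= 10e6):
        return "small"
    if fte < 250 and (turnover_eur <= 50e6 or balance_sheet_eur <= 43e6):
        return "medium"
    return "large"

# Example: 40 staff, EUR 12M turnover but EUR 9M balance sheet still counts as "small"
print(sme_category(40, 12_000_000, 9_000_000))
```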
Art.57(9): Preservation of Supervisory Powers
Art.57(9) is a critical constraint: sandbox participation does not prejudice the supervisory and corrective powers of the competent authorities.
The sandbox is not an immunity zone. If the competent authority identifies significant risks to health, safety, or fundamental rights during sandbox testing, it must require:
- Adequate mitigation: proportionate corrective measures to address the identified risk
- Suspension: if mitigation is not achievable, the development and testing process must be suspended
The suspension requirement leaves no discretion: if a sandbox participant cannot mitigate an identified fundamental rights or safety risk, testing must stop. There is no "accept the risk and continue" pathway within a regulatory sandbox.
What Art.57(9) Means for Sandbox Planning
Developers should build their sandbox plan with Art.57(9) in mind:
- Define risk thresholds for autonomous suspension: what internal findings would trigger voluntary suspension before the authority acts?
- Build real-time risk monitoring into sandbox testing infrastructure
- Maintain suspension-ready architecture: can testing stop cleanly if required?
- Document all risk findings with timestamps — authorities will review this record
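One way to operationalise these points is to pair each risk finding with an immediate, timestamped suspend-or-continue decision, so the record exists before the authority asks for it. The monitor below is a sketch under our own assumptions (the severity scale and the "high" threshold are internal choices, not values from the Act); the SandboxSuspensionCriteria class later in this guide applies the same Art.57(9) logic without the audit trail.

```python
from dataclasses import dataclass, field
from datetime import datetime

SEVERITY_ORDER = ["low", "medium", "high", "critical"]   # internal scale, an assumption

@dataclass
class SuspensionDecision:
    decided_at: datetime
    finding: str
    severity: str
    mitigation_available: bool
    suspend: bool

@dataclass
class SuspensionMonitor:
    significant_from: str = "high"     # internal threshold for a "significant" risk
    audit_trail: list[SuspensionDecision] = field(default_factory=list)

    def evaluate(self, finding: str, severity: str, mitigation_available: bool) -> SuspensionDecision:
        significant = SEVERITY_ORDER.index(severity) >= SEVERITY_ORDER.index(self.significant_from)
        decision = SuspensionDecision(
            decided_at=datetime.now(),
            finding=finding,
            severity=severity,
            mitigation_available=mitigation_available,
            suspend=significant and not mitigation_available,   # Art.57(9) logic
        )
        self.audit_trail.append(decision)   # timestamped record for authority review
        return decision
```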
Art.57(10): Personal Data Processing in Sandboxes
Art.57(10) is a critical provision for AI development: it addresses personal data processing in the sandbox context in a way that creates important permissions and constraints.
The Permission
The competent authority shall ensure that prospective providers of high-risk AI systems can test those systems with personal data that was lawfully collected under an existing legal basis, provided the sandbox testing remains within the original collection purpose.
This provision enables AI developers to use existing datasets for sandbox testing without requiring a new legal basis for each new AI system being developed — provided the data was originally collected under a valid GDPR legal basis and the sandbox testing falls within the original purpose.
The Constraint
Art.57(10) imposes a strict constraint: data used in sandbox testing shall not be used for training and validating other AI systems. The data use is sandboxed within the specific AI system being tested under the sandbox plan.
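Technically, this constraint is easiest to enforce by binding every sandbox dataset to the single AI system named in the sandbox plan and refusing access from anything else. A minimal illustrative registry (class names and identifiers are ours):

```python
class SandboxDataRegistry:
    """Bind each dataset to the one sandbox AI system allowed to use it."""

    def __init__(self) -> None:
        self._bound_to: dict[str, str] = {}   # dataset_id -> sandbox AI system id

    def register(self, dataset_id: str, sandbox_system_id: str) -> None:
        self._bound_to[dataset_id] = sandbox_system_id

    def authorise_use(self, dataset_id: str, requesting_system_id: str) -> bool:
        """Allow use only by the AI system named in the sandbox plan."""
        return self._bound_to.get(dataset_id) == requesting_system_id

registry = SandboxDataRegistry()
registry.register("hr-cv-2024", "recruiter-ai-v0.9")
print(registry.authorise_use("hr-cv-2024", "recruiter-ai-v0.9"))   # True
print(registry.authorise_use("hr-cv-2024", "pricing-model-v2"))    # False: reuse blocked
```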
GDPR × Art.57(10) Interaction
| GDPR Element | Art.57(10) Effect |
|---|---|
| Legal basis for processing | Original legal basis carries over to sandbox testing |
| Purpose limitation (Art.5(1)(b) GDPR) | Sandbox testing within original purpose = permitted |
| Data minimisation (Art.5(1)(c)) | Sandbox plan should specify minimum necessary data scope |
| Storage limitation (Art.5(1)(e)) | Sandbox data retention schedule required |
| Art.9 special categories | Require explicit analysis — sandbox does not override Art.9 |
| Cross-border transfers | Adequacy/SCC obligations still apply |
Practical Implication: No Universal Data Permission
Art.57(10) does not create a blanket data processing permission. It does not override:
- GDPR Article 9 restrictions on special categories (biometric, health, genetic data)
- The original legal basis — if the original collection was consent-based, processing under Art.57(10) may still require reassessing the consent scope
- Data subject rights — individuals retain their Art.15–22 GDPR rights during sandbox testing
Art.57(11): Continued Liability During Sandbox Testing
Art.57(9) preserves supervisory powers; Art.57(11) (by established interpretation) preserves civil liability. Prospective providers participating in the AI regulatory sandbox remain liable under applicable EU and Member State law on liability for any damage inflicted on third parties as a result of experimentation in the sandbox.
This means:
- Applicable EU and national civil liability rules govern damage caused by experimentation in the sandbox
- The revised Product Liability Directive (Directive (EU) 2024/2853) applies where the AI system is a product or a product component, with software expressly within its scope
- GDPR Art.82 liability for material and non-material damage applies to data processing harms
- Sandbox participation cannot be used as a defence against third-party damage claims
Developer Implication: Insurance and Indemnification
For sandbox participants conducting real-world testing under Art.57(4):
- Professional indemnity insurance should cover sandbox testing activities
- Third-party testing agreements should be clear on liability allocation
- Sandbox plan should document the liability framework agreed with the competent authority
Art.57(12)–(14): Union-Level Coordination
Art.57(12): Learning Loop for AI Act Guidance
Art.57(12) creates a feedback mechanism: sandbox results shall be relevant for the purposes of guidance and technical support under Art.96 and Art.97 (SME implementation guidance and Commission guidance).
This means sandbox outcomes directly inform the guidance that the Commission and AI Office publish. Sandbox participants who identify novel compliance challenges contribute to the regulatory learning process.
Art.57(13)–(14): AI Office and Board Coordination
The AI Office and the Board (Art.65-66) must:
- Support and coordinate Member State sandboxes
- Facilitate exchange of information on sandboxes across Member States
- Promote best practice sharing
For developers, this coordination matters: a sandbox experience in Germany can inform how France or Spain approach equivalent testing. Joint sandboxes under Art.57(2) can directly leverage this coordination.
The Sandbox Plan: Architecture and Required Content
The sandbox plan is the central legal document of the Art.57 regime. It is agreed between the prospective provider and the competent authority before sandbox access is granted.
Sandbox Plan Required Elements
Based on Art.57(3)–(9) requirements, a compliant sandbox plan must address:
| Element | Description |
|---|---|
| AI system description | Technical description of the AI system under development |
| Development stage | Which phase of development/training/testing/validation |
| Testing scope | User population, data volumes, geographic scope |
| Duration | Start date, end date, review milestones |
| Risk identification protocol | How risks will be identified and reported to authority |
| Suspension criteria | Conditions under which testing stops (Art.57(9)) |
| Personal data processing | Data sources, legal basis, Art.57(10) constraints |
| Real-world testing conditions | If applicable under Art.57(4): supervised conditions |
| Good-faith obligations | Specific reporting schedules and authority interaction |
| Liability framework | Insurance, indemnification, third-party agreements |
| Post-sandbox compliance pathway | How the provider will achieve full compliance before market placement |
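Before submission, a draft plan can be checked against this element list mechanically. A small sketch follows; the element keys are our own labels for the rows above, not a statutory list:

```python
REQUIRED_PLAN_ELEMENTS = [
    "ai_system_description", "development_stage", "testing_scope", "duration",
    "risk_identification_protocol", "suspension_criteria", "personal_data_processing",
    "real_world_testing_conditions", "good_faith_obligations", "liability_framework",
    "post_sandbox_compliance_pathway",
]

def missing_plan_elements(plan: dict[str, str]) -> list[str]:
    """Return required elements that are absent or empty in a draft plan."""
    return [k for k in REQUIRED_PLAN_ELEMENTS if not plan.get(k, "").strip()]

draft = {"ai_system_description": "CV screening AI", "duration": "2026-09-01 to 2027-03-01"}
print(missing_plan_elements(draft))   # everything still to be drafted
```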
Post-Sandbox Compliance: What Sandbox Does Not Do
Sandbox participation does not create a presumption of compliance for the deployed system. When a sandbox participant exits the sandbox and places the AI system on the market, full compliance with all applicable obligations is required:
- Art.9: Risk management system
- Art.10: Data governance
- Art.13: Transparency
- Art.14: Human oversight
- Art.17: Quality management system
- Art.22: Provider registration in EUAIDB (Art.71)
- Art.43: Conformity assessment
- Art.48: Declaration of conformity
- Art.49: CE marking
The sandbox de-risks development by providing guided compliance preparation — but exit from the sandbox means full compliance requirements activate.
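A simple way to keep the exit in view during sandbox testing is a readiness gate over that obligation list. The sketch below uses this guide's article labels and an assumed status dictionary maintained by the provider's own compliance tracker:

```python
EXIT_OBLIGATIONS = [
    "Art.9 risk management system", "Art.10 data governance", "Art.13 transparency",
    "Art.14 human oversight", "Art.17 quality management system",
    "Art.22 EUAIDB registration", "Art.43 conformity assessment",
    "Art.48 declaration of conformity", "Art.49 CE marking",
]

def ready_for_market_placement(status: dict[str, bool]) -> tuple[bool, list[str]]:
    """True only when every listed obligation is marked complete."""
    outstanding = [o for o in EXIT_OBLIGATIONS if not status.get(o, False)]
    return (len(outstanding) == 0, outstanding)

ok, todo = ready_for_market_placement({"Art.9 risk management system": True})
print(ok, todo[:3])   # False, plus the first three outstanding obligations
```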
CLOUD Act Jurisdiction Risk in AI Regulatory Sandboxes
For AI developers using US-headquartered cloud infrastructure for sandbox testing, the CLOUD Act creates a significant risk that is not addressed by the sandbox framework itself.
The Risk Scenario
Under the US CLOUD Act (18 U.S.C. § 2713), providers subject to US jurisdiction must disclose data in their possession, custody, or control when served with lawful US legal process, regardless of where that data is stored. A sandbox participant testing an AI system on AWS, Azure, or Google Cloud typically stores:
- Training data used in sandbox development
- Test outputs and model evaluations
- Fundamental rights risk assessments
- Sandbox plan documentation and authority correspondence
- Prototype model weights and architectures
All of this is potentially subject to CLOUD Act compellability — even if the sandbox participant is a European SME, even if the sandbox is EU-regulatory-authority-supervised, and even if the data relates to a development-stage AI system that has not yet been placed on the market.
The Dual-Compellability Problem
| Document Type | EU / Member State Authority Access | US CLOUD Act Risk |
|---|---|---|
| Sandbox plan | Art.57(6) supervisory access | Compellable if stored US cloud |
| Training data | Art.57(10) controlled access | Compellable if stored US cloud |
| Risk assessments | Art.57(6) supervision | Compellable if stored US cloud |
| Test results and model outputs | Sandbox plan terms | Compellable if stored US cloud |
| Authority correspondence | Sandbox relationship | Compellable if stored US cloud |
Mitigation: EU-Sovereign Sandbox Infrastructure
For sandbox participants who need single-jurisdiction data governance:
- Store all sandbox testing data and documentation on EU-sovereign infrastructure
- Prefer platforms with EU legal establishment and no US parent subject to CLOUD Act
- Document the infrastructure jurisdiction decision in the sandbox plan as a risk mitigation measure
- Include an infrastructure jurisdiction assessment in the Art.9 risk file for the system under development
Deploying sandbox infrastructure on sota.io — an EU-established platform with a German datacenter and no US data transfer requirements — keeps sandbox test data, risk assessments, and authority correspondence within a single EU legal order.
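One concrete way to act on the mitigation steps above is a periodic jurisdiction audit over every sandbox artifact. The mapping below is an assumed example; the provider names, locations, and artifact labels are placeholders supplied by the team, not data pulled from any registry:

```python
ARTIFACT_LOCATIONS = {
    "training_data": ("sota.io", "EU"),
    "model_weights": ("sota.io", "EU"),
    "risk_assessments": ("us-cloud-example", "US"),   # illustrative exposure
    "authority_correspondence": ("sota.io", "EU"),
}

def cloud_act_exposed(locations: dict[str, tuple[str, str]]) -> list[str]:
    """Artifacts held outside a single EU legal order and therefore potentially compellable."""
    return [name for name, (_provider, jurisdiction) in locations.items() if jurisdiction != "EU"]

print(cloud_act_exposed(ARTIFACT_LOCATIONS))   # -> ['risk_assessments']
```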
Python Implementation: Sandbox Eligibility and Plan Management
```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional
from enum import Enum


class SandboxType(Enum):
    NATIONAL = "national"
    REGIONAL = "regional"
    JOINT_CROSS_BORDER = "joint_cross_border"
    SECTOR_SPECIFIC = "sector_specific"


class ParticipantCategory(Enum):
    STARTUP = "startup"  # < 3 years old, any size
    MICRO = "micro"      # < 10 FTE, ≤ EUR 2M turnover
    SMALL = "small"      # < 50 FTE, ≤ EUR 10M turnover
    MEDIUM = "medium"    # < 250 FTE, ≤ EUR 50M turnover
    LARGE = "large"      # Does not qualify as SME


class SandboxStage(Enum):
    DEVELOPMENT = "development"
    TRAINING = "training"
    TESTING = "testing"
    VALIDATION = "validation"


class RiskLevel(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"
    CRITICAL = "critical"  # Triggers Art.57(9) suspension review


@dataclass
class SandboxEligibilityAssessment:
    """
    Art.57(3) and (8): Assess eligibility for AI regulatory sandbox participation.
    SMEs and start-ups have priority access under Art.57(8).
    """
    company_name: str
    employees_fte: int
    annual_turnover_eur: float
    company_age_years: float
    ai_system_description: str
    target_sandbox_country: str  # ISO 3166-1 alpha-2

    @property
    def participant_category(self) -> ParticipantCategory:
        if self.company_age_years < 3:
            return ParticipantCategory.STARTUP
        if self.employees_fte < 10 and self.annual_turnover_eur <= 2_000_000:
            return ParticipantCategory.MICRO
        if self.employees_fte < 50 and self.annual_turnover_eur <= 10_000_000:
            return ParticipantCategory.SMALL
        if self.employees_fte < 250 and self.annual_turnover_eur <= 50_000_000:
            return ParticipantCategory.MEDIUM
        return ParticipantCategory.LARGE

    @property
    def has_priority_access(self) -> bool:
        """Art.57(8): SMEs including start-ups shall have priority access."""
        return self.participant_category in {
            ParticipantCategory.STARTUP,
            ParticipantCategory.MICRO,
            ParticipantCategory.SMALL,
            ParticipantCategory.MEDIUM,
        }

    @property
    def priority_tier(self) -> str:
        mapping = {
            ParticipantCategory.STARTUP: "highest",
            ParticipantCategory.MICRO: "highest",
            ParticipantCategory.SMALL: "high",
            ParticipantCategory.MEDIUM: "standard_priority",
            ParticipantCategory.LARGE: "no_priority",
        }
        return mapping[self.participant_category]

    def eligibility_report(self) -> dict:
        return {
            "company": self.company_name,
            "category": self.participant_category.value,
            "priority_access": self.has_priority_access,
            "priority_tier": self.priority_tier,
            "eligible": True,  # All innovators may apply; priority determines track
            "fast_track": self.participant_category in {
                ParticipantCategory.STARTUP,
                ParticipantCategory.MICRO,
            },
        }


@dataclass
class PersonalDataSandboxPlan:
    """
    Art.57(10): Personal data processing rules in the AI regulatory sandbox.
    Data collected for original purpose may be used; must not train other AI systems.
    """
    data_sources: list[str]
    original_legal_basis: str  # GDPR Art.6 or Art.9 legal basis
    original_collection_purpose: str
    sandbox_testing_purpose: str
    is_within_original_purpose: bool  # Must be True for Art.57(10) to apply
    special_categories_involved: bool  # Art.9 GDPR — requires additional analysis
    cross_border_transfer: bool

    def validate(self) -> list[str]:
        """Return list of compliance issues."""
        issues = []
        if not self.is_within_original_purpose:
            issues.append(
                "Art.57(10) permission requires sandbox testing to be within "
                "the original collection purpose. Establish new legal basis or "
                "adjust testing scope."
            )
        if self.special_categories_involved:
            issues.append(
                "Art.9 GDPR special category data requires explicit legal basis "
                "analysis beyond Art.57(10) permission. Art.57(10) does not "
                "override Art.9 restrictions."
            )
        if self.cross_border_transfer:
            issues.append(
                "Cross-border data transfer: GDPR Chapter V obligations still "
                "apply. Adequacy decision or SCC required even in sandbox."
            )
        return issues


@dataclass
class SandboxSuspensionCriteria:
    """
    Art.57(9): Conditions under which sandbox testing must be suspended.
    Significant risks to health, safety, or fundamental rights that cannot be mitigated.
    """
    fundamental_rights_risk_threshold: RiskLevel = RiskLevel.HIGH
    health_safety_risk_threshold: RiskLevel = RiskLevel.HIGH
    unmitigable_risk_suspension: bool = True  # Always true per Art.57(9)

    def requires_suspension(self, identified_risk: RiskLevel, mitigation_available: bool) -> bool:
        """
        Art.57(9): If significant risk identified and mitigation not achievable,
        development and testing process must be suspended.
        """
        is_significant = identified_risk in {RiskLevel.HIGH, RiskLevel.CRITICAL}
        return is_significant and not mitigation_available


@dataclass
class SandboxPlan:
    """
    Art.57(3): Formal sandbox plan agreed between prospective provider and
    competent authority. Central legal document of the sandbox relationship.
    """
    # Parties
    provider_name: str
    competent_authority: str
    sandbox_type: SandboxType
    member_state: str
    # AI System
    ai_system_name: str
    ai_system_description: str
    development_stage: SandboxStage
    is_high_risk_candidate: bool  # Will be high-risk under Art.6/Annex III
    # Duration (Art.57(3): limited time)
    start_date: date
    end_date: date
    review_milestones: list[date] = field(default_factory=list)
    # Scope (Art.57(3))
    testing_scope_description: str = ""
    max_users_in_testing: Optional[int] = None
    real_world_testing: bool = False  # Art.57(4)
    # Participant
    eligibility: Optional[SandboxEligibilityAssessment] = None
    # Data (Art.57(10))
    personal_data_plan: Optional[PersonalDataSandboxPlan] = None
    # Safety (Art.57(9))
    suspension_criteria: SandboxSuspensionCriteria = field(
        default_factory=SandboxSuspensionCriteria
    )
    # Infrastructure
    infrastructure_jurisdiction: str = ""  # Should be "EU" for CLOUD Act mitigation
    infrastructure_provider: str = ""
    # Post-sandbox path
    post_sandbox_compliance_articles: list[str] = field(
        default_factory=lambda: [
            "Art.9 (Risk Management)",
            "Art.10 (Data Governance)",
            "Art.13 (Transparency)",
            "Art.14 (Human Oversight)",
            "Art.17 (QMS)",
            "Art.22 (Registration EUAIDB)",
            "Art.43 (Conformity Assessment)",
            "Art.48 (Declaration of Conformity)",
            "Art.49 (CE Marking)",
        ]
    )

    @property
    def duration_days(self) -> int:
        return (self.end_date - self.start_date).days

    @property
    def cloud_act_risk(self) -> str:
        if self.infrastructure_jurisdiction.upper() != "EU":
            return (
                f"HIGH: {self.infrastructure_provider} infrastructure outside EU. "
                "Sandbox test data may be subject to CLOUD Act compellability."
            )
        return "LOW: EU-sovereign infrastructure. Single EU legal order."

    def validate(self) -> list[str]:
        issues = []
        if self.duration_days <= 0:
            issues.append("Sandbox end date must be after start date.")
        if self.duration_days > 730:
            issues.append(
                "Sandbox duration exceeds 2 years. Competent authorities may "
                "require justification for extended sandbox access."
            )
        if not self.review_milestones:
            issues.append(
                "No review milestones defined. Art.57(6) supervision requires "
                "regular interaction — define milestone check-in dates."
            )
        if self.personal_data_plan:
            issues.extend(self.personal_data_plan.validate())
        if self.infrastructure_jurisdiction.upper() != "EU":
            issues.append(
                f"Cloud Act Risk: infrastructure in {self.infrastructure_jurisdiction}. "
                "Consider EU-sovereign infrastructure for single-regime data governance."
            )
        return issues

    def to_summary(self) -> dict:
        return {
            "provider": self.provider_name,
            "authority": self.competent_authority,
            "sandbox_type": self.sandbox_type.value,
            "ai_system": self.ai_system_name,
            "stage": self.development_stage.value,
            "duration_days": self.duration_days,
            "start": str(self.start_date),
            "end": str(self.end_date),
            "real_world_testing": self.real_world_testing,
            "sme_priority": self.eligibility.has_priority_access if self.eligibility else None,
            "cloud_act_risk": self.cloud_act_risk,
            "validation_issues": self.validate(),
        }


# --- Usage Example ---

def build_sandbox_plan_example() -> SandboxPlan:
    """
    Example: EU startup testing a high-risk AI recruitment system
    in a national sandbox before market placement.
    """
    eligibility = SandboxEligibilityAssessment(
        company_name="RecruiterAI GmbH",
        employees_fte=12,
        annual_turnover_eur=800_000,
        company_age_years=1.5,
        ai_system_description="CV screening AI with bias detection for financial sector recruitment",
        target_sandbox_country="DE",
    )
    data_plan = PersonalDataSandboxPlan(
        data_sources=["HR partner historical CV dataset (anonymised)", "Synthetic CV generator"],
        original_legal_basis="Art.6(1)(f) GDPR legitimate interests — HR analytics",
        original_collection_purpose="HR analytics and workforce planning",
        sandbox_testing_purpose="Training and validating AI recruitment screening model",
        is_within_original_purpose=True,
        special_categories_involved=False,
        cross_border_transfer=False,
    )
    plan = SandboxPlan(
        provider_name="RecruiterAI GmbH",
        competent_authority="Bundesnetzagentur AI Regulatory Sandbox (DE)",
        sandbox_type=SandboxType.NATIONAL,
        member_state="DE",
        ai_system_name="RecruiterAI v0.9",
        ai_system_description=(
            "AI system for CV screening in financial sector recruitment. "
            "Annex III 1(a) candidate — employment decision assistance system."
        ),
        development_stage=SandboxStage.TESTING,
        is_high_risk_candidate=True,  # Annex III 1(a): employment decisions
        start_date=date(2026, 9, 1),
        end_date=date(2027, 3, 1),
        review_milestones=[
            date(2026, 10, 1),
            date(2026, 12, 1),
            date(2027, 2, 1),
        ],
        testing_scope_description="Testing with 50 HR professionals at 3 financial sector partner firms",
        max_users_in_testing=50,
        real_world_testing=True,  # Art.57(4)
        eligibility=eligibility,
        personal_data_plan=data_plan,
        infrastructure_jurisdiction="EU",
        infrastructure_provider="sota.io (Frankfurt, Germany)",
    )
    return plan


if __name__ == "__main__":
    import json

    plan = build_sandbox_plan_example()
    print("Sandbox Plan Summary:")
    print(json.dumps(plan.to_summary(), indent=2, default=str))
    eligibility_report = plan.eligibility.eligibility_report()
    print("\nEligibility Assessment:")
    print(json.dumps(eligibility_report, indent=2))
```
40-Item Compliance Checklist: Art.57 AI Regulatory Sandbox
Sandbox Eligibility and Application
- 1. Identified competent authority responsible for AI regulatory sandbox in target Member State
- 2. Confirmed sandbox is operational (Art.57(1) deadline: 2 August 2026)
- 3. Reviewed published sandbox rules under Art.57(7) — rules published by 2 August 2026
- 4. Assessed participant category: startup, micro, small, medium, or large enterprise
- 5. SMEs/startups: applied for priority access track under Art.57(8)
- 6. Confirmed AI system under development is innovative and pre-market
Sandbox Plan Development
- 7. Drafted sandbox plan describing AI system (technical description, intended purpose)
- 8. Defined development stage covered by sandbox plan (development/training/testing/validation)
- 9. Specified sandbox duration (start date, end date) — must be limited time per Art.57(3)
- 10. Defined testing scope: user population, data volumes, geographic scope
- 11. Established review milestones for authority interaction under Art.57(6)
- 12. Agreed sandbox plan with competent authority before initiating sandbox activities
Good-Faith Obligation (Art.57(5))
- 13. Defined internal reporting schedule for proactive risk communication to authority
- 14. Documented good-faith compliance protocol: conditions under which risks are reported
- 15. Established authority communication channel and designated sandbox contact person
- 16. Confirmed all personnel involved in sandbox testing understand good-faith obligations
Real-World Testing (Art.57(4))
- 17. If real-world testing: defined supervised conditions agreed with competent authority
- 18. If real-world testing: documented participant information and consent procedures
- 19. Real-world testing scope bounded by sandbox plan — not exceeding agreed limits
- 20. Monitoring in place to detect risks arising from real-world testing in real time
Personal Data Processing (Art.57(10))
- 21. Identified all personal data sources used in sandbox testing
- 22. Confirmed original legal basis for data collection under GDPR Art.6 or Art.9
- 23. Verified sandbox testing purpose is within original collection purpose (Art.57(10))
- 24. Implemented data isolation: sandbox data not used to train or validate other AI systems
- 25. Art.9 special categories: separate legal basis analysis conducted if applicable
- 26. Cross-border transfer obligations assessed if sandbox data crosses EEA borders
- 27. Data subject rights procedures in place for individuals whose data is used in sandbox
Risk Management and Suspension (Art.57(9))
- 28. Defined internal risk threshold criteria for Art.57(9) suspension review
- 29. Established risk monitoring architecture in sandbox testing infrastructure
- 30. Documented suspension protocol: who decides, how testing stops, authority notification
- 31. Fundamental rights risk assessment conducted for AI system under development
- 32. Health and safety risk assessment conducted for AI system under development
- 33. Mitigation measures for identified risks documented before sandbox testing begins
Liability and Insurance (Art.57 × EU AI Liability Framework)
- 34. Professional indemnity insurance covers sandbox testing activities
- 35. Third-party testing agreements include liability allocation clauses
- 36. Internal liability tracking for any damage incidents during sandbox testing
Infrastructure and CLOUD Act
- 37. Sandbox infrastructure jurisdiction documented (EU vs non-EU)
- 38. If non-EU infrastructure: CLOUD Act risk assessed and documented in sandbox plan
- 39. Sandbox test data, model outputs, and authority correspondence stored in documented jurisdiction
Post-Sandbox Compliance Pathway
- 40. Defined post-sandbox compliance pathway: which Art.9-49 obligations will be addressed, and by when, before market placement
Art.57 Cross-Article Matrix
| Article | Obligation | Art.57 Interaction |
|---|---|---|
| Art.9 | Risk management system | Sandbox de-risks Art.9 compliance — risk identification under supervisory guidance |
| Art.10 | Data governance | Art.57(10) personal data permission limited to original purpose — feeds Art.10 data documentation |
| Art.13 | Transparency | Not suspended in sandbox — sandbox plan should address transparency obligations |
| Art.14 | Human oversight | Not suspended in sandbox — human oversight tested in sandbox environment |
| Art.17 | Quality management | Sandbox plan is a precursor to the QMS — structured development process |
| Art.22 | EU database registration | Required upon market placement — sandbox does not create registration obligation |
| Art.43 | Conformity assessment | Required post-sandbox — sandbox prepares but does not substitute |
| Art.48 | Declaration of conformity | Required post-sandbox — sandbox findings inform DoC documentation basis |
| Art.57(4) | Real-world testing | Distinct from Art.58 (testing outside sandbox) — different conditions apply |
| Art.58 | Testing outside sandbox | Art.57 sandbox = supervised controlled environment; Art.58 = broader real-world conditions |
| Art.59 | Personal data for AI development | Art.57(10) addresses sandbox-specific data rules; Art.59 addresses broader public interest processing |
| Art.71 | EUAIDB registration | Post-sandbox obligation — sandbox participation does not require registration |
| Art.96 | SME implementation guidance | Art.57(12): sandbox results inform guidance development |
| Art.97 | Commission technical guidance | Art.57(12): sandbox results inform Commission guidance under Art.97 |
EU-Native Infrastructure and Art.57
For AI developers building in EU-sovereign infrastructure during sandbox participation, Art.57 creates a three-layer compliance architecture:
Layer 1: Single-Regime Data Governance
When sandbox testing infrastructure is EU-sovereign:
- All training data, test outputs, and model evaluations remain in a single EU legal order
- Art.57(10) personal data constraints are enforced within an EU legal framework
- Authority correspondence and sandbox plan documentation are not subject to CLOUD Act
- Art.70 confidentiality obligations for sandbox information can be maintained without jurisdictional conflict
Layer 2: Clean Post-Sandbox Compliance Path
The Art.9 risk file developed during sandbox testing can state:
- Infrastructure jurisdiction: EU
- CLOUD Act risk: LOW (no US parent, no US subprocessors for sandbox data)
- Data protection regime: single (GDPR), no conflicting legal orders
When the AI system exits the sandbox and enters market placement, this clean infrastructure record supports the Art.17 QMS documentation and the Art.48 Declaration of Conformity.
Layer 3: Authority Trust and Good-Faith Compliance
EU-sovereign infrastructure for sandbox testing demonstrates good-faith compliance (Art.57(5)) in a concrete way: the provider can show that all sandbox data remains within the supervisory jurisdiction of the competent authority without extraterritorial access risk.
See Also
- EU AI Act Art.58: Real-World Testing Outside AI Regulatory Sandboxes — Developer Guide
- EU AI Act Art.9: Risk Management System for High-Risk AI — Developer Guide
- EU AI Act Art.43: Conformity Assessment for High-Risk AI Systems — Developer Guide
- EU AI Act Art.71: EU Database for High-Risk AI Systems (EUAIDB) — Developer Guide
- EU AI Act Art.56: GPAI Code of Practice — Systemic Risk Compliance Pathway Developer Guide