2026-04-16 · 12 min read

EU AI Act Art.57 AI Regulatory Sandboxes: Innovation-Safe Testing Framework — Developer Guide (2026)

EU AI Act Article 57 creates the AI regulatory sandbox regime — the EU's primary mechanism for letting AI system providers test and develop innovative AI under supervised conditions before market placement, without being subject to the full compliance burden that applies to deployed systems.

Art.57 answers the question every startup building AI faces: can you legally test a high-risk AI system in Europe before it's compliant? The answer under Art.57 is yes — within a structured sandbox framework that provides regulatory safe harbour during development, while preserving supervisory oversight and full liability exposure.

Art.57 sits in Chapter VI (Measures in Support of Innovation) of the EU AI Act (Regulation (EU) 2024/1689). Under the Art.113 application schedule, Chapter VI applies from 2 August 2026, the same date by which Member States must have at least one operational sandbox. Understanding Art.57 is essential for:

This article covers Art.57(1)–(14) in full, the sandbox plan architecture, personal data rules in sandboxes, liability during sandbox testing, the good-faith obligation, CLOUD Act jurisdiction risk for sandbox test data, and Python implementation for sandbox eligibility and plan management.


Art.57 in the EU AI Act Chapter Structure

| Chapter | Title | Key Articles |
| --- | --- | --- |
| Chapter I | General Provisions | Art.1–4 |
| Chapter II | Prohibited AI Practices | Art.5 |
| Chapter III | High-Risk AI Systems | Art.6–49 |
| Chapter IV | Transparency (Certain AI) | Art.50 |
| Chapter V | General-Purpose AI Models | Art.51–56 |
| Chapter VI | Measures in Support of Innovation | Art.57–63 |
| Chapter VII | Governance | Art.64–70 |
| Chapter VIII | EU Database | Art.71 |
| Chapter IX | Post-Market Monitoring | Art.72–94 |
| Chapter X | Codes of Conduct | Art.95–96 |

Art.57 is the cornerstone of Chapter VI. Art.58 sets out the detailed arrangements for, and functioning of, the sandboxes. Art.59 addresses further processing of personal data for developing certain AI systems in the public interest in the sandbox. Art.60 and Art.61 govern real-world testing outside sandboxes, and Art.62–63 provide support measures and derogations for SMEs and specific operators. Together, Chapter VI creates a regulatory toolkit for reducing innovation friction without compromising safety.


Art.57(1)–(2): Member State Obligation to Establish Sandboxes

Art.57(1): National Sandbox Requirement

Art.57(1) imposes a mandatory obligation on each Member State: their competent authorities must establish at least one AI regulatory sandbox at the national level, which shall be operational by 2 August 2026.

Key structural features:

The competent authorities for sandbox operation are typically the same authorities designated for AI Act enforcement under Art.70 — in practice, national data protection authorities, product safety authorities, or newly designated AI supervisory bodies depending on the sector.

Art.57(2): Additional Sandboxes

Art.57(2) allows:

For developers, joint sandboxes are particularly valuable: a single sandbox application can provide regulatory safe harbour across multiple EU jurisdictions simultaneously.

| Sandbox Type | Geographic Scope | Legal Basis |
| --- | --- | --- |
| National | One Member State | Art.57(1) mandatory |
| Regional/Local | Sub-national | Art.57(2) permissive |
| Joint | Multiple Member States | Art.57(2) permissive |
| Sector-specific | Any geography, one sector | Art.57(2) permissive |

Art.57(3)–(4): What a Regulatory Sandbox Provides

Art.57(3): The Controlled Environment Framework

Art.57(3) defines what an AI regulatory sandbox must provide: a controlled environment that facilitates the development, training, testing and validation of innovative AI systems for a limited time before their placing on the market or putting into service.

Four key constraints define the sandbox scope:

1. Controlled environment: not unrestricted testing — the environment is supervised and bounded by the sandbox plan

2. Lifecycle coverage: development, training, testing, and validation are all in scope — the sandbox can be used at any pre-market stage of AI development

3. Innovative AI systems: the eligibility criterion is innovation, not specific technical characteristics. AI systems that would qualify as high-risk under Art.6 and Annex III are the primary sandbox target, but Art.57(3) does not restrict to high-risk systems only

4. Limited time: sandboxes operate under a time-bounded sandbox plan — there is no permanent or indefinite sandbox participation

Sandbox plan requirement: Art.57(3) establishes that sandbox participation operates pursuant to a specific sandbox plan agreed between the prospective provider and the relevant competent authority. The sandbox plan is the central legal document of the sandbox relationship.

Art.57(4): Real-World Testing Within Sandboxes

Art.57(4) expressly permits testing in real-world conditions supervised within the AI regulatory sandbox. This is a significant provision: it allows sandbox participants to move beyond pure laboratory testing and interact with actual users, data, or systems — but under supervisory oversight within the sandbox framework.

Real-world testing under Art.57(4) is distinct from Art.60, which covers real-world testing outside sandboxes. Art.57(4) testing remains within the sandbox framework; Art.60 testing operates under a different set of conditions, including the additional safeguards for affected persons in Art.61 (informed consent).


Art.57(5): Good-Faith Obligation of Prospective Providers

Art.57(5) imposes a good-faith obligation on sandbox participants. Prospective providers must:

What Good Faith Requires in Practice

The good-faith obligation has concrete operational implications:

| Obligation | Implementation |
| --- | --- |
| Follow sandbox plan conditions | Do not exceed agreed testing scope, user population, or data volumes |
| Report unexpected risks | Proactively notify supervisor of identified risks, not just scheduled reports |
| Respond to supervisory queries | Cooperate with all authority requests; no obstruction |
| No gaming the sandbox | Sandbox is not a mechanism to avoid compliance — it is a supervised path toward it |
| Correct failures | Implement mitigation when risks are identified; do not suppress negative results |

Enforcement consequence: if the competent authority determines a prospective provider is not acting in good faith, the sandbox can be terminated. Termination eliminates the regulatory safe harbour for any testing conducted after the good-faith obligation was breached.
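The window effect of termination can be sketched mechanically. The helper below is a hypothetical illustration (the `TestRun` record and `safe_harbour_partition` function are illustrative names, not from the Act): it partitions logged sandbox test runs into those still covered by the safe harbour and those exposed once the good-faith obligation was breached.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional


@dataclass
class TestRun:
    run_id: str
    run_date: date


def safe_harbour_partition(
    runs: list[TestRun], breach_date: Optional[date]
) -> tuple[list[TestRun], list[TestRun]]:
    """Split test runs into (covered, exposed) relative to a good-faith breach.

    Runs on or after the breach date lose safe-harbour coverage; with no
    breach, everything remains covered.
    """
    if breach_date is None:
        return runs, []
    covered = [r for r in runs if r.run_date < breach_date]
    exposed = [r for r in runs if r.run_date >= breach_date]
    return covered, exposed
```

A provider logging each test run this way can show the authority exactly which activities fall inside the protected window, assuming the breach date marks the end of protection.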


Art.57(6): Supervisory Guidance and Support

Art.57(6) defines the competent authority's obligations within the sandbox — not just oversight powers, but active support functions:

This support function is particularly valuable for startups and SMEs that lack dedicated legal and compliance teams. The competent authority in a well-functioning sandbox operates as a regulatory partner during development, not just a post-market enforcement body.

Risk Identification Mandate

Art.57(6) specifically lists fundamental rights, health and safety as risk dimensions the authority must monitor. The competent authority must identify:


Art.57(7)–(8): Rules Publication and SME Priority Access

Art.57(7): Sandbox Rules Publication Deadline

Art.57(7) requires competent authorities to publish AI regulatory sandbox rules by 2 August 2026. The rules must specify:

Authorities must also inform the AI Office (Art.64) and the Board (Art.65–66) about sandbox establishment and outcomes. This creates a Union-level coordination and learning mechanism.

Art.57(8): SME and Start-Up Priority Access

Art.57(8) creates a mandatory priority access right for SMEs, including start-ups. This is not a best-efforts provision — "shall have priority access" is a binding obligation on competent authorities.

Priority access means:

SME definition: the EU AI Act uses the standard EU definition (Commission Recommendation 2003/361/EC): fewer than 250 employees and annual turnover ≤ EUR 50 million or balance sheet ≤ EUR 43 million.

| Eligibility Category | Priority | Application Track |
| --- | --- | --- |
| Start-ups (< 3 years, any size) | Highest | Fast-track |
| Micro enterprises (< 10 FTE, ≤ EUR 2M) | Highest | Fast-track |
| Small enterprises (< 50 FTE, ≤ EUR 10M) | High | Expedited |
| Medium enterprises (< 250 FTE, ≤ EUR 50M) | Standard priority | Standard |
| Large enterprises | No priority | Standard |

Art.57(9): Preservation of Supervisory Powers

Art.57(9) is a critical constraint: sandbox participation does not prejudice the supervisory and corrective powers of the competent authorities.

The sandbox is not an immunity zone. If the competent authority identifies significant risks to health, safety, or fundamental rights during sandbox testing, it must require:

  1. Adequate mitigation: proportionate corrective measures to address the identified risk
  2. Suspension: if mitigation is not achievable, the development and testing process must be suspended

The suspension power is unconditional — if a sandbox participant cannot mitigate an identified fundamental rights or safety risk, the sandbox must stop. There is no "accept the risk and continue" pathway within a regulatory sandbox.

What Art.57(9) Means for Sandbox Planning

Developers should build their sandbox plan with Art.57(9) in mind:


Art.57(10): Personal Data Processing in Sandboxes

Art.57(10) is a critical provision for AI development: it addresses personal data processing in the sandbox context in a way that creates important permissions and constraints.

The Permission

The competent authority must ensure that prospective providers of high-risk AI systems can test with personal data that was collected for the original purpose under the respective legal basis for processing.

This provision enables AI developers to use existing datasets for sandbox testing without requiring a new legal basis for each new AI system being developed — provided the data was originally collected under a valid GDPR legal basis and the sandbox testing falls within the original purpose.

The Constraint

Art.57(10) imposes a strict constraint: data used in sandbox testing shall not be used for training and validating other AI systems. The data use is sandboxed within the specific AI system being tested under the sandbox plan.
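This cross-system reuse ban lends itself to a mechanical guard. The registry below is a hypothetical sketch (the `SandboxDataRegistry` class is illustrative, not an Act requirement): it binds each sandbox dataset to the first AI system it is authorised for and rejects reuse for any other system.

```python
class SandboxDataRegistry:
    """Bind each sandbox dataset to a single AI system (Art.57(10) constraint sketch)."""

    def __init__(self) -> None:
        self._bindings: dict[str, str] = {}  # dataset_id -> bound ai_system_id

    def authorise_use(self, dataset_id: str, ai_system_id: str) -> bool:
        """Allow use only for the AI system the dataset was first bound to."""
        bound = self._bindings.setdefault(dataset_id, ai_system_id)
        return bound == ai_system_id
```

Wiring a check like this into data-loading pipelines makes the "sandboxed data use" constraint enforceable at the infrastructure level rather than by policy alone.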

GDPR × Art.57(10) Interaction

| GDPR Element | Art.57(10) Effect |
| --- | --- |
| Legal basis for processing | Original legal basis carries over to sandbox testing |
| Purpose limitation (Art.5(1)(b) GDPR) | Sandbox testing within original purpose = permitted |
| Data minimisation (Art.5(1)(c) GDPR) | Sandbox plan should specify minimum necessary data scope |
| Storage limitation (Art.5(1)(e) GDPR) | Sandbox data retention schedule required |
| Art.9 special categories | Require explicit analysis — sandbox does not override Art.9 |
| Cross-border transfers | Adequacy/SCC obligations still apply |

Practical Implication: No Universal Data Permission

Art.57(10) does not create a blanket data processing permission. It does not override:


Art.57(11): Continued Liability During Sandbox Testing

Art.57(9) preserves supervisory powers; Art.57(11) (by established interpretation) preserves civil liability. Prospective providers participating in the AI regulatory sandbox remain liable under applicable EU and Member State law on liability for any damage inflicted on third parties as a result of experimentation in the sandbox.

This means:

Developer Implication: Insurance and Indemnification

For sandbox participants conducting real-world testing under Art.57(4):


Art.57(12)–(14): Union-Level Coordination

Art.57(12): Learning Loop for AI Act Guidance

Art.57(12) creates a feedback mechanism: sandbox results inform the guidance and support ecosystem of the AI Act, in particular the SME support measures under Art.62 and the Commission's implementation guidelines under Art.96.

This means sandbox outcomes directly inform the guidance that the Commission and AI Office publish. Sandbox participants who identify novel compliance challenges contribute to the regulatory learning process.

Art.57(13)–(14): AI Office and Board Coordination

The AI Office (Art.64) and the Board (Art.65–66) must:

For developers, this coordination matters: a sandbox experience in Germany can inform how France or Spain approach equivalent testing. Joint sandboxes under Art.57(2) can directly leverage this coordination.


The Sandbox Plan: Architecture and Required Content

The sandbox plan is the central legal document of the Art.57 regime. It is agreed between the prospective provider and the competent authority before sandbox access is granted.

Sandbox Plan Required Elements

Based on Art.57(3)–(9) requirements, a compliant sandbox plan must address:

| Element | Description |
| --- | --- |
| AI system description | Technical description of the AI system under development |
| Development stage | Which phase of development/training/testing/validation |
| Testing scope | User population, data volumes, geographic scope |
| Duration | Start date, end date, review milestones |
| Risk identification protocol | How risks will be identified and reported to authority |
| Suspension criteria | Conditions under which testing stops (Art.57(9)) |
| Personal data processing | Data sources, legal basis, Art.57(10) constraints |
| Real-world testing conditions | If applicable under Art.57(4): supervised conditions |
| Good-faith obligations | Specific reporting schedules and authority interaction |
| Liability framework | Insurance, indemnification, third-party agreements |
| Post-sandbox compliance pathway | How the provider will achieve full compliance before market placement |

Post-Sandbox Compliance: What Sandbox Does Not Do

Sandbox participation does not create a presumption of compliance for the deployed system. When a sandbox participant exits the sandbox and places the AI system on the market, full compliance with all applicable obligations is required:

The sandbox de-risks development by providing guided compliance preparation — but exit from the sandbox means full compliance requirements activate.
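The compliance activation at sandbox exit can be modelled as a simple gate. The function below is an illustrative sketch (hypothetical name and structure, with an example checklist keyed by article): market placement is allowed only when every applicable obligation in the tracked checklist is marked complete.

```python
def market_placement_gate(compliance_status: dict[str, bool]) -> tuple[bool, list[str]]:
    """Return (may_place_on_market, outstanding_obligations) at sandbox exit.

    compliance_status maps each applicable obligation (e.g. an article
    reference) to whether it has been completed. Any incomplete obligation
    blocks market placement.
    """
    outstanding = [article for article, done in compliance_status.items() if not done]
    return (len(outstanding) == 0, outstanding)
```

For example, a status of `{"Art.9 (Risk Management)": True, "Art.43 (Conformity Assessment)": False}` blocks placement and reports the conformity assessment as outstanding.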


CLOUD Act Jurisdiction Risk in AI Regulatory Sandboxes

For AI developers using US-headquartered cloud infrastructure for sandbox testing, the CLOUD Act creates a significant risk that is not addressed by the sandbox framework itself.

The Risk Scenario

Under the US CLOUD Act (18 U.S.C. § 2713), US cloud providers are obligated to produce data stored anywhere in the world when served with a lawful US court order or warrant. A sandbox participant testing an AI system on AWS, Azure, or Google Cloud stores:

All of this is potentially subject to CLOUD Act compellability — even if the sandbox participant is a European SME, even if the sandbox is EU-regulatory-authority-supervised, and even if the data relates to a development-stage AI system that has not yet been placed on the market.

The Dual-Compellability Problem

| Document Type | EU/Member State Authority Access | US CLOUD Act Risk |
| --- | --- | --- |
| Sandbox plan | Art.57(6) supervisory access | Compellable if stored on US cloud |
| Training data | Art.57(10) controlled access | Compellable if stored on US cloud |
| Risk assessments | Art.57(6) supervision | Compellable if stored on US cloud |
| Test results and model outputs | Sandbox plan terms | Compellable if stored on US cloud |
| Authority correspondence | Sandbox relationship | Compellable if stored on US cloud |

Mitigation: EU-Sovereign Sandbox Infrastructure

For sandbox participants who need single-jurisdiction data governance:

Deploying sandbox infrastructure on sota.io — an EU-established platform with Germany datacenter and no US data transfer requirements — ensures that sandbox test data, risk assessments, and authority correspondence remain in a single EU legal order.


Python Implementation: Sandbox Eligibility and Plan Management

from dataclasses import dataclass, field
from datetime import date
from typing import Optional
from enum import Enum


class SandboxType(Enum):
    NATIONAL = "national"
    REGIONAL = "regional"
    JOINT_CROSS_BORDER = "joint_cross_border"
    SECTOR_SPECIFIC = "sector_specific"


class ParticipantCategory(Enum):
    STARTUP = "startup"               # < 3 years old, any size
    MICRO = "micro"                   # < 10 FTE, ≤ EUR 2M turnover
    SMALL = "small"                   # < 50 FTE, ≤ EUR 10M turnover
    MEDIUM = "medium"                 # < 250 FTE, ≤ EUR 50M turnover
    LARGE = "large"                   # Does not qualify as SME


class SandboxStage(Enum):
    DEVELOPMENT = "development"
    TRAINING = "training"
    TESTING = "testing"
    VALIDATION = "validation"


class RiskLevel(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"
    CRITICAL = "critical"             # Triggers Art.57(9) suspension review


@dataclass
class SandboxEligibilityAssessment:
    """
    Art.57(3) and (8): Assess eligibility for AI regulatory sandbox participation.
    SMEs and start-ups have priority access under Art.57(8).
    """
    company_name: str
    employees_fte: int
    annual_turnover_eur: float
    company_age_years: float
    ai_system_description: str
    target_sandbox_country: str  # ISO 3166-1 alpha-2

    @property
    def participant_category(self) -> ParticipantCategory:
        if self.company_age_years < 3:
            return ParticipantCategory.STARTUP
        if self.employees_fte < 10 and self.annual_turnover_eur <= 2_000_000:
            return ParticipantCategory.MICRO
        if self.employees_fte < 50 and self.annual_turnover_eur <= 10_000_000:
            return ParticipantCategory.SMALL
        if self.employees_fte < 250 and self.annual_turnover_eur <= 50_000_000:
            return ParticipantCategory.MEDIUM
        return ParticipantCategory.LARGE

    @property
    def has_priority_access(self) -> bool:
        """Art.57(8): SMEs including start-ups shall have priority access."""
        return self.participant_category in {
            ParticipantCategory.STARTUP,
            ParticipantCategory.MICRO,
            ParticipantCategory.SMALL,
            ParticipantCategory.MEDIUM,
        }

    @property
    def priority_tier(self) -> str:
        mapping = {
            ParticipantCategory.STARTUP: "highest",
            ParticipantCategory.MICRO: "highest",
            ParticipantCategory.SMALL: "high",
            ParticipantCategory.MEDIUM: "standard_priority",
            ParticipantCategory.LARGE: "no_priority",
        }
        return mapping[self.participant_category]

    def eligibility_report(self) -> dict:
        return {
            "company": self.company_name,
            "category": self.participant_category.value,
            "priority_access": self.has_priority_access,
            "priority_tier": self.priority_tier,
            "eligible": True,  # All innovators may apply; priority determines track
            "fast_track": self.participant_category in {
                ParticipantCategory.STARTUP,
                ParticipantCategory.MICRO,
            },
        }


@dataclass
class PersonalDataSandboxPlan:
    """
    Art.57(10): Personal data processing rules in the AI regulatory sandbox.
    Data collected for original purpose may be used; must not train other AI systems.
    """
    data_sources: list[str]
    original_legal_basis: str           # GDPR Art.6 or Art.9 legal basis
    original_collection_purpose: str
    sandbox_testing_purpose: str
    is_within_original_purpose: bool    # Must be True for Art.57(10) to apply
    special_categories_involved: bool   # Art.9 GDPR — requires additional analysis
    cross_border_transfer: bool

    def validate(self) -> list[str]:
        """Return list of compliance issues."""
        issues = []
        if not self.is_within_original_purpose:
            issues.append(
                "Art.57(10) permission requires sandbox testing to be within "
                "the original collection purpose. Establish new legal basis or "
                "adjust testing scope."
            )
        if self.special_categories_involved:
            issues.append(
                "Art.9 GDPR special category data requires explicit legal basis "
                "analysis beyond Art.57(10) permission. Art.57(10) does not "
                "override Art.9 restrictions."
            )
        if self.cross_border_transfer:
            issues.append(
                "Cross-border data transfer: GDPR Chapter V obligations still "
                "apply. Adequacy decision or SCC required even in sandbox."
            )
        return issues


@dataclass
class SandboxSuspensionCriteria:
    """
    Art.57(9): Conditions under which sandbox testing must be suspended.
    Significant risks to health, safety, or fundamental rights that cannot be mitigated.
    """
    fundamental_rights_risk_threshold: RiskLevel = RiskLevel.HIGH
    health_safety_risk_threshold: RiskLevel = RiskLevel.HIGH
    unmitigable_risk_suspension: bool = True    # Always true per Art.57(9)

    def requires_suspension(self, identified_risk: RiskLevel, mitigation_available: bool) -> bool:
        """
        Art.57(9): If significant risk identified and mitigation not achievable,
        development and testing process must be suspended.
        """
        is_significant = identified_risk in {RiskLevel.HIGH, RiskLevel.CRITICAL}
        if is_significant and not mitigation_available:
            return True
        return False


@dataclass
class SandboxPlan:
    """
    Art.57(3): Formal sandbox plan agreed between prospective provider and
    competent authority. Central legal document of the sandbox relationship.
    """
    # Parties
    provider_name: str
    competent_authority: str
    sandbox_type: SandboxType
    member_state: str

    # AI System
    ai_system_name: str
    ai_system_description: str
    development_stage: SandboxStage
    is_high_risk_candidate: bool        # Will be high-risk under Art.6/Annex III

    # Duration (Art.57(3): limited time)
    start_date: date
    end_date: date
    review_milestones: list[date] = field(default_factory=list)

    # Scope (Art.57(3))
    testing_scope_description: str = ""
    max_users_in_testing: Optional[int] = None
    real_world_testing: bool = False    # Art.57(4)

    # Participant
    eligibility: Optional[SandboxEligibilityAssessment] = None

    # Data (Art.57(10))
    personal_data_plan: Optional[PersonalDataSandboxPlan] = None

    # Safety (Art.57(9))
    suspension_criteria: SandboxSuspensionCriteria = field(
        default_factory=SandboxSuspensionCriteria
    )

    # Infrastructure
    infrastructure_jurisdiction: str = ""   # Should be "EU" for CLOUD Act mitigation
    infrastructure_provider: str = ""

    # Post-sandbox path
    post_sandbox_compliance_articles: list[str] = field(
        default_factory=lambda: [
            "Art.9 (Risk Management)",
            "Art.10 (Data Governance)",
            "Art.13 (Transparency)",
            "Art.14 (Human Oversight)",
            "Art.17 (QMS)",
            "Art.43 (Conformity Assessment)",
            "Art.47 (Declaration of Conformity)",
            "Art.48 (CE Marking)",
            "Art.49 (EU Database Registration)",
        ]
    )

    @property
    def duration_days(self) -> int:
        return (self.end_date - self.start_date).days

    @property
    def cloud_act_risk(self) -> str:
        if self.infrastructure_jurisdiction.upper() != "EU":
            return (
                f"HIGH: {self.infrastructure_provider} infrastructure outside EU. "
                "Sandbox test data may be subject to CLOUD Act compellability."
            )
        return "LOW: EU-sovereign infrastructure. Single EU legal order."

    def validate(self) -> list[str]:
        issues = []
        if self.duration_days <= 0:
            issues.append("Sandbox end date must be after start date.")
        if self.duration_days > 730:
            issues.append(
                "Sandbox duration exceeds 2 years. Competent authorities may "
                "require justification for extended sandbox access."
            )
        if not self.review_milestones:
            issues.append(
                "No review milestones defined. Art.57(6) supervision requires "
                "regular interaction — define milestone check-in dates."
            )
        if self.personal_data_plan:
            data_issues = self.personal_data_plan.validate()
            issues.extend(data_issues)
        if self.infrastructure_jurisdiction.upper() != "EU":
            issues.append(
                f"Cloud Act Risk: infrastructure in {self.infrastructure_jurisdiction}. "
                "Consider EU-sovereign infrastructure for single-regime data governance."
            )
        return issues

    def to_summary(self) -> dict:
        return {
            "provider": self.provider_name,
            "authority": self.competent_authority,
            "sandbox_type": self.sandbox_type.value,
            "ai_system": self.ai_system_name,
            "stage": self.development_stage.value,
            "duration_days": self.duration_days,
            "start": str(self.start_date),
            "end": str(self.end_date),
            "real_world_testing": self.real_world_testing,
            "sme_priority": self.eligibility.has_priority_access if self.eligibility else None,
            "cloud_act_risk": self.cloud_act_risk,
            "validation_issues": self.validate(),
        }


# --- Usage Example ---

def build_sandbox_plan_example() -> SandboxPlan:
    """
    Example: EU startup testing a high-risk AI recruitment system
    in a national sandbox before market placement.
    """
    eligibility = SandboxEligibilityAssessment(
        company_name="RecruiterAI GmbH",
        employees_fte=12,
        annual_turnover_eur=800_000,
        company_age_years=1.5,
        ai_system_description="CV screening AI with bias detection for financial sector recruitment",
        target_sandbox_country="DE",
    )

    data_plan = PersonalDataSandboxPlan(
        data_sources=["HR partner historical CV dataset (anonymised)", "Synthetic CV generator"],
        original_legal_basis="Art.6(1)(f) GDPR legitimate interests — HR analytics",
        original_collection_purpose="HR analytics and workforce planning",
        sandbox_testing_purpose="Training and validating AI recruitment screening model",
        is_within_original_purpose=True,
        special_categories_involved=False,
        cross_border_transfer=False,
    )

    plan = SandboxPlan(
        provider_name="RecruiterAI GmbH",
        competent_authority="Bundesnetzagentur AI Regulatory Sandbox (DE)",
        sandbox_type=SandboxType.NATIONAL,
        member_state="DE",
        ai_system_name="RecruiterAI v0.9",
        ai_system_description=(
            "AI system for CV screening in financial sector recruitment. "
            "Annex III 4(a) candidate — employment decision assistance system."
        ),
        development_stage=SandboxStage.TESTING,
        is_high_risk_candidate=True,  # Annex III 4(a): employment decisions
        start_date=date(2026, 9, 1),
        end_date=date(2027, 3, 1),
        review_milestones=[
            date(2026, 10, 1),
            date(2026, 12, 1),
            date(2027, 2, 1),
        ],
        testing_scope_description="Testing with 50 HR professionals at 3 financial sector partner firms",
        max_users_in_testing=50,
        real_world_testing=True,  # Art.57(4)
        eligibility=eligibility,
        personal_data_plan=data_plan,
        infrastructure_jurisdiction="EU",
        infrastructure_provider="sota.io (Frankfurt, Germany)",
    )
    return plan


if __name__ == "__main__":
    import json

    plan = build_sandbox_plan_example()
    print("Sandbox Plan Summary:")
    print(json.dumps(plan.to_summary(), indent=2, default=str))

    eligibility_report = plan.eligibility.eligibility_report()
    print("\nEligibility Assessment:")
    print(json.dumps(eligibility_report, indent=2))

40-Item Compliance Checklist: Art.57 AI Regulatory Sandbox

Sandbox Eligibility and Application

Sandbox Plan Development

Good-Faith Obligation (Art.57(5))

Real-World Testing (Art.57(4))

Personal Data Processing (Art.57(10))

Risk Management and Suspension (Art.57(9))

Liability and Insurance (Art.57 × EU AI Liability Framework)

Infrastructure and CLOUD Act

Post-Sandbox Compliance Pathway


Art.57 Cross-Article Matrix

| Article | Obligation | Art.57 Interaction |
| --- | --- | --- |
| Art.9 | Risk management system | Sandbox de-risks Art.9 compliance — risk identification under supervisory guidance |
| Art.10 | Data governance | Art.57(10) personal data permission limited to original purpose — feeds Art.10 data documentation |
| Art.13 | Transparency | Not suspended in sandbox — sandbox plan should address transparency obligations |
| Art.14 | Human oversight | Not suspended in sandbox — human oversight tested in sandbox environment |
| Art.17 | Quality management | Sandbox plan is a precursor to the QMS — structured development process |
| Art.43 | Conformity assessment | Required post-sandbox — sandbox prepares but does not substitute |
| Art.47 | Declaration of conformity | Required post-sandbox — sandbox findings inform DoC documentation basis |
| Art.49 | EU database registration | Required upon market placement — sandbox does not create registration obligation |
| Art.57(4) | Real-world testing | Distinct from Art.60 (testing outside sandboxes) — different conditions apply |
| Art.59 | Personal data for AI development | Art.57(10) addresses sandbox-specific data rules; Art.59 addresses broader public-interest processing |
| Art.60 | Testing outside sandboxes | Art.57 sandbox = supervised controlled environment; Art.60 = real-world conditions outside the sandbox with additional safeguards |
| Art.62 | SME support measures | Art.57(12): sandbox experience informs SME guidance and support |
| Art.71 | EU database | Post-sandbox obligation — sandbox participation does not require registration |
| Art.96 | Commission implementation guidelines | Art.57(12): sandbox results inform guidance development |

EU-Native Infrastructure and Art.57

For AI developers building in EU-sovereign infrastructure during sandbox participation, Art.57 creates a three-layer compliance architecture:

Layer 1: Single-Regime Data Governance

When sandbox testing infrastructure is EU-sovereign:

Layer 2: Clean Post-Sandbox Compliance Path

The Art.9 risk file developed during sandbox testing can state:

When the AI system exits the sandbox and enters market placement, this clean infrastructure record supports the Art.17 QMS documentation and the Art.48 Declaration of Conformity.

Layer 3: Authority Trust and Good-Faith Compliance

EU-sovereign infrastructure for sandbox testing demonstrates good-faith compliance (Art.57(5)) in a concrete way: the provider can show that all sandbox data remains within the supervisory jurisdiction of the competent authority without extraterritorial access risk.


See Also