2026-04-23 · 13 min read

EU AI Act Art.44: AI Regulatory Sandboxes — Testing High-Risk AI Systems in Controlled Environments (2026)

Article 44 of the EU AI Act carves out a deliberate exception to the regulation's otherwise strict compliance architecture: the AI regulatory sandbox. Sandboxes allow AI developers — particularly startups, SMEs, and research organisations — to develop, test, and validate AI systems, including high-risk AI systems, under direct regulatory supervision without being required to meet the full set of conformity obligations that would apply outside the sandbox. The mechanism is the EU AI Act's primary instrument for balancing innovation against risk: acknowledging that the compliance burden for high-risk AI could deter early-stage development, the regulation creates a space where that burden is temporarily reduced in exchange for regulatory oversight.

For developers building AI systems that would qualify as high-risk under Art.6 and Annex III, or for startups exploring foundation model architectures that could fall within GPAI obligations, the regulatory sandbox framework defines the only formal mechanism in the EU AI Act for developing and validating systems under reduced compliance obligations with legal certainty. Understanding how the sandbox is structured, what it suspends, what it maintains, and how to participate is increasingly relevant as national supervisory authorities begin establishing their sandbox programmes.

What the Sandbox Is and Is Not

The regulatory sandbox is a supervised testing environment, not a general compliance waiver. This distinction matters: sandbox participants are not exempt from the EU AI Act — they are operating under a different compliance track that substitutes regulatory supervision for market-facing conformity assessment.

The sandbox provides three concrete benefits. First, certain obligations that would apply to market-facing deployment — particularly the full conformity assessment procedure under Art.43, notified body review where it would otherwise be required, and the registration obligations under Art.49 — are suspended for AI systems developed and tested exclusively within the sandbox. Second, participants receive direct guidance from the national competent authority operating the sandbox, which creates a dialogue with the regulator before the system reaches the conformity assessment stage — meaning design choices can be validated against regulatory expectations before being locked in. Third, the AI Office and national authorities treat sandbox participation as evidence of good-faith compliance engagement, which can reduce friction in subsequent market authorisation processes.

The sandbox does not suspend obligations that exist independently of the EU AI Act. GDPR continues to apply to any personal data processed during sandbox testing, subject only to the specific accommodations in Art.59. Fundamental rights obligations remain in force. Criminal law and liability frameworks are unaffected. Sandbox participation does not grant immunity from civil or regulatory action for harms that occur during testing.

Member State Obligations to Establish Sandboxes

The EU AI Act places a legal obligation on Member States to establish at least one national AI regulatory sandbox, operational by 2 August 2026, either independently or jointly with other Member States. The requirement to establish a sandbox is not discretionary: national competent authorities must create the infrastructure for sandbox participation as part of implementing the regulation.

The practical consequence is that by the time the regulatory sandbox obligations become applicable, every EU Member State must have operational sandbox procedures through which AI developers can apply for supervised testing. The AI Office publishes information on national sandboxes and can facilitate cross-border access for developers whose business operations or testing activities span multiple jurisdictions.

Joint sandboxes — operated cooperatively by two or more Member States — are explicitly envisioned and encouraged. A joint sandbox is particularly relevant for AI systems designed for cross-border deployment, where single-jurisdiction supervision would create an incomplete picture of the regulatory context the system will face.

Access Criteria: Who Can Participate

Regulatory sandboxes are not universally available — they are targeted instruments and prioritise specific categories of participants. The EU AI Act establishes a priority framework that national competent authorities must apply when evaluating sandbox applications.

Priority participants: SMEs and startups

The regulation explicitly prioritises SMEs and startups in sandbox access. This is a deliberate policy choice: the compliance burden for high-risk AI is calibrated for the capabilities of large organisations with dedicated legal and compliance teams, and the regulation acknowledges that applying the same burden to early-stage companies could effectively prevent them from building in regulated AI categories. Sandbox access creates a path for SMEs to develop high-risk AI systems with regulatory guidance rather than being forced to choose between premature market exit and non-compliance.

A startup does not need to demonstrate imminent market readiness to apply for a sandbox. Early-stage development — including proof-of-concept, prototype validation, and capability testing — can be conducted within the sandbox. The key criterion is that the AI system under development is innovative and could plausibly qualify as high-risk or otherwise subject to substantive EU AI Act obligations.

Other eligible participants

Beyond the priority tier, research organisations, universities, and private companies developing genuinely innovative AI systems can access sandboxes where capacity allows. Public sector bodies developing AI for government use-cases may also participate where the applicable AI systems would fall within the regulation's scope.

The innovation criterion

All sandbox applications must demonstrate that the AI system being developed is innovative — meaning it is not simply an incremental variation of an existing category of AI system but involves novel architecture, novel use-cases, or novel combinations of capabilities that create genuine regulatory uncertainty. The innovation criterion prevents sandboxes from becoming a mechanism for established market participants to delay compliance obligations on systems they could qualify through standard conformity assessment channels.
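As a rough self-assessment before applying, the three gating questions above — priority status, innovation, and plausible high-risk classification — can be sketched as a simple pre-screening check. All names and tiers here are illustrative labels for the article's criteria, not terms defined in the Act:

```python
from dataclasses import dataclass

@dataclass
class EligibilityCheck:
    """Illustrative pre-application self-screen; not an official test."""
    is_sme_or_startup: bool    # priority tier in sandbox access
    is_innovative: bool        # novel architecture, use-case, or capability mix
    plausibly_high_risk: bool  # could fall within Art.6 / Annex III scope

    def priority_tier(self) -> str:
        # Innovation and plausible high-risk status are gating criteria;
        # SME/startup status determines priority within the eligible pool.
        if not (self.is_innovative and self.plausibly_high_risk):
            return "unlikely to qualify"
        return "priority (SME/startup)" if self.is_sme_or_startup else "standard"

check = EligibilityCheck(is_sme_or_startup=True, is_innovative=True,
                         plausibly_high_risk=True)
print(check.priority_tier())  # priority (SME/startup)
```

A negative result on either gating criterion suggests the standard conformity assessment route, not the sandbox, is the appropriate path.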

Obligations Suspended During Sandbox Participation

The core benefit of sandbox participation is the suspension of specific compliance obligations that would otherwise apply to high-risk AI systems. The suspension is not a blanket exemption but a targeted reduction of the procedural obligations associated with market-facing conformity assessment.

Conformity assessment procedure

The most significant suspension is the conformity assessment requirement under Art.43. High-risk AI systems listed in Annex III categories are required, outside the sandbox, to undergo either a self-assessment conformity procedure (most categories) or third-party conformity assessment by a notified body (certain higher-risk categories). Within the sandbox, this requirement is suspended: the developer can build, train, and test without completing the conformity assessment procedure.

This suspension has practical implications for AI system architecture: decisions that would otherwise need to be made with final conformity assessment criteria in mind can be made iteratively, with regulatory guidance incorporated progressively rather than locked in at design stage.

Technical documentation pre-filing

The pre-filing technical documentation requirements under Art.11 — which require developers to compile extensive technical documentation before placing a high-risk AI system on the market — are similarly relaxed within the sandbox. Documentation must still be maintained to support regulatory supervision, but it need not meet the full standard required for market authorisation. Sandbox participants build documentation iteratively, which allows the regulatory authority to provide feedback on documentation completeness and format before the system exits the sandbox.

Third-party audit requirements

For AI systems that would require notified body involvement outside the sandbox, the sandbox framework substitutes the competent authority's supervision for notified body auditing during the development and testing phase. The developer works directly with the regulatory authority rather than engaging a conformity assessment body.

Obligations That Remain Mandatory

Suspension is targeted: a defined set of obligations cannot be suspended regardless of sandbox status, and participants who treat sandbox participation as a compliance-free zone are misunderstanding the framework.

Safety requirements

All safety-relevant obligations remain in force. An AI system that causes harm during sandbox testing — whether to users, test subjects, operators, or third parties — is subject to the same liability and regulatory response as a harm caused outside the sandbox. The sandbox does not create a consequence-free testing environment: it creates a supervised testing environment where certain procedural compliance obligations are reduced.

Transparency to participants

Any human beings who interact with an AI system during sandbox testing — including test users, evaluation subjects, and operators — retain all rights they would have outside the sandbox. They must be informed that they are interacting with an AI system in a testing context. Consent requirements, information obligations, and human oversight requirements that protect individual rights remain applicable.

Prohibited practices

The prohibition provisions of Art.5 — which cover AI practices that are absolutely prohibited regardless of context, including subliminal manipulation, social scoring, and prohibited biometric surveillance uses — remain fully applicable inside the sandbox. No sandbox framework can authorise development or testing of systems that the EU AI Act prohibits categorically.

Fundamental rights protections

The GDPR, the Charter of Fundamental Rights, and other applicable EU law remain fully operative. The sandbox creates regulatory accommodation within the EU AI Act; it does not create a carve-out from the broader EU legal framework.

Personal Data Processing in Sandboxes

One of the most practically significant provisions in the regulatory sandbox framework concerns personal data. AI development and testing frequently requires access to real personal data — training datasets that include personal information, test datasets that represent realistic user behaviour, validation sets that contain sensitive categories of data. Outside the sandbox, processing this data requires a lawful basis under GDPR Art.6 and, where applicable, Art.9, alongside full compliance with data minimisation, purpose limitation, and data subject rights obligations.

Art.59 of the EU AI Act (which governs the personal data processing framework for sandboxes, linked to the Art.44 sandbox access framework) establishes specific accommodations for data processing within the sandbox context.

Processing for legitimate public interest

Where the AI system under development serves a genuine public interest purpose — and the national competent authority has determined that sandbox participation is warranted on this basis — the sandbox framework can facilitate data processing that would otherwise require more complex lawful basis analysis under GDPR. The competent authority's sandbox authorisation constitutes a recognised basis for processing in the defined testing context.

Existing datasets: research and development reuse

GDPR Art.5(1)(b) allows further processing of personal data for scientific research purposes to be compatible with the original processing purpose, subject to appropriate safeguards. The sandbox framework strengthens this accommodation: personal data that was collected for a purpose compatible with AI research and development can be used within the sandbox without requiring re-consent from data subjects, provided the processing is subject to the technical and organisational safeguards that the sandbox supervision framework requires.

This is particularly relevant for healthcare AI, financial services AI, and any AI system that requires training on domain-specific sensitive data: the sandbox creates a pathway for accessing datasets that would otherwise be practically unavailable due to consent and purpose limitation constraints.

Pseudonymisation requirements

Data processing within the sandbox is subject to enhanced pseudonymisation requirements. Before personal data is used for training or testing within the sandbox, it must be pseudonymised to the maximum extent technically feasible for the specific testing objective. Where testing genuinely requires access to identifiable data, the competent authority may require additional technical safeguards as a condition of sandbox access.
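One common pseudonymisation pattern is a keyed hash over direct identifiers: the mapping is reproducible within the sandbox (the same identifier always maps to the same pseudonym) but cannot be reversed without the key, which is held separately from the testing environment. A minimal sketch, with an illustrative 16-character pseudonym length:

```python
import hashlib
import hmac
import secrets

def pseudonymise(identifier: str, key: bytes) -> str:
    # HMAC-SHA256 keyed hash: deterministic under the same key,
    # not reversible without it. Truncation length is illustrative.
    return hmac.new(key, identifier.encode(), hashlib.sha256).hexdigest()[:16]

# The key is generated once and stored outside the testing environment.
key = secrets.token_bytes(32)

record = {"user_id": "alice@example.com", "age_band": "30-39"}
record["user_id"] = pseudonymise(record["user_id"], key)
```

Note that keyed hashing is pseudonymisation, not anonymisation: whoever holds the key can re-link the data, which is why the sandbox's organisational safeguards on key custody matter.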

Data retention limitations

Personal data processed within the sandbox may only be retained for the duration of the sandbox participation and must be deleted or anonymised upon exit. The sandbox does not create a permanent data retention entitlement: data collected for sandbox testing cannot be repurposed for production model training after the sandbox period concludes without establishing an independent lawful basis for that further processing.
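Operationally, the retention rule reduces to a hard deadline keyed to the exit date. A minimal sketch — the 30-day grace window for completing deletion or anonymisation is an assumption for illustration; the actual window is whatever the competent authority's sandbox plan specifies:

```python
from datetime import date, timedelta

def retention_deadline(exit_date: date, grace_days: int = 30) -> date:
    # grace_days is an assumed post-exit window, not a statutory figure.
    return exit_date + timedelta(days=grace_days)

def must_purge(today: date, exit_date: date, grace_days: int = 30) -> bool:
    # True once sandbox data must have been deleted or anonymised.
    return today >= retention_deadline(exit_date, grace_days)
```

Wiring a check like this into a scheduled job gives the deletion obligation an enforcement mechanism rather than leaving it as a manual exit-checklist item.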

Duration, Extension, and Exit

Standard duration: 12 months

Sandbox participation is time-limited. The standard duration is 12 months, reflecting the principle that the sandbox is a transitional space for development and validation, not a permanent alternative compliance track. The 12-month period begins from the date on which the competent authority confirms sandbox admission and is intended to provide sufficient time for a development cycle that culminates in a market-ready system capable of entering standard conformity assessment.

Extension conditions

A one-time extension of 12 months — bringing total sandbox duration to 24 months — is available where the developer can demonstrate that additional development time is necessary for reasons beyond the developer's control, or where the AI system under development is of sufficient complexity or novelty that the standard 12-month period is genuinely insufficient. The competent authority has discretion on extension approval and will typically require a progress report and development roadmap that justifies the extension request.
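The 12-month and 24-month periods are simple date arithmetic, but one pitfall is worth flagging: a 365-day `timedelta` (the approach also used in the code later in this article) is one day short of a calendar year whenever the period spans 29 February. The dates below are illustrative:

```python
from datetime import date, timedelta

admission = date(2026, 9, 1)
standard_end = admission + timedelta(days=365)   # no leap day in this span
extended_end = standard_end + timedelta(days=365)  # spans 29 Feb 2028

print(standard_end)  # 2027-09-01
print(extended_end)  # 2028-08-31, one day short of two calendar years
```

For legally defined periods, calendar-aware arithmetic (e.g. `dateutil.relativedelta(years=1)`) avoids this off-by-one; the 365-day version is kept here only to match the article's own implementation.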

Sandbox exit and transition to market

Exiting the sandbox does not automatically trigger full compliance obligations: it triggers the obligation to complete the compliance steps that sandbox participation suspended. A high-risk AI system that was developed within the sandbox must complete the conformity assessment procedure — using the documentation and findings generated during sandbox development — before it is placed on the market. The sandbox period should be used to build the documentation base and resolve the design questions that the conformity assessment will evaluate.

The competent authority that supervised the sandbox is available during the exit process to advise on the transition to standard conformity assessment, which in practice means that sandbox participants can enter the conformity assessment process with substantially higher confidence in their compliance posture than a developer who has not had regulatory supervision during development.

Cross-Border Sandbox Cooperation

AI systems designed for deployment across multiple EU Member States face the challenge that a single national sandbox may not provide complete regulatory coverage for the jurisdictions where the system will operate. The EU AI Act's sandbox framework addresses this through cross-border cooperation provisions.

The AI Office facilitates coordination between national competent authorities operating sandboxes, and joint sandbox arrangements — where two or more national authorities jointly supervise a development programme — are explicitly enabled. For AI systems with inherently cross-border deployment profiles (European-scale B2C platforms, AI systems integrated into pan-European critical infrastructure, GPAI models intended for distribution across the EU single market), a joint sandbox is often the more appropriate instrument than a single-Member-State sandbox.

Cross-border sandbox arrangements also resolve a practical ambiguity that arises when a developer is established in one Member State but targets users or systems primarily in another: which authority's sandbox should the developer apply to? The joint sandbox framework allows both or all relevant authorities to participate in supervision, avoiding the risk that national sandbox participation in the developer's home jurisdiction fails to provide useful regulatory feedback for the target jurisdictions.

How Art.44 Interacts with Other Provisions

Understanding Art.44's sandbox framework requires placing it in the architecture of the obligations it interacts with.

Art.44 × Art.6-7 (High-Risk Classification)

The sandbox framework is most significant for AI systems that would qualify as high-risk under Art.6 and Annex III. The classification criteria define the scope of systems for which sandbox participation provides the most meaningful compliance benefit: systems in healthcare, employment, education, critical infrastructure management, law enforcement, migration and border control, and similar categories face the full conformity assessment architecture that the sandbox suspends. Systems that do not qualify as high-risk under these criteria face fewer obligations and derive less benefit from sandbox participation.

Art.44 × Art.43 (Conformity Assessment)

Art.43 specifies the conformity assessment procedure that sandbox participation suspends. For self-assessment categories, the developer would ordinarily conduct the assessment against the requirements of Chapter 2 (Art.8-15), compile the technical documentation, and issue the EU declaration of conformity. Within the sandbox, this sequence is replaced by the supervised development process, but all of the Art.43 requirements must eventually be satisfied on exit. The sandbox is preparation for conformity assessment, not an alternative to it.

Art.44 × Art.49 (Registration)

High-risk AI systems must be registered in the EU AI Act database before being placed on the market or put into service. Sandbox participation suspends this requirement during development and testing: systems under development in the sandbox do not need to be registered until they exit the sandbox and enter the market. Registration at exit is a condition of market authorisation, not an optional step.

Art.44 × GDPR

The GDPR framework for AI development in sandboxes requires careful analysis of lawful bases for each data processing activity. The sandbox framework accommodates specific GDPR friction points — particularly the purpose limitation constraint on using existing datasets for AI training — but it does not provide a blanket GDPR exemption. Developers planning to use personal data in sandbox testing should conduct a specific GDPR assessment of their data processing activities before sandbox admission, using the sandbox personal data rules as an input to that analysis rather than assuming that sandbox admission resolves all GDPR questions.
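The per-activity lawful-basis assessment described above can be organised as a simple triage, mapping each processing scenario to the candidate basis the preceding sections discuss. This is a simplified sketch for structuring the analysis, not legal advice, and the dictionary keys are illustrative flags rather than statutory terms:

```python
def candidate_basis(activity: dict) -> str:
    """Illustrative triage of a sandbox data-processing activity."""
    # Authority-recognised public-interest processing in the sandbox.
    if activity.get("public_interest_authorised"):
        return "sandbox public-interest basis (Art.59 EU AI Act)"
    # Reuse of an existing dataset under the research-compatibility rule.
    if activity.get("reuses_existing_dataset") and activity.get("research_compatible"):
        return "research-compatible further processing (GDPR Art.5(1)(b))"
    # Everything else needs its own lawful basis outside the sandbox rules.
    return "independent GDPR Art.6 / Art.9 basis required"

print(candidate_basis({"reuses_existing_dataset": True, "research_compatible": True}))
```

Each activity in the `PersonalDataPlan` should pass through an analysis like this before admission, since sandbox authorisation does not retroactively cure a missing lawful basis.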

Python SandboxParticipationManager Implementation

The following implementation provides a structured model for tracking sandbox participation compliance obligations:

from dataclasses import dataclass, field
from datetime import date, timedelta
from enum import Enum
from typing import Optional

class SandboxStatus(Enum):
    APPLICATION_PENDING = "application_pending"
    ADMITTED = "admitted"
    ACTIVE = "active"
    EXTENSION_REQUESTED = "extension_requested"
    EXTENDED = "extended"
    EXITED = "exited"

class SandboxExitReason(Enum):
    MARKET_READY = "market_ready"
    DURATION_EXPIRED = "duration_expired"
    VOLUNTARY_EXIT = "voluntary_exit"
    REGULATORY_TERMINATION = "regulatory_termination"

class DataProcessingBasis(Enum):
    PUBLIC_INTEREST = "public_interest"
    RESEARCH_COMPATIBLE = "research_compatible"
    CONSENT = "consent"
    LEGITIMATE_INTEREST = "legitimate_interest"

@dataclass
class SandboxApplication:
    developer_name: str
    ai_system_description: str
    high_risk_category: str
    innovation_justification: str
    is_sme: bool
    is_startup: bool
    target_jurisdictions: list[str]
    personal_data_processing: bool
    data_categories: list[str] = field(default_factory=list)
    cross_border_required: bool = False

@dataclass
class PersonalDataPlan:
    processing_activities: list[str]
    lawful_basis: DataProcessingBasis
    pseudonymisation_measures: list[str]
    retention_period_days: int
    deletion_mechanism: str
    data_subject_notifications: list[str]

@dataclass
class SandboxParticipation:
    application: SandboxApplication
    competent_authority: str
    admission_date: Optional[date]
    standard_end_date: Optional[date]
    extended_end_date: Optional[date]
    status: SandboxStatus
    personal_data_plan: Optional[PersonalDataPlan]
    progress_reports: list[dict] = field(default_factory=list)
    suspended_obligations: list[str] = field(default_factory=list)
    maintained_obligations: list[str] = field(default_factory=list)

    def days_remaining(self) -> Optional[int]:
        if self.status not in (SandboxStatus.ACTIVE, SandboxStatus.EXTENDED):
            return None
        end = self.extended_end_date or self.standard_end_date
        if not end:
            return None
        return (end - date.today()).days

    def extension_eligible(self) -> bool:
        # Illustrative rule: treat an extension request as timely only
        # within 60 days of the standard end date.
        if self.status != SandboxStatus.ACTIVE:
            return False
        if not self.standard_end_date:
            return False
        days_left = (self.standard_end_date - date.today()).days
        return days_left <= 60

    def compliance_posture(self) -> dict:
        return {
            "status": self.status.value,
            "days_remaining": self.days_remaining(),
            "extension_eligible": self.extension_eligible(),
            "suspended_count": len(self.suspended_obligations),
            "maintained_count": len(self.maintained_obligations),
            "progress_reports_submitted": len(self.progress_reports),
        }

class SandboxParticipationManager:
    # Procedural, market-facing obligations suspended while the system is
    # developed and tested exclusively within the sandbox.
    SUSPENDED_IN_SANDBOX = [
        "conformity_assessment_art43",
        "notified_body_audit_art43",
        "technical_documentation_full_art11",
        "eu_declaration_of_conformity_art47",
        "ce_marking_art48",
        "registration_eu_database_art49",
    ]

    # Obligations that remain fully applicable regardless of sandbox status.
    ALWAYS_MANDATORY = [
        "prohibited_practices_art5_absolute",
        "fundamental_rights_protection",
        "gdpr_compliance_baseline",
        "safety_obligations_art9",
        "transparency_to_test_participants",
        "human_oversight_during_testing",
        "incident_reporting_if_harm_occurs",
        "data_subject_rights_during_testing",
    ]

    def __init__(self):
        self.participations: list[SandboxParticipation] = []

    def create_application(
        self,
        developer_name: str,
        ai_system_description: str,
        high_risk_category: str,
        innovation_justification: str,
        is_sme: bool = False,
        is_startup: bool = False,
        target_jurisdictions: Optional[list[str]] = None,
        personal_data_processing: bool = False,
        data_categories: Optional[list[str]] = None,
    ) -> SandboxApplication:
        return SandboxApplication(
            developer_name=developer_name,
            ai_system_description=ai_system_description,
            high_risk_category=high_risk_category,
            innovation_justification=innovation_justification,
            is_sme=is_sme,
            is_startup=is_startup,
            target_jurisdictions=target_jurisdictions or [],
            personal_data_processing=personal_data_processing,
            data_categories=data_categories or [],
            cross_border_required=len(target_jurisdictions or []) > 1,
        )

    def admit_to_sandbox(
        self,
        application: SandboxApplication,
        competent_authority: str,
        admission_date: date,
        personal_data_plan: Optional[PersonalDataPlan] = None,
    ) -> SandboxParticipation:
        standard_end = admission_date + timedelta(days=365)
        participation = SandboxParticipation(
            application=application,
            competent_authority=competent_authority,
            admission_date=admission_date,
            standard_end_date=standard_end,
            extended_end_date=None,
            status=SandboxStatus.ACTIVE,
            personal_data_plan=personal_data_plan,
            suspended_obligations=self.SUSPENDED_IN_SANDBOX.copy(),
            maintained_obligations=self.ALWAYS_MANDATORY.copy(),
        )
        self.participations.append(participation)
        return participation

    def submit_progress_report(
        self,
        participation: SandboxParticipation,
        report_date: date,
        development_stage: str,
        regulatory_findings: list[str],
        open_issues: list[str],
    ) -> dict:
        report = {
            "date": report_date.isoformat(),
            "development_stage": development_stage,
            "regulatory_findings": regulatory_findings,
            "open_issues": open_issues,
        }
        participation.progress_reports.append(report)
        return report

    def request_extension(
        self,
        participation: SandboxParticipation,
        justification: str,
    ) -> dict:
        if not participation.extension_eligible():
            return {
                "eligible": False,
                "reason": "Extension can only be requested within 60 days of standard end date",
            }
        # Mark the request; the authority's approval would move the status
        # to EXTENDED. The proposed end date is provisional until then.
        participation.status = SandboxStatus.EXTENSION_REQUESTED
        if participation.standard_end_date:
            participation.extended_end_date = (
                participation.standard_end_date + timedelta(days=365)
            )
        return {
            "eligible": True,
            "justification": justification,
            "proposed_end_date": participation.extended_end_date.isoformat()
            if participation.extended_end_date
            else None,
        }

    def exit_sandbox(
        self,
        participation: SandboxParticipation,
        exit_reason: SandboxExitReason,
        next_steps: list[str],
    ) -> dict:
        participation.status = SandboxStatus.EXITED
        post_exit_obligations = [
            "complete_conformity_assessment_art43",
            "compile_full_technical_documentation_art11",
            "register_in_eu_database_art49",
            "delete_or_anonymise_sandbox_personal_data",
            "issue_eu_declaration_of_conformity_art47",
        ]
        return {
            "exit_date": date.today().isoformat(),
            "exit_reason": exit_reason.value,
            "next_steps": next_steps,
            "post_exit_obligations": post_exit_obligations,
            "competent_authority_contact": participation.competent_authority,
        }

    def generate_compliance_summary(self, participation: SandboxParticipation) -> str:
        posture = participation.compliance_posture()
        summary_lines = [
            f"AI Regulatory Sandbox Compliance Summary",
            f"Developer: {participation.application.developer_name}",
            f"System: {participation.application.ai_system_description}",
            f"Authority: {participation.competent_authority}",
            f"Status: {posture['status']}",
            f"Days Remaining: {posture['days_remaining']}",
            f"Extension Eligible: {posture['extension_eligible']}",
            f"Progress Reports Submitted: {posture['progress_reports_submitted']}",
            f"Suspended Obligations: {posture['suspended_count']}",
            f"Maintained Obligations: {posture['maintained_count']}",
        ]
        return "\n".join(summary_lines)

Art.44 Sandbox Application Checklist

Eligibility Assessment

- Confirm the AI system could qualify as high-risk under Art.6 / Annex III or otherwise falls within substantive EU AI Act obligations
- Document why the system is genuinely innovative rather than an incremental variant of an existing category
- Establish SME or startup status where applicable to claim priority access

Application Package

- System description, intended purpose, and target high-risk category
- Innovation justification addressing the regulatory uncertainty the system creates
- Target jurisdictions, and whether a joint (cross-border) sandbox is needed

Personal Data Planning (if applicable)

- Identify each processing activity and its candidate lawful basis
- Specify pseudonymisation measures and any additional safeguards for identifiable data
- Define retention periods and the deletion or anonymisation mechanism applied at sandbox exit

Sandbox Operation

- Maintain technical documentation iteratively and submit progress reports to the competent authority
- Keep all non-suspended obligations in force: Art.5 prohibitions, safety obligations, transparency to test participants, human oversight
- Track remaining duration and prepare any extension request well before the standard end date

Sandbox Exit

- Complete the Art.43 conformity assessment using the documentation built during the sandbox
- Compile full Art.11 technical documentation and register the system in the EU database under Art.49
- Delete or anonymise sandbox personal data, or establish an independent lawful basis for any further processing

The AI regulatory sandbox framework is one of the EU AI Act's most practical tools for startups and SMEs developing systems in regulated AI categories. Engaging with a sandbox early — when design decisions are still flexible — allows developers to incorporate regulatory feedback before those decisions become architecturally locked. The 12-month window is tight for complex system development, which makes starting the application process early, with a clear sandbox roadmap, a material advantage over waiting until near-market status to engage the regulatory authority.