EU AI Act Art.58 Real-World Testing Outside AI Regulatory Sandboxes — Developer Guide (2026)
EU AI Act Article 58 creates a second pathway for real-world AI testing in the EU — one that operates outside the formal sandbox regime of Art.57. While Art.57 requires joining a national AI regulatory sandbox under supervisory oversight, Art.58 allows providers to test high-risk AI systems in real-world conditions using a notification-based model: submit a Real-World Testing Plan to the relevant market surveillance authority, wait 30 days, and proceed if no objection is raised.
Art.58 answers a critical practical question: what happens if the national AI regulatory sandbox is not yet operational (Member States have until 2 August 2026), or the sandbox timeline does not fit the development cycle? In that case, Art.58 offers a direct testing pathway that requires regulatory notification and strict subject safeguards, but does not depend on sandbox availability or capacity.
Art.58 became applicable on 2 August 2025 as part of Chapter VI (Measures in Support of Innovation) of the EU AI Act (Regulation (EU) 2024/1689). Understanding Art.58 is essential for:
- Providers of high-risk AI systems who want to test in real-world conditions before market placement without joining a formal sandbox
- Clinical and regulated sector AI developers (healthcare, finance, HR, law enforcement) where sandbox availability may be limited but real-world testing is essential
- Multi-jurisdiction product teams deploying across multiple Member States who need a coordinated testing framework
- Platform providers whose customers may be conducting Art.58 testing and need to understand the safeguard obligations
This guide covers Art.58(1)–(10) in full, the Real-World Testing Plan architecture, informed consent requirements, vulnerable group protections, multi-jurisdiction coordination, CLOUD Act jurisdiction risk for testing data, and Python implementation for plan management and consent tracking.
Art.58 in the Chapter VI Innovation Framework
| Article | Mechanism | Approach |
|---|---|---|
| Art.57 | AI Regulatory Sandbox | Formal supervised environment, sandbox plan, competent authority partnership |
| Art.58 | Real-World Testing Outside Sandbox | Notification-based, 30-day implicit consent, independent testing with safeguards |
| Art.59 | Personal Data for AI Development | Further innovation measures for data access |
| Art.60–63 | Further Innovation Support | Access to pre-trained models, regulatory guidance, SME support |
Art.58 is a complement to Art.57, not a substitute for it. Providers may use Art.57 sandboxes for supervised development and Art.58 for more targeted real-world validation of systems approaching market readiness. The two regimes can also be used sequentially: sandbox first, then Art.58 real-world testing before final market placement.
Art.58(1): Who Can Test and What Can Be Tested
The Eligible Provider
Art.58(1) grants the real-world testing right to providers and prospective providers of high-risk AI systems listed in Annex I and Annex III of the EU AI Act. The reference to "prospective providers" is significant: it covers companies that intend to become providers at market placement, even though the system has not yet been placed on the market.
Key eligibility points:
- Providers (Art.3(3)): entities that develop AI systems and place them on the market or put them into service under their own name or trademark
- Prospective providers: entities developing high-risk AI systems before the provider role is formalised at market placement
- Deployers acting as providers: where a deployer makes substantial modifications (Art.25(1)), they assume provider obligations and gain Art.58 access
- Importers/distributors: no direct Art.58 access unless they become providers under Art.25
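These eligibility rules reduce to a small check. A minimal sketch (the `ActorRole` names and the `art58_eligible` helper are illustrative labels, not terms from the Act):

```python
from enum import Enum

class ActorRole(Enum):
    PROVIDER = "provider"                                        # Art.3(3)
    PROSPECTIVE_PROVIDER = "prospective_provider"
    DEPLOYER_AS_PROVIDER = "deployer_substantial_modification"   # Art.25(1)
    DEPLOYER = "deployer"
    IMPORTER = "importer"
    DISTRIBUTOR = "distributor"

# Roles with direct Art.58 access; importers/distributors only qualify
# after assuming the provider role under Art.25
ART58_ELIGIBLE_ROLES = {
    ActorRole.PROVIDER,
    ActorRole.PROSPECTIVE_PROVIDER,
    ActorRole.DEPLOYER_AS_PROVIDER,
}

def art58_eligible(role: ActorRole, is_high_risk: bool) -> bool:
    """Art.58(1): eligible actors testing Annex I / Annex III high-risk systems."""
    return role in ART58_ELIGIBLE_ROLES and is_high_risk
```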
What "High-Risk" Means for Art.58
Art.58 testing access is limited to high-risk AI systems under:
- Annex I: AI systems that are safety components of products covered by Union harmonisation legislation (medical devices, machinery, vehicles, etc.)
- Annex III: standalone high-risk AI systems across eight categories (biometric, critical infrastructure, education, employment, essential services, law enforcement, migration/asylum, justice)
Non-high-risk AI systems and GPAI models do not have Art.58 testing access (they may use Art.57 sandboxes where eligible).
The Testing Condition
Art.58(1) establishes that real-world testing can occur in conditions other than the AI regulatory sandbox referred to in Art.57, meaning in actual operational environments — with real users, real data, real decisions — but with the safeguard framework of Art.58(3)–(5) applied.
Art.58(2): The Real-World Testing Plan — Mandatory Content
Why the Plan Is the Legal Instrument
Art.58(2) makes the Real-World Testing Plan the central legal document for Art.58 testing. Unlike Art.57 sandbox participation (which operates under an authority-agreed sandbox plan), the Art.58 plan is unilaterally prepared by the provider and submitted to the authority. The authority reviews it and may object; if no objection within 30 days, testing may proceed.
Mandatory Content Requirements
Art.58(2) specifies the minimum content of the Real-World Testing Plan:
| Element | Required Content | Legal Significance |
|---|---|---|
| AI system description | Technical description sufficient to assess risk | Enables authority review |
| Testing objective | What the testing is designed to validate | Defines scope for 30-day review |
| Testing conditions | Environments, duration, geographic scope | Defines where/when testing occurs |
| Subject group specification | Who will be subject to or affected by testing | Triggers safeguard obligations |
| Safeguard description | How Art.58(5) obligations will be met | Authority cannot waive these |
| Risk management plan | Art.9-aligned risk mitigation for testing period | Proportionate to testing risk |
| Data protection plan | GDPR compliance for testing data | GDPR applies in full |
| Termination conditions | What circumstances trigger testing halt | Internal suspension standards |
Plan vs. Full Technical Documentation
The Real-World Testing Plan is not the full Annex IV technical documentation required before market placement. It is a targeted document covering the testing period only. The full compliance documentation — technical file, conformity assessment, declaration of conformity — must be completed before the AI system is placed on the market, even after successful Art.58 testing.
from dataclasses import dataclass, field
from datetime import date, timedelta
from typing import Optional
from enum import Enum

class AnnexIIICategory(Enum):
    BIOMETRIC = "biometric_identification"
    CRITICAL_INFRA = "critical_infrastructure"
    EDUCATION = "education_vocational"
    EMPLOYMENT = "employment_hr"
    ESSENTIAL_SERVICES = "essential_services"
    LAW_ENFORCEMENT = "law_enforcement"
    MIGRATION_ASYLUM = "migration_asylum"
    JUSTICE = "justice_democratic"

@dataclass
class RealWorldTestingPlan:
    """Art.58(2) Real-World Testing Plan implementation."""
    # Identification
    provider_name: str
    ai_system_name: str
    ai_system_version: str
    annex_category: AnnexIIICategory
    # Testing parameters
    testing_objective: str
    testing_conditions: str
    geographic_scope: list[str]  # Member State codes, e.g. ["DE", "FR"]
    start_date: date
    duration_days: int  # Max 180 days per Art.58(6)
    # Subject group
    subject_group_description: str
    estimated_subject_count: int
    includes_vulnerable_groups: bool
    # Safeguards
    consent_mechanism: str       # How informed consent will be obtained
    withdrawal_mechanism: str    # How subjects can opt out
    transparency_measures: str   # What subjects are told about testing
    # Risk management
    risk_management_summary: str
    # Data protection
    data_protection_measures: str
    # Defaulted fields last (dataclasses forbid non-default fields after defaults)
    vulnerable_group_protections: list[str] = field(default_factory=list)
    termination_conditions: list[str] = field(default_factory=list)
    personal_data_categories: list[str] = field(default_factory=list)
    # Multi-jurisdiction (Art.58(7))
    multi_jurisdiction: bool = False
    lead_authority_member_state: Optional[str] = None

    def end_date(self) -> date:
        return self.start_date + timedelta(days=self.duration_days)

    def validate_duration(self) -> bool:
        """Art.58(6): testing cannot exceed 6 months (180 days) per phase."""
        return self.duration_days <= 180

    def requires_lead_authority(self) -> bool:
        """Art.58(7): multi-jurisdiction testing requires a lead authority."""
        return self.multi_jurisdiction and len(self.geographic_scope) > 1

    def authority_objection_deadline(self, submission_date: date) -> date:
        """Art.58(3): 30-day implicit consent window from submission."""
        return submission_date + timedelta(days=30)

    def can_commence_testing(self, submission_date: date, today: date,
                             authority_approved: bool = False,
                             authority_objected: bool = False) -> tuple[bool, str]:
        """Check if testing can begin under Art.58(3)-(4)."""
        if authority_approved:
            return True, "Authority explicitly approved testing plan."
        if authority_objected:
            return False, "Authority objected. Testing blocked pending plan revision."
        deadline = self.authority_objection_deadline(submission_date)
        if today >= deadline:
            return True, f"30-day objection window passed {deadline.isoformat()}. Implicit consent active."
        days_remaining = (deadline - today).days
        return False, f"Objection window not yet elapsed. {days_remaining} days remaining until {deadline.isoformat()}."

    def to_submission_summary(self) -> str:
        return (
            f"Real-World Testing Plan — {self.ai_system_name} v{self.ai_system_version}\n"
            f"Provider: {self.provider_name}\n"
            f"Category: {self.annex_category.value}\n"
            f"Objective: {self.testing_objective}\n"
            f"Duration: {self.duration_days} days ({self.start_date} → {self.end_date()})\n"
            f"Geographic scope: {', '.join(self.geographic_scope)}\n"
            f"Subject group: {self.subject_group_description} (~{self.estimated_subject_count})\n"
            f"Vulnerable groups: {'Yes — ' + '; '.join(self.vulnerable_group_protections) if self.includes_vulnerable_groups else 'No'}\n"
            f"Multi-jurisdiction: {'Yes — Lead: ' + str(self.lead_authority_member_state) if self.multi_jurisdiction else 'No'}\n"
            f"Termination conditions: {len(self.termination_conditions)} defined"
        )
Art.58(3): The 30-Day Implicit Consent Mechanism
How Implicit Consent Works
Art.58(3) establishes the notification-first, implicit consent model. The provider submits the Real-World Testing Plan to the relevant market surveillance authority. The authority has 30 days to:
- Raise an objection to the testing plan
- Request modifications or additional safeguards
- Approve the plan explicitly (earlier than 30 days)
If the authority neither objects nor requests changes within 30 days, testing may commence without explicit authority approval.
Why This Model Matters for Development Velocity
The 30-day implicit consent mechanism is a deliberate policy choice to avoid blocking AI innovation with slow regulatory processes. Compared to sandbox applications (which may take months to process), Art.58 allows:
| Metric | Art.57 Sandbox | Art.58 Real-World Testing |
|---|---|---|
| Application processing time | Months (authority-determined) | 30 days maximum wait |
| Approval requirement | Explicit authority agreement | Implicit after 30 days |
| Ongoing supervision | Authority-supervised | Provider-managed with notification |
| Extension process | New sandbox plan or renewal | 6-month extension notification |
| Eligibility | Subject to authority capacity | Available to all eligible providers |
What Happens If the Authority Objects
If the authority objects within 30 days:
- Testing must not commence until the objection is resolved
- The authority must specify what is insufficient in the plan
- The provider can submit a revised plan — which triggers a new 30-day window
- Authorities retain discretion to object to any revised plan that does not adequately address identified concerns
Art.58(4): Geographic Submission Requirements
Art.58(4) specifies where to submit the Real-World Testing Plan:
- Single Member State testing: submit to the market surveillance authority of that Member State
- Multi-Member State testing: submit using the single submission procedure under Art.58(7)
The market surveillance authority for Art.58 purposes is the authority designated under Art.70(1) for the relevant product category or sector. For Annex I AI systems embedded in regulated products, this may be a product safety authority; for Annex III systems, it is typically the nationally designated AI supervisory authority.
@dataclass
class AuthoritySubmission:
    """Track Art.58(3)-(4) authority submission and response."""
    plan: RealWorldTestingPlan
    submission_date: date
    authority_name: str
    authority_member_state: str
    reference_number: Optional[str] = None
    # Authority response tracking
    authority_response_received: bool = False
    authority_approved: bool = False
    authority_objected: bool = False
    objection_details: Optional[str] = None
    revision_required: bool = False

    def status(self, today: date) -> str:
        if self.authority_objected:
            return f"OBJECTED: {self.objection_details or 'No details provided'}"
        if self.authority_approved:
            return "APPROVED: Explicit authority approval received"
        deadline = self.plan.authority_objection_deadline(self.submission_date)
        if today >= deadline:
            return f"IMPLICIT CONSENT: 30-day window expired {deadline.isoformat()}"
        return f"PENDING: {(deadline - today).days} days until implicit consent ({deadline.isoformat()})"

    def testing_authorised(self, today: date) -> bool:
        can_commence, _ = self.plan.can_commence_testing(
            self.submission_date, today, self.authority_approved, self.authority_objected
        )
        return can_commence
Art.58(5): Safeguards for Testing Subjects
Art.58(5) is the subject protection core of the real-world testing regime. These safeguards are non-negotiable — the provider cannot waive them and the authority cannot grant exceptions.
Art.58(5)(a): Informed Consent
Subjects of real-world testing must provide prior informed consent before being included in testing. The consent must be:
| Consent Requirement | Specification |
|---|---|
| Prior | Obtained before testing begins, not retroactively |
| Informed | Subject understands they are participating in AI system testing |
| Purpose-specific | Subject is told what the testing is designed to validate and what data will be used |
| Voluntary | No coercion, incentivisation that undermines voluntariness, or pressure |
| Documented | Written record of consent must be maintained |
| Age-appropriate | Parental/guardian consent required for minors |
Exception for anonymised or aggregated testing: where the AI system is tested in a way that does not affect identifiable individuals (e.g., testing on anonymised datasets in a real operational environment), consent requirements may not apply. This is the provider's legal assessment to make, documented in the testing plan.
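That assessment can be captured as a conservative screen that defaults to "consent required" whenever identifiable individuals could be touched. A hedged sketch (the helper name and its inputs are assumptions, not the Act's wording):

```python
def consent_required(affects_identifiable_individuals: bool,
                     data_fully_anonymised: bool) -> tuple[bool, str]:
    """Sketch of the provider's Art.58(5)(a) applicability assessment.
    Conservative default: consent is required unless testing demonstrably
    does not touch identifiable individuals."""
    if affects_identifiable_individuals:
        return True, "Identifiable individuals affected: prior informed consent required"
    if not data_fully_anonymised:
        return True, "Data not fully anonymised: re-identification risk, treat consent as required"
    return False, "Anonymised/aggregated testing: document the assessment in the testing plan"
```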
Art.58(5)(b): Right to Withdraw Without Adverse Consequences
Testing subjects have an absolute right to withdraw from testing at any time without:
- Material disadvantage
- Loss of services they are entitled to
- Employment consequences
- Any other adverse treatment
This obligation is closely aligned with GDPR Art.7(3) (right to withdraw consent) but extends beyond data processing to participation in the AI system testing itself.
Operational requirement: the withdrawal mechanism must be as accessible as the consent mechanism. If consent was given digitally, withdrawal must be possible digitally without barriers.
Art.58(5)(c): Vulnerable Group Protections
Art.58(5)(c) imposes enhanced safeguards for vulnerable groups including:
- Children and minors
- Persons with disabilities
- Elderly persons
- Other groups that may be disproportionately affected by the tested AI system
Enhanced safeguards include:
- Appropriate representation in consent processes (guardians, advocates)
- Enhanced monitoring of testing effects on vulnerable subjects
- Priority intervention if testing causes or threatens harm to vulnerable subjects
- Separate vulnerability impact assessment in the testing plan
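One way to keep these safeguards auditable is a simple checklist diff against the plan. A minimal sketch (the safeguard identifiers are illustrative labels, not statutory terms):

```python
# Illustrative identifiers for the four enhanced safeguards listed above
ENHANCED_SAFEGUARDS = {
    "guardian_or_advocate_in_consent",
    "enhanced_effect_monitoring",
    "priority_intervention_path",
    "vulnerability_impact_assessment",
}

def missing_vulnerable_safeguards(applied: set[str]) -> set[str]:
    """Return Art.58(5)(c)-style safeguards not yet in place for this plan."""
    return ENHANCED_SAFEGUARDS - applied
```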
Art.58(5)(d): Transparency to Test Subjects
Subjects must be informed in a clear, plain-language manner that they are participating in testing of an AI system. The transparency obligation requires disclosure of:
- The identity of the provider conducting testing
- The nature of the AI system being tested
- The purpose of the testing
- What data is collected about the subject
- How the data will be used and protected
- The subject's rights (including withdrawal under Art.58(5)(b))
Timing: transparency must be provided before testing begins. Post-hoc disclosure (deceptive testing) is not permitted under Art.58.
@dataclass
class TestingSubjectConsent:
    """Art.58(5)(a)-(d) consent and safeguard tracking per subject."""
    subject_id: str  # Pseudonymised identifier
    consent_date: date
    consent_method: str  # "written_form", "digital_portal", "witnessed_verbal"
    # Informed consent elements documented
    informed_of_testing_purpose: bool
    informed_of_data_use: bool
    informed_of_provider_identity: bool
    informed_of_withdrawal_rights: bool
    # Vulnerable group status
    is_minor: bool = False
    guardian_consent_obtained: bool = False
    has_disability: bool = False
    is_elderly: bool = False
    enhanced_safeguards_applied: bool = False
    # Withdrawal tracking
    withdrawal_date: Optional[date] = None
    withdrawal_processed: bool = False
    adverse_consequences_check: bool = False  # Verified no adverse consequences

    def is_valid_consent(self) -> tuple[bool, list[str]]:
        issues = []
        if not self.informed_of_testing_purpose:
            issues.append("Subject not informed of testing purpose")
        if not self.informed_of_data_use:
            issues.append("Subject not informed of data use")
        if not self.informed_of_provider_identity:
            issues.append("Subject not informed of provider identity")
        if not self.informed_of_withdrawal_rights:
            issues.append("Subject not informed of withdrawal rights")
        if self.is_minor and not self.guardian_consent_obtained:
            issues.append("Minor: guardian consent not obtained")
        if (self.has_disability or self.is_elderly) and not self.enhanced_safeguards_applied:
            issues.append("Vulnerable group: enhanced safeguards not applied")
        return len(issues) == 0, issues

    def has_withdrawn(self) -> bool:
        return self.withdrawal_date is not None

    def withdrawal_processed_properly(self) -> bool:
        if not self.has_withdrawn():
            return True  # N/A
        return self.withdrawal_processed and self.adverse_consequences_check

class TestingSubjectConsentManager:
    """Manage consent records for all Art.58 testing subjects."""

    def __init__(self, plan: RealWorldTestingPlan):
        self.plan = plan
        self.subjects: dict[str, TestingSubjectConsent] = {}

    def add_subject(self, consent: TestingSubjectConsent) -> None:
        valid, issues = consent.is_valid_consent()
        if not valid:
            raise ValueError(f"Invalid consent for subject {consent.subject_id}: {issues}")
        self.subjects[consent.subject_id] = consent

    def process_withdrawal(self, subject_id: str, withdrawal_date: date) -> None:
        if subject_id not in self.subjects:
            raise KeyError(f"Subject {subject_id} not found in consent records")
        subject = self.subjects[subject_id]
        subject.withdrawal_date = withdrawal_date
        subject.withdrawal_processed = True
        subject.adverse_consequences_check = True

    def get_active_subjects(self) -> list[TestingSubjectConsent]:
        return [s for s in self.subjects.values() if not s.has_withdrawn()]

    def get_consent_audit_summary(self) -> dict:
        total = len(self.subjects)
        withdrawn = sum(1 for s in self.subjects.values() if s.has_withdrawn())
        vulnerable = sum(1 for s in self.subjects.values()
                         if s.is_minor or s.has_disability or s.is_elderly)
        invalid_consents = sum(1 for s in self.subjects.values()
                               if not s.is_valid_consent()[0])
        return {
            "total_subjects": total,
            "active_subjects": total - withdrawn,
            "withdrawn_subjects": withdrawn,
            "vulnerable_group_subjects": vulnerable,
            "invalid_consent_records": invalid_consents,
            "consent_compliance": invalid_consents == 0
        }
Art.58(6): Duration Limits and Extension
Maximum Testing Period
Art.58(6) limits real-world testing to a maximum of 6 months (180 days) per testing phase. Testing may be extended once for an additional 6 months, giving a maximum total testing period of 12 months for a single Real-World Testing Plan.
| Phase | Duration | Process |
|---|---|---|
| Initial testing | Up to 6 months | Submit plan → 30-day window → test |
| Extension | Additional 6 months | Submit extension plan → new 30-day window |
| Total maximum | 12 months | No further extension possible |
Extension Requirements
To extend testing beyond the initial 6-month period:
- Submit a revised Real-World Testing Plan to the relevant authority
- The revised plan must justify the extension and specify any changes to testing conditions or safeguards
- The new 30-day implicit consent window applies to the extension plan
- Testing continues uninterrupted unless the authority objects to the extension
Developer implication: plan the testing timeline with the 12-month ceiling in mind. If the validation requirements cannot be met within 12 months, the system may need to enter the sandbox regime (Art.57), which has no fixed duration limit, or proceed to market placement with the evidence available.
@dataclass
class TestingPhase:
    """Track Art.58(6) testing duration compliance."""
    phase_number: int  # 1 = initial, 2 = extension
    submission: AuthoritySubmission
    actual_start_date: Optional[date] = None
    actual_end_date: Optional[date] = None
    terminated_early: bool = False
    termination_reason: Optional[str] = None

    MAX_PHASE_DAYS = 180
    MAX_PHASES = 2  # Art.58(6): one initial + one extension

    def is_duration_compliant(self) -> bool:
        return self.submission.plan.duration_days <= self.MAX_PHASE_DAYS

    def is_still_active(self, today: date) -> bool:
        if self.terminated_early:
            return False
        if not self.actual_start_date:
            return False
        if self.actual_end_date and today > self.actual_end_date:
            return False
        return True

class RealWorldTestingTracker:
    """Full lifecycle tracking for Art.58 real-world testing."""

    def __init__(self, plan: RealWorldTestingPlan):
        self.plan = plan
        self.phases: list[TestingPhase] = []
        self.consent_manager = TestingSubjectConsentManager(plan)
        self.suspension_events: list[dict] = []

    def add_phase(self, phase: TestingPhase) -> None:
        if len(self.phases) >= TestingPhase.MAX_PHASES:
            raise ValueError("Art.58(6): maximum 2 testing phases (initial + one extension)")
        if not phase.is_duration_compliant():
            raise ValueError(f"Art.58(6): phase duration {phase.submission.plan.duration_days} days exceeds 180-day limit")
        self.phases.append(phase)

    def total_planned_duration(self) -> int:
        return sum(p.submission.plan.duration_days for p in self.phases)

    def is_duration_compliant(self) -> bool:
        return self.total_planned_duration() <= 360  # 2 × 180 days

    def record_suspension(self, event_date: date, reason: str, authority_ordered: bool) -> None:
        # Parameter renamed from `date` to avoid shadowing datetime.date
        self.suspension_events.append({
            "date": event_date.isoformat(),
            "reason": reason,
            "authority_ordered": authority_ordered
        })

    def compliance_report(self, today: date) -> dict:
        active_phases = [p for p in self.phases if p.is_still_active(today)]
        return {
            "plan": self.plan.ai_system_name,
            "total_phases": len(self.phases),
            "active_phases": len(active_phases),
            "total_planned_duration_days": self.total_planned_duration(),
            "duration_compliant": self.is_duration_compliant(),
            "consent_summary": self.consent_manager.get_consent_audit_summary(),
            "suspension_events": len(self.suspension_events),
            "testing_status": "ACTIVE" if active_phases else "INACTIVE"
        }
Art.58(7): Multi-Jurisdiction Testing
The Single Submission Procedure
When testing is planned across multiple Member States, Art.58(7) provides a single submission procedure to avoid duplicating notifications across each national authority:
- The provider identifies a lead Member State authority (typically where testing begins or where the provider is established)
- A single Real-World Testing Plan covering all Member States is submitted to the lead authority
- The lead authority coordinates with the relevant authorities in other involved Member States
- The 30-day objection window runs from submission to the lead authority
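The single submission structure can be sketched as follows (`SingleSubmission` and its field names are illustrative, not a prescribed format):

```python
from dataclasses import dataclass, field

@dataclass
class SingleSubmission:
    """Art.58(7) single submission sketch: one plan, one lead authority."""
    member_states: list[str]   # all Member States covered, e.g. ["DE", "FR", "PL"]
    lead_member_state: str     # provider-chosen lead authority
    coordinated_states: list[str] = field(default_factory=list)

    def __post_init__(self):
        # The lead authority must sit in one of the testing Member States
        if self.lead_member_state not in self.member_states:
            raise ValueError("Lead authority must be in one of the testing Member States")
        # The lead authority coordinates with all other involved MSAs
        self.coordinated_states = [ms for ms in self.member_states
                                   if ms != self.lead_member_state]
```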
Practical Multi-Jurisdiction Architecture
| Element | Single MS Testing | Multi-MS Testing (Art.58(7)) |
|---|---|---|
| Submission target | National MSA | Lead authority (provider-chosen) |
| Coordination | Not required | Lead authority coordinates with other MSAs |
| Objection | Lead authority | Lead authority consolidates objections from all involved MSAs |
| Safeguards | Per-country legal variations | Highest common denominator across all MSs involved |
| Data transfers | Domestic | GDPR Chapter V for cross-border data flows |
Developer implication: for products testing in multiple EU markets simultaneously (e.g., Germany + France + Poland), the Art.58(7) single submission procedure avoids running separate 30-day objection windows in each jurisdiction. Choose the lead authority carefully — pick the authority in the Member State with the most developed AI supervisory capacity and where your entity is established.
National Law Variations
Art.58(7) does not fully harmonise testing conditions across Member States. Providers must check:
- Special data categories: Member State derogations under GDPR Art.9(4) may impose additional restrictions
- Sector-specific rules: financial services, healthcare, and law enforcement may have national regulatory requirements that Art.58 does not preempt
- Vulnerable group protections: some Member States have stricter requirements for testing involving children or persons with disabilities
Art.58(8): Authority Suspension Powers
Art.58(8) preserves authority powers to suspend or halt testing at any time if:
- A risk to health, safety, or fundamental rights is identified in the testing
- The provider is not complying with Art.58(5) safeguards (consent, withdrawal, transparency)
- The testing exceeds the scope of the approved Real-World Testing Plan
- The provider does not cooperate with authority oversight
Suspension vs. Termination
| Action | Trigger | Provider Response |
|---|---|---|
| Suspension | Risk identified — remediation possible | Stop testing, implement mitigation, notify authority, request resumption |
| Termination | Risk not remediable OR safeguard violations | Testing permanently halted; full authority investigation |
| Modification requirement | Plan gaps identified | Revise plan within authority-specified timeline |
Internal Suspension Obligations
Before the authority acts, the provider should have internal suspension triggers defined in the Real-World Testing Plan (Art.58(2) termination conditions). Internal suspension should be triggered by:
- Any safety incident involving a testing subject
- A fundamental rights harm identified in testing results
- Technical failures that could cause unplanned subject impact
- Discovery that the testing population includes subjects who did not provide informed consent
@dataclass
class SuspensionEvent:
    """Track Art.58(8) suspension or termination events."""
    event_type: str  # "internal_suspension", "authority_suspension", "termination"
    trigger_date: date
    trigger_description: str
    authority_notified: bool
    notification_date: Optional[date] = None
    # Resumption tracking
    remediation_completed: bool = False
    authority_approved_resumption: bool = False
    resumption_date: Optional[date] = None

    def days_suspended(self, today: date) -> int:
        end = self.resumption_date or today
        return (end - self.trigger_date).days

    def is_resolved(self) -> bool:
        if self.event_type == "termination":
            return False  # Termination is permanent
        return self.remediation_completed and self.authority_approved_resumption
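The internal triggers listed above can be wired to a simple incident-to-action mapping, so on-call engineers do not have to interpret the testing plan under pressure. A minimal sketch (the trigger and action names are illustrative):

```python
# Illustrative mapping of the internal suspension triggers described above
INTERNAL_SUSPENSION_TRIGGERS = {
    "subject_safety_incident": "suspend_immediately",
    "fundamental_rights_harm": "suspend_immediately",
    "technical_failure_subject_impact": "suspend_immediately",
    "unconsented_subject_discovered": "suspend_immediately",
}

def evaluate_incident(incident_type: str) -> str:
    """Return the response action for an incident; unknown types escalate for review."""
    return INTERNAL_SUSPENSION_TRIGGERS.get(incident_type, "escalate_for_review")
```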
Art.58 × GDPR Intersection
Data Protection as a Parallel Obligation
Art.58 does not modify GDPR — it operates alongside it. Real-world testing necessarily involves processing personal data about testing subjects, and GDPR applies in full:
| GDPR Obligation | Art.58 Testing Context |
|---|---|
| Lawful basis (Art.6) | Explicit consent (Art.6(1)(a)) or legitimate interests (Art.6(1)(f) — document carefully) |
| Special category data (Art.9) | Explicit consent required; Art.58(5)(a) consent may be insufficient alone |
| Data subject rights (Art.15–22) | Access, erasure, portability rights run in parallel with testing |
| Data minimisation (Art.5(1)(c)) | Testing plan should specify minimum data collection |
| Purpose limitation (Art.5(1)(b)) | Testing data cannot be repurposed beyond the testing plan objective |
| DPO consultation | Required where processing is large-scale, involves vulnerable groups, or uses systematic profiling |
| DPIA (Art.35) | Mandatory for systematic profiling, large-scale special category processing, or public surveillance |
DPIA Threshold Analysis for Art.58 Testing
| Testing Characteristic | DPIA Required? |
|---|---|
| Testing biometric AI on identified subjects | Yes — systematic profiling |
| Testing healthcare AI with patient health data | Yes — large-scale special category |
| Testing recruitment AI on job applicants | Yes — systematic employment profiling |
| Testing critical infrastructure AI (anonymised data) | Analyse — depends on re-identification risk |
| Testing education AI on minors | Yes — vulnerable group + profiling |
| Limited-scope testing with aggregated metrics only | Likely not — no individual profiling |
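The threshold table can be approximated as a conservative boolean screen. This is a rough sketch only; a real DPIA decision needs the GDPR Art.35(3) criteria and the national DPA's processing lists, and the parameter names here are assumptions:

```python
def dpia_required(systematic_profiling: bool,
                  large_scale_special_category: bool,
                  vulnerable_group_profiling: bool) -> bool:
    """Rough GDPR Art.35 screen mirroring the table above (not legal advice).
    Aggregated-metrics-only testing falls through to False when no
    trigger applies."""
    return (systematic_profiling
            or large_scale_special_category
            or vulnerable_group_profiling)
```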
Art.58 × Art.57: Choosing the Right Regime
Decision Framework
| Factor | Art.57 Sandbox | Art.58 Real-World Testing |
|---|---|---|
| Authority involvement | High — supervised partnership | Low — notification only |
| Regulatory safe harbour | Yes — formal sandbox protection | Partial — compliance with Art.58 ≠ compliance with full Act |
| Timeline | Months (application + approval) | 30 days maximum |
| Pre-market stage | Early development, training, validation | Near-market validation |
| SME support | Priority access, authority guidance | No dedicated SME support |
| Personal data rules | Art.57(10) special provision | GDPR in full |
| Liability | Continues under applicable law | Continues under applicable law |
| Extension | Authority-negotiated | Maximum 12 months total |
Rule of thumb:
- Use Art.57 sandbox when your system is in active development and you want regulatory guidance and reduced compliance burden during development
- Use Art.58 when your system is near market-ready and you need real-world validation data before the final conformity assessment
Can Art.57 and Art.58 Be Used Sequentially?
Yes. A provider can:
- Join an Art.57 sandbox for supervised development (no fixed end date)
- Exit the sandbox when development is complete
- Conduct Art.58 real-world testing to gather market-representative validation data
- Complete the Annex IV technical file, conformity assessment, and DoC
- Register in the EU AI Database (Art.71) and place on the market
This sequential approach maximises the innovation support regime before taking on the full compliance burden.
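The sequential pathway can be expressed as a simple stage machine (the `Stage` names map to the numbered steps above and are illustrative):

```python
from enum import Enum

class Stage(Enum):
    SANDBOX = "art57_sandbox"
    REAL_WORLD_TESTING = "art58_testing"
    CONFORMITY = "annex_iv_conformity"
    REGISTERED = "art71_registered"

# Allowed forward transitions in the sequential pathway described above
NEXT_STAGE = {
    Stage.SANDBOX: Stage.REAL_WORLD_TESTING,
    Stage.REAL_WORLD_TESTING: Stage.CONFORMITY,
    Stage.CONFORMITY: Stage.REGISTERED,
}

def advance(stage: Stage) -> Stage:
    """Move to the next lifecycle stage; registration is terminal."""
    if stage not in NEXT_STAGE:
        raise ValueError(f"{stage.value} is the final stage")
    return NEXT_STAGE[stage]
```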
CLOUD Act × Art.58 Testing Data: Jurisdiction Risk
Real-world testing generates sensitive data: AI system performance data, subject behaviour data, model outputs, consent records, and risk assessment documentation. If any of this data is stored on US-headquartered cloud infrastructure (AWS, Azure, GCP, Oracle Cloud), the US CLOUD Act (18 U.S.C. § 2713) may create compellability risk.
The Art.58 CLOUD Act Risk Surface
| Data Type | Art.58 Testing Context | CLOUD Act Risk |
|---|---|---|
| Consent records | Subject consent forms and withdrawal records | High — directly identifies subjects |
| Testing results | Model output data linked to test subjects | High — may contain personal data |
| Authority correspondence | Plan submission, objection handling | Medium — regulatory communications |
| Risk assessment documentation | Internal testing risk analysis | Medium — trade secret + personal data overlap |
| Subject interaction logs | How subjects interacted with tested AI | High — behavioural data |
Mitigation Architecture
class Art58DataJurisdictionAssessment:
    """Evaluate CLOUD Act compellability risk for Art.58 testing data."""

    # Reference matrix of risk factors (informational)
    CLOUD_ACT_RISK_FACTORS = {
        "us_cloud_infrastructure": True,  # AWS, Azure, GCP, Oracle
        "consent_records_on_us_servers": True,
        "subject_personal_data_on_us_servers": True,
        "authority_correspondence_on_us_servers": True,
    }

    def __init__(self, infrastructure: dict[str, str]):
        """infrastructure: {"data_type": "hosting_provider"}"""
        self.infrastructure = infrastructure
        self.us_providers = {"aws", "azure", "gcp", "oracle_cloud"}

    def assess_risk(self) -> dict:
        high_risk_categories = []
        for data_type, provider in self.infrastructure.items():
            if any(us in provider.lower() for us in self.us_providers):
                high_risk_categories.append(data_type)
        return {
            "high_risk_data_categories": high_risk_categories,
            "cloud_act_exposure": len(high_risk_categories) > 0,
            "recommendation": (
                "Migrate consent records and subject data to EU-sovereign infrastructure "
                "to eliminate CLOUD Act compellability risk for Art.58 testing data."
                if high_risk_categories else
                "No CLOUD Act exposure detected for current infrastructure configuration."
            ),
            "preferred_infrastructure": "EU-sovereign PaaS (e.g., sota.io) for testing data storage"
        }
Key recommendation: store Art.58 testing data — especially consent records, subject interaction logs, and authority correspondence — exclusively on EU-sovereign infrastructure where no US parent company can be compelled under the CLOUD Act. Using EU-native infrastructure also simplifies your GDPR data transfer compliance (no Chapter V adequacy or SCC analysis needed).
Art.58 Compliance Checklist (40 Items)
Plan Preparation (Before Submission)
- 1. Confirmed AI system qualifies as high-risk under Annex I or Annex III
- 2. Provider or prospective provider status established and documented
- 3. Real-World Testing Plan prepared with all Art.58(2) mandatory content
- 4. Testing objective clearly defined and tied to specific validation requirements
- 5. Geographic scope confirmed — single or multi-Member State
- 6. Duration planned within 6-month (180-day) initial phase limit
- 7. Subject group identified with estimated participant count
- 8. Vulnerable group assessment completed
- 9. Consent mechanism designed for all subject categories (including minors)
- 10. Withdrawal mechanism designed — accessible, no-barrier opt-out
Submission and Authority Interaction
- 11. Correct market surveillance authority identified for each Member State
- 12. Multi-jurisdiction plan submitted to lead authority under Art.58(7) (if applicable)
- 13. Submission confirmation and reference number obtained
- 14. 30-day objection deadline tracked in compliance calendar
- 15. Testing commencement date set after objection window (unless explicit approval)
- 16. Authority objection monitoring process established
- 17. Plan revision process defined for objection scenarios
- 18. Extension plan prepared (if 6-month extension anticipated)
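The deadlines behind items 6 and 14 can be tracked with a small helper. This is a sketch: the 30-day objection window and the 6-month/12-month limits come from the text above, but the calendar-day counting convention should be confirmed with the relevant market surveillance authority, and the field names are illustrative:

```python
from datetime import date, timedelta

def art58_deadlines(plan_submitted: date, testing_start: date) -> dict:
    """Compute the Art.58 objection window and testing duration limits.

    Assumes plain calendar days; confirm the counting convention with
    the relevant market surveillance authority.
    """
    objection_deadline = plan_submitted + timedelta(days=30)
    initial_phase_end = testing_start + timedelta(days=180)  # 6-month initial phase
    absolute_end = testing_start + timedelta(days=365)       # 12-month max incl. extension
    return {
        "objection_deadline": objection_deadline,
        "testing_may_start": objection_deadline,  # absent explicit earlier approval
        "initial_phase_end": initial_phase_end,
        "absolute_end_with_extension": absolute_end,
    }
```

Feeding these dates into the compliance calendar (item 14) keeps the objection window and the 180-day clock from being tracked by hand.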
Subject Safeguards (Art.58(5))
- 19. Informed consent obtained before any testing subject inclusion
- 20. Consent records document all four transparency elements (purpose, data use, provider identity, withdrawal rights)
- 21. Withdrawal mechanism tested and verified as accessible
- 22. Non-adverse-consequences policy documented and enforced
- 23. Vulnerable group protections applied: minors, persons with disabilities, elderly persons
- 24. Guardian/advocate consent process for minors and incapacitated persons
- 25. Enhanced monitoring for vulnerable subject groups in place
- 26. Plain-language transparency materials prepared in relevant Member State languages
Data Protection (GDPR × Art.58)
- 27. Lawful basis for processing identified and documented for each data category
- 28. Special category data (Art.9) explicit consent obtained where applicable
- 29. DPIA conducted where required (profiling, large-scale, vulnerable groups)
- 30. Data minimisation principle applied — only necessary data collected
- 31. Testing data purpose limitation — data not repurposed beyond testing objective
- 32. Subject rights process (access, erasure, portability) operational during testing
- 33. Withdrawal → data erasure link tested (consent withdrawal triggers GDPR Art.17)
- 34. Cross-border data transfer compliance (if multi-MS or third-country transfers)
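Item 33, the withdrawal-to-erasure link, is easiest to verify when withdrawal and erasure share a single code path, so the two cannot drift apart. A sketch under an assumed in-memory store; the real storage interface will differ:

```python
from datetime import datetime, timezone

class TestingDataStore:
    """Minimal in-memory stand-in for the testing data store (illustrative)."""
    def __init__(self):
        self.subject_data: dict[str, list] = {}

    def erase_subject(self, subject_id: str) -> int:
        """GDPR Art.17 erasure: remove every record held for the subject."""
        return len(self.subject_data.pop(subject_id, []))

def withdraw_consent(store: TestingDataStore, consent_log: dict, subject_id: str) -> dict:
    """Record the withdrawal, then erase, in one code path with no gap."""
    consent_log[subject_id] = {"withdrawn_at": datetime.now(timezone.utc)}
    erased = store.erase_subject(subject_id)
    return {"subject_id": subject_id, "records_erased": erased}
```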
Risk Management and Suspension
- 35. Internal suspension triggers defined and documented in testing plan
- 36. Suspension-ready technical architecture — can testing stop within hours?
- 37. Risk monitoring system active during testing with real-time alerting
- 38. Authority notification process for internal suspension events
- 39. Testing results documentation system — timestamps, outputs, incidents
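Items 35 to 38 imply a monitor that maps the predefined triggers in the testing plan to an immediate stop decision. A sketch only: the trigger names and thresholds below are illustrative and must come from your own documented testing plan:

```python
from dataclasses import dataclass, field

@dataclass
class SuspensionMonitor:
    """Checks live testing metrics against the internal suspension
    triggers documented in the testing plan (item 35)."""
    # Illustrative thresholds; take the real values from your testing plan
    max_serious_incidents: int = 0
    max_error_rate: float = 0.05
    events: list = field(default_factory=list)

    def check(self, serious_incidents: int, error_rate: float) -> bool:
        """Return True if testing must be suspended now."""
        triggers = []
        if serious_incidents > self.max_serious_incidents:
            triggers.append("serious_incident")
        if error_rate > self.max_error_rate:
            triggers.append("error_rate_exceeded")
        if triggers:
            self.events.append(triggers)  # feeds the authority notification (item 38)
        return bool(triggers)
```

Wiring `check` into the real-time alerting of item 37 is what makes the "can testing stop within hours?" question of item 36 answerable.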
Infrastructure and Post-Testing
- 40. Testing data stored on EU-sovereign infrastructure (CLOUD Act risk eliminated)
Art.58 × Full EU AI Act Compliance Chain
Art.58 testing is a pre-compliance validation tool — successful testing does not replace the full compliance pathway. After testing:
| Step | Obligation | Article |
|---|---|---|
| Technical documentation | Annex IV technical file | Art.11 |
| Risk management system | Full Art.9 documentation | Art.9 |
| Data governance | Full Art.10 compliance | Art.10 |
| Logging and monitoring | Technical implementation | Art.12 |
| Transparency to deployers | Instructions for use | Art.13 |
| Conformity assessment | Third-party (Annex I systems) or self-assessment | Art.43 |
| Declaration of conformity | Art.48 DoC signed | Art.48 |
| CE marking | Applied before market placement | Art.49 |
| EU AI Database registration | EUAIDB registration | Art.71 |
Real-world testing results under Art.58 will be incorporated into the technical documentation — they are evidence for the conformity assessment, not a replacement for it.
Key Takeaways for Developers
- Art.58 is faster than Art.57: a 30-day tacit-approval window vs. months of sandbox onboarding — but it provides less regulatory support
- The 6-month limit is strict: plan your validation timeline before submission; the clock runs from testing start
- Informed consent is non-negotiable: no Art.58 testing without prior, documented, informed consent for each subject
- Vulnerable groups require extra work: plan protections for minors, persons with disabilities, and elderly subjects before submitting the plan
- Authority can stop you at any time: build suspension-ready architecture and internal termination triggers
- GDPR runs in parallel: Art.58 does not modify GDPR — conduct your DPIA before testing starts
- Testing ≠ compliance: Art.58 validates performance; full Annex IV documentation, conformity assessment, and registration are still required before market placement
- CLOUD Act risk is real: store consent records and subject data on EU-sovereign infrastructure to avoid US compellability exposure
This guide covers Art.58 as of the EU AI Act Regulation (EU) 2024/1689 applicable from 2 August 2025. Consult your legal counsel and relevant national market surveillance authority for jurisdiction-specific implementation guidance.
See Also
- EU AI Act Art.57: AI Regulatory Sandboxes — Developer Guide — Art.57 establishes the formal sandbox framework that Art.58 testing operates alongside; sandbox-based testing has fewer consent requirements but longer setup times
- EU AI Act Art.59: Personal Data Processing in AI Development Sandboxes — Developer Guide — Art.59 governs personal data handling inside Art.57 sandboxes; Art.58 real-world testing requires its own GDPR basis outside the sandbox
- EU AI Act Art.60: SME and Startup Measures for AI Innovation — Developer Guide — Art.60 gives SMEs priority access to sandboxes and simplified guidance; SMEs conducting Art.58 testing can combine both sets of measures
- EU AI Act Art.43: Conformity Assessment for High-Risk AI Systems — Developer Guide — Art.58 real-world testing generates performance evidence that feeds directly into the Art.43 conformity assessment procedure
- EU AI Act Art.9: Risk Management System for High-Risk AI — Developer Guide — risk data collected during Art.58 testing must be integrated into the Art.9 risk management system before market placement