EU AI Act Art.5 Prohibited AI Practices: Developer Guide (Fully Applicable from February 2025)
Article 5 of the EU AI Act (Regulation (EU) 2024/1689) contains the most immediately consequential provisions for developers: a flat prohibition on six categories of AI practices, with no grace period. While most of the AI Act's obligations (high-risk conformity assessments, GPAI documentation, notification requirements) phase in through 2026–2027, Article 5 became fully applicable on 2 February 2025, six months after the Act entered into force on 1 August 2024.
If you build, deploy, or integrate AI systems that reach EU users, Article 5 is already law. The penalties are the highest in the entire regulation: up to €35,000,000 or 7% of total worldwide annual turnover, whichever is higher (Art. 99(3)).
This guide covers every prohibited practice, who it applies to, the legal edge cases (there are several), how to audit third-party AI APIs you use, and what EU-native infrastructure means for your legal exposure.
Why Article 5 Matters Right Now
The AI Act's full applicability timeline:
| Date | What Becomes Applicable |
|---|---|
| 02.02.2025 | Art.5 — Prohibited Practices (this guide) |
| 02.08.2025 | GPAI model obligations (Chapter V, Art. 51–56) |
| 02.08.2026 | High-Risk AI obligations (Annex III/IV) |
| 02.08.2027 | AI systems in regulated products (Annex I) |
Article 5, together with the general provisions of Chapter I (including the AI-literacy duty in Art. 4), applied from February 2025, before any other substantive obligation was operational. National market surveillance authorities (MSAs) in Germany (BNetzA), France (CNIL), the Netherlands (Autoriteit Persoonsgegevens), and others have been issuing guidance since Q4 2024. Enforcement actions are expected in 2025–2026.
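For compliance dashboards, the timeline above can be encoded as a small date lookup. This is an illustrative sketch; the labels are informal shorthand, not statutory language:

```python
from datetime import date

# Applicability tranches from the table above (informal labels)
APPLICABILITY = [
    (date(2025, 2, 2), "Art.5 prohibited practices"),
    (date(2025, 8, 2), "GPAI model obligations"),
    (date(2026, 8, 2), "High-risk AI obligations (Annex III/IV)"),
    (date(2027, 8, 2), "AI systems in regulated products (Annex I)"),
]

def obligations_in_force(today: date) -> list[str]:
    """Return the obligation tranches already applicable on `today`."""
    return [label for start, label in APPLICABILITY if today >= start]
```

For any date in March 2025, only the Art.5 tranche is returned; by August 2027 all four apply.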
Who Is Covered?
Article 5 applies to:
- Providers — entities that develop AI systems and place them on the EU market or put them into service in the EU (Art. 3(3))
- Deployers — entities that use AI systems under their own authority in the EU (Art. 3(4))
- Importers and distributors — entities that bring AI systems to the EU market without being the original developer
Critical point for SaaS developers: If you build a product that exposes prohibited AI capabilities — even via an external API (OpenAI, Google Cloud Vision, AWS Rekognition, etc.) — you are at minimum the deployer, and the prohibition applies to you. If you market the resulting system under your own name, you may additionally qualify as its provider. The API provider bears separate obligations as provider; your integration does not transfer your liability away.
Territorial scope (Art. 2(1)): The prohibition applies regardless of where the AI system is established if:
- The output of the system is used in the EU
- Affected persons are located in the EU
A US-incorporated company with no EU office, using an API to process EU users' data, still faces Art.5 obligations.
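The territorial test reduces to a simple predicate. This is a minimal sketch of Art. 2(1)'s trigger conditions; the function and parameter names are illustrative:

```python
def art5_in_scope(established_in_eu: bool,
                  output_used_in_eu: bool,
                  affected_persons_in_eu: bool) -> bool:
    """Art. 2(1) sketch: any one trigger brings a system into Art.5 scope."""
    return established_in_eu or output_used_in_eu or affected_persons_in_eu

# A US-only company whose API output reaches EU users is in scope:
us_company_in_scope = art5_in_scope(established_in_eu=False,
                                    output_used_in_eu=True,
                                    affected_persons_in_eu=True)
```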
The Six Prohibited Practices (Art. 5(1))
Art. 5(1)(a) — Subliminal Manipulation Below Conscious Perception
What is prohibited: AI systems that deploy subliminal techniques beyond a person's consciousness — or deliberately exploit subconscious mechanisms — to materially distort their behavior in a way that causes or is reasonably likely to cause significant harm.
Legislative intent: This targets AI systems that exploit cognitive biases, attention mechanisms, and psychological vulnerabilities without the user being able to consciously resist or recognize the influence. Examples from the legislative record include: ultrasonic audio manipulation, subliminal image embedding in video streams, AI-optimized "dark pattern" sequences that bypass conscious decision-making.
Developer scope: Dark patterns that are deliberately crafted using AI optimization to bypass users' rational agency — not merely persuasive UI design. The line is between persuasion (lawful) and subliminal manipulation (prohibited).
Edge case: AI-powered A/B testing that optimizes for conversion using psychological models is not automatically prohibited unless it specifically deploys subliminal techniques (below conscious awareness) AND causes significant harm. However, the combination of AI optimization + vulnerability exploitation (Art.5(1)(b)) can overlap.
# Risk assessment: does your feature cross Art.5(1)(a)?
class Art5aRiskChecker:
    """
    Check whether an AI feature may constitute subliminal manipulation.
    Art.5(1)(a) requires a qualifying technique (subliminal or
    subconscious-exploiting), material distortion of behavior, and
    significant harm (actual or reasonably likely).
    """
    def assess(self,
               below_conscious_awareness: bool,
               exploits_subconscious_mechanism: bool,
               causes_significant_harm: bool) -> dict:
        # Factor 1: is the influence mechanism below conscious perception?
        #   Infrasonic/ultrasonic audio, sub-threshold visual frames
        #   (single frames displayed for roughly <16 ms), undisclosed
        #   psychological priming.
        # Factor 2: does it exploit subconscious cognitive mechanisms?
        #   Fear-of-missing-out engineering, loss-aversion exploitation,
        #   parasocial relationship manipulation, AI-optimized dark patterns.
        # Factor 3: does it cause, or is it reasonably likely to cause,
        #   significant harm? Financial harm, health decisions, voting
        #   behavior, employment decisions.
        # A qualifying technique (Factor 1 or 2) combined with significant
        # harm (Factor 3) signals likely prohibition: escalate to legal review.
        likely_prohibited = (
            (below_conscious_awareness or exploits_subconscious_mechanism)
            and causes_significant_harm
        )
        return {
            "below_conscious_awareness": below_conscious_awareness,
            "exploits_subconscious_mechanism": exploits_subconscious_mechanism,
            "causes_significant_harm": causes_significant_harm,
            "likely_prohibited": likely_prohibited,
        }
Art. 5(1)(b) — Vulnerability Exploitation
What is prohibited: AI systems that exploit the specific vulnerabilities of a group of persons due to their age, disability, or social or economic situation to materially distort behavior, causing or likely causing significant harm.
Who is protected:
- Children (age vulnerability)
- Persons with mental health conditions (disability)
- Persons in financial distress (economic situation)
- Persons in precarious social situations
Practical examples from the legislative record:
- AI-targeted advertising for predatory loans directed at people with demonstrably poor credit histories
- AI-optimized social media engagement systems specifically tuned for children under 13
- AI debt collection systems that adapt messaging to maximize psychological pressure on financially distressed individuals
- Gaming mechanics optimized by AI to exploit gambling disorder patterns
Developer implications: If your ML pipeline segments users by vulnerability characteristics and then applies differential engagement or conversion strategies to those segments, you have a structural Art.5(1)(b) risk, even if the individual components are lawful in isolation.
class VulnerabilityExploitationAudit:
"""
Audit whether an AI recommendation/targeting system may violate Art.5(1)(b).
"""
PROTECTED_CHARACTERISTICS = [
"age_under_18",
"age_over_70",
"mental_health_condition_indicator",
"financial_distress_indicator", # e.g., credit score below threshold
"addiction_pattern_indicator", # e.g., gambling disorder signals
"social_isolation_indicator",
]
def check_targeting_pipeline(self, model_features: list, targeting_objective: str) -> dict:
"""
Returns risk assessment for a targeting pipeline.
High risk if:
1. Model features include protected vulnerability characteristics
2. Targeting objective increases engagement/conversion for vulnerable segments
3. Output causes or likely causes significant harm
"""
vulnerability_features_used = [
f for f in model_features
if any(pc in f.lower() for pc in self.PROTECTED_CHARACTERISTICS)
]
return {
"vulnerability_features_detected": vulnerability_features_used,
"risk_level": "HIGH" if vulnerability_features_used else "LOW",
"recommendation": (
"Legal review required — Art.5(1)(b) likely applies"
if vulnerability_features_used
else "No prohibited vulnerability targeting detected"
)
}
Art. 5(1)(c) — Social Scoring
What is prohibited: AI systems that evaluate or classify natural persons or groups of persons over a certain period of time based on their social behavior or known, inferred, or predicted personal or personality characteristics, where the resulting social score leads to detrimental or unfavourable treatment in social contexts unrelated to the contexts in which the data was originally generated or collected, or to treatment that is unjustified or disproportionate to the social behavior or its gravity.
This is the "China social credit" prohibition. The EU legislator explicitly targeted systems like the Chinese Social Credit System (SCS) that aggregate behavioral data across domains (financial, social, civic) to generate scores that then affect access to services, travel, or opportunities.
What is required:
- An AI system evaluates or classifies natural persons based on social behavior or personal/personality characteristics over a period of time
- The resulting score leads to detrimental or unfavourable treatment
- That treatment occurs in a social context unrelated to the original data collection, OR is unjustified or disproportionate to the behavior
Public and private actors: Unlike earlier drafts, the final text of Art. 5(1)(c) is not limited to public authorities; private-sector social scoring is prohibited on the same terms. Narrow, single-context assessments such as creditworthiness scoring by banks or insurance risk scoring are not automatically caught, because they typically stay within the context in which the data was collected, but they may still be subject to GDPR profiling restrictions (Art. 22 GDPR), the AI Act's High-Risk AI requirements (Annex III), and anti-discrimination law.
Developer implication: If you build AI infrastructure for government or enterprise clients (SmartCity platforms, social services case management, public benefits eligibility systems), your architecture must not aggregate behavioral data across unrelated domains to produce composite person scores that inform access decisions.
class SocialScoringChecker:
    """
    Compliance check for AI scoring systems under the Art.5(1)(c)
    social scoring prohibition. The final text covers public-sector
    and private-sector deployments alike; public authorities remain
    the primary enforcement focus.
    """
    def check_scoring_system(self,
                             data_sources: list,
                             downstream_decisions: list) -> dict:
        # Check for cross-domain data aggregation (composite person scores)
        data_domains = {source.get("domain") for source in data_sources}
        cross_domain = len(data_domains) > 1
        # Check for detrimental treatment in a context unrelated to
        # the contexts in which the data was originally collected
        unrelated_treatment = any(
            decision.get("context") not in data_domains
            for decision in downstream_decisions
        )
        risk = "PROHIBITED" if (cross_domain and unrelated_treatment) else "REVIEW_NEEDED"
        return {
            "cross_domain_aggregation": cross_domain,
            "unrelated_context_treatment": unrelated_treatment,
            "risk": risk,
        }
Art. 5(1)(d) — Real-Time Remote Biometric Identification in Public Spaces
What is prohibited: Real-time remote biometric identification (RBI) systems used by law enforcement in publicly accessible spaces, except within narrowly defined exceptions.
What "real-time" means: The identification and use of the result occurs without meaningful delay — the system identifies a person in motion in public space while they are still present (or immediately after) in order to take action.
The exceptions (Art. 5(1)(d)(i-iii)): Real-time RBI is permitted for law enforcement only when:
- Targeted search for missing persons and victims of trafficking/sexual exploitation — identifying a specific named individual
- Imminent specific threat — preventing a real, current, or foreseeable terrorist attack or threat to life
- Prosecution of serious criminal offenses — locating or identifying a suspect of an offense listed in Annex II and punishable by a custodial sentence of at least four years
Critical procedural requirements for exceptions:
- Prior authorization by a judicial authority or an independent administrative authority whose decision is binding, except in duly justified urgent cases (where authorization must be requested without undue delay, at the latest within 24 hours)
- Prior fundamental rights impact assessment
- Registration in the EU database for high-risk AI systems
- Notification to the relevant market surveillance authority
What is NOT covered by Art.5(1)(d):
- Post-hoc biometric identification (reviewing footage after the fact) — this is High-Risk AI under Annex III, not a flat prohibition
- Biometric identification not conducted in real-time
- Biometric verification (1:1 matching for authentication), as distinct from identification (1:N matching against a reference database); verification falls outside the prohibition
- Private sector facial recognition — not covered by Art.5(1)(d), but may be subject to GDPR biometric data protections (Art. 9 GDPR)
Developer impact: If you build live facial recognition or real-time crowd identification capabilities and sell to law enforcement clients in the EU, Art.5(1)(d) directly governs your product's deployment. The exceptions are extremely narrow and require judicial authorization. Real-time CCTV analytics for law enforcement that identify individuals against watch lists are the core target.
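The distinctions in this section (real-time vs. post-hoc, verification vs. identification, law-enforcement vs. private use) can be collapsed into a rough triage function. This is an illustrative sketch, not legal advice; the status labels are invented here:

```python
def triage_biometric_system(mode: str,          # "realtime" | "posthoc"
                            matching: str,      # "1:1" | "1:N"
                            public_space: bool,
                            law_enforcement: bool) -> str:
    """Rough Art.5(1)(d) triage for a biometric product (illustrative)."""
    if matching == "1:1":
        # Verification/authentication is not 'identification' at all
        return "VERIFICATION_NOT_COVERED"
    if mode == "posthoc":
        # Retrospective identification: High-Risk (Annex III), not prohibited
        return "HIGH_RISK_ANNEX_III"
    if not (public_space and law_enforcement):
        # Outside the Art.5(1)(d) perimeter; check GDPR Art.9 instead
        return "OUTSIDE_ART5_1D_CHECK_GDPR"
    # Real-time, 1:N, public space, law enforcement: prohibited unless a
    # statutory exception plus prior authorization applies
    return "PROHIBITED_UNLESS_EXCEPTION"
```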
Art. 5(1)(e) — Emotion Recognition at Work and Educational Institutions
What is prohibited: AI systems that infer the emotional state of natural persons in workplace or educational institution contexts.
Scope: This prohibition applies specifically to these two sensitive contexts. Emotion recognition AI in healthcare (for patient wellbeing monitoring), for personal use, or in entertainment contexts is not covered by this specific prohibition (though other GDPR obligations apply).
What emotion recognition means: AI systems that detect, classify, or infer emotional states (happiness, sadness, frustration, anxiety, engagement, boredom) from:
- Facial expression analysis
- Voice tone/sentiment analysis
- Body language interpretation
- Physiological signals (heart rate, galvanic skin response via wearables)
- Behavioral pattern inference (keyboard dynamics, mouse movement patterns)
Real-world examples that are now prohibited:
- Employee productivity monitoring that infers "engagement" or "stress" levels from webcam feeds
- HR screening tools that analyze video interviews for emotional indicators
- Student attention monitoring that uses webcam to detect when students are "engaged" vs. "distracted"
- Call center agent emotion monitoring to flag "frustrated" employees
- Automated exam supervision that uses emotional state inference to detect cheating indicators
The exception: Art. 5(1)(e) does not cover AI systems intended for medical or safety reasons (for example, clinical monitoring of a patient's emotional state, or fatigue detection for safety-critical roles such as professional drivers).
class EmotionRecognitionCompliance:
"""
Compliance checker for emotion recognition AI features.
Art.5(1)(e) prohibition applies in workplace and educational contexts.
"""
PROHIBITED_DEPLOYMENT_CONTEXTS = [
"workplace",
"office",
"remote_work_monitoring",
"employee_productivity",
"hr_screening",
"job_interview",
"classroom",
"online_education",
"exam_supervision",
"student_monitoring",
]
PERMITTED_CONTEXTS = [
"healthcare_clinical", # Medical therapeutic with consent
"personal_consumer", # Personal fitness/wellness apps
"entertainment", # Gaming, interactive media
"research", # Academic research with ethics approval
]
def check_deployment(self,
emotion_feature: str,
deployment_context: str,
has_medical_authorization: bool = False) -> dict:
is_prohibited_context = any(
ctx in deployment_context.lower()
for ctx in self.PROHIBITED_DEPLOYMENT_CONTEXTS
)
if is_prohibited_context and not has_medical_authorization:
return {
"status": "PROHIBITED",
"legal_basis": "EU AI Act Art.5(1)(e)",
"penalty_exposure": "Up to €35M or 7% global turnover",
"action_required": "Remove emotion inference capability from this deployment context",
}
return {
"status": "PERMITTED",
"context": deployment_context,
"note": "Verify GDPR Art.9 compliance if biometric data is processed",
}
Art. 5(1)(f) — AI-Based Criminal Risk Prediction Targeting Individuals
What is prohibited: AI systems used by law enforcement that assess or predict the risk of a natural person committing a criminal offense, where the assessment is based solely on profiling of a natural person or on assessing their personality traits and characteristics.
What this targets: Predictive policing systems that generate individual criminal risk scores — "this person has a 73% likelihood of committing a crime in the next 6 months" — based on demographic and behavioral profiling. These systems have been used in the US (PredPol, HunchLab, COMPAS for recidivism) and some EU jurisdictions (Palantir deployments in Germany, Netherlands, UK pre-Brexit).
The critical carve-out: The prohibition applies to assessments based solely on profiling or on assessing personality traits and characteristics. The Act expressly preserves AI systems that support a human assessment of a person's involvement in criminal activity which is already based on objective and verifiable facts directly linked to that activity. The word "solely" is the operative constraint: the red line is an individual risk score generated from profiling alone that serves as the primary or sole basis for a law-enforcement decision.
Also prohibited under Art.5(1)(f): AI systems that predict future criminal behavior based on inferences from location, movement patterns, social network analysis, or behavioral data without any connection to actual specific criminal suspicion. Pre-crime style targeting of individuals who have not been connected to any specific crime is what the Article targets.
What remains lawful:
- Area-level predictive policing (predicting where crimes are likely to occur by geography/time, without individual targeting)
- AI systems that analyze specific criminal evidence to assist in identifying suspects after a crime has occurred
- AI-assisted forensic analysis of seized devices
- Facial recognition used post-hoc to identify a known suspect
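The "solely profiling" test and the area-level carve-out can be sketched as a triage helper. The labels and parameter names are illustrative, not statutory terms:

```python
def predictive_policing_status(target_level: str,    # "individual" | "area"
                               based_solely_on_profiling: bool,
                               tied_to_specific_evidence: bool) -> str:
    """Rough Art.5(1)(f) triage; the 'solely' test is the operative constraint."""
    if target_level == "area":
        # Geographic/temporal crime forecasting without individual targeting
        return "LAWFUL_AREA_LEVEL"
    if based_solely_on_profiling and not tied_to_specific_evidence:
        # Individual risk score from profiling alone, no concrete suspicion
        return "PROHIBITED"
    # Profiling as one factor in a human-supervised, evidence-based assessment
    return "REVIEW_REQUIRED_HUMAN_OVERSIGHT"
```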
The SaaS Platform Liability Question
The most important practical question for developers is: If I use a third-party AI API that provides prohibited capabilities, am I liable under Art.5?
The answer under the AI Act's framework is yes, as the deployer.
The Deployer Responsibility Chain
OpenAI / Google / AWS / Azure
↓ (Provider obligations: Art. 16 et seq.; for GPAI models, Art. 53–55)
Your SaaS Platform
↓ (Deployer obligations: Art.26, including Art.5 prohibition)
Your End Customer
↓ (End customer — may itself be a deployer if it uses the system under its own authority)
EU User
When you integrate an AI API into a product that reaches EU users:
- You become a deployer (Art. 3(4)) of that AI system
- Art. 26 requires deployers to use AI systems in accordance with the provider's instructions for use, and the Art.5 prohibitions bind deployers directly, independent of those instructions
- Art.5 prohibitions apply to you directly — you cannot argue that the API provider is responsible for how you deploy their capabilities
Concrete example: If you build a workflow automation tool that allows customers to connect a third-party emotion-detection API to their HR software, and a customer uses it to monitor employee emotional states during work, your platform is the deployer. The Art.5(1)(e) prohibition applies to you, even if:
- You didn't build the emotion detection model
- Your customer chose to deploy the feature this way
- Your terms of service prohibit this use
The "Configuration Gateway" Risk
Many SaaS platforms offer AI features as configurable building blocks. Under Art.5, the prohibitions flow through to how your platform is actually used, not just how you intend it to be used. This creates a structural risk for:
- No-code/low-code AI platforms that expose configurable AI pipelines
- AI model marketplaces that allow deployment of third-party models
- Analytics dashboards that can be configured to analyze employee behavioral data
- HR tech platforms that integrate third-party AI vendor plugins
Mitigation strategy: If your platform could be configured to implement a prohibited practice, you need a deployment gate — technical controls that prevent prohibited configurations, contractual use restrictions, or ideally both.
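A deployment gate of this kind can start as a simple validation step in the configuration pipeline. This is a hypothetical sketch; the capability/context vocabulary is invented for illustration:

```python
# Hypothetical capability/context pairings a platform refuses to deploy
PROHIBITED_COMBINATIONS = {
    ("emotion_recognition", "workplace"),       # Art.5(1)(e)
    ("emotion_recognition", "education"),       # Art.5(1)(e)
    ("realtime_biometric_id", "public_space"),  # Art.5(1)(d)
}

def validate_pipeline_config(capability: str, context: str) -> None:
    """Raise before a prohibited capability/context pairing goes live."""
    if (capability, context) in PROHIBITED_COMBINATIONS:
        raise ValueError(
            f"Configuration blocked: {capability!r} in {context!r} "
            "may violate EU AI Act Art.5"
        )
```

Calling the gate at save-time (rather than run-time) means a customer can never activate a prohibited configuration in the first place.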
Enforcement: Who, How, and When
National Market Surveillance Authorities
Each EU member state designates at least one national market surveillance authority (MSA) responsible for enforcing the AI Act within its territory. For Art.5 (prohibited practices affecting end-users in their territory):
- Germany: Bundesnetzagentur (BNetzA), with the Federal Data Protection Commissioner (BfDI) for data-related violations
- France: CNIL is expected to take a central role (formal MSA designation still in progress)
- Netherlands: Autoriteit Persoonsgegevens (AP)
- Ireland: Data Protection Commission (DPC) — critical for US tech companies headquartered in Dublin
- Italy: Garante per la protezione dei dati personali
Penalties Under Art. 99(3)
Violations of Art.5 (prohibited practices) carry:
- Up to €35,000,000 or
- 7% of total worldwide annual turnover of the preceding financial year
whichever is higher. This is the highest penalty tier in the entire EU AI Act — above the tier for most other operator obligations (€15M/3%, Art. 99(4)) and the tier for supplying incorrect, incomplete, or misleading information to authorities (€7.5M/1%, Art. 99(5)).
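The "whichever is higher" rule in code, using integer arithmetic to avoid floating-point rounding (a trivial sketch):

```python
def art5_max_fine(worldwide_turnover_eur: int) -> int:
    """Art. 99: the higher of EUR 35M or 7% of worldwide annual turnover."""
    return max(35_000_000, worldwide_turnover_eur * 7 // 100)

# For a company with EUR 1bn turnover, 7% (EUR 70M) exceeds the EUR 35M floor:
exposure = art5_max_fine(1_000_000_000)  # 70_000_000
```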
The Art.5 × GDPR Intersection
Art.5 violations frequently overlap with GDPR violations:
- Emotion recognition at work often involves biometric data → GDPR Art.9 special category data
- Social scoring typically involves profiling → GDPR Art.22
- Biometric identification in public spaces processes biometric data → GDPR Art.9(1) + Recital 51
When both AI Act Art.5 and GDPR are violated, both the national MSA and the national data protection authority can act, potentially leading to dual enforcement actions and cumulative penalties.
class Art5GDPRIntersectionChecker:
"""
Check for AI Act Art.5 × GDPR dual exposure.
"""
INTERSECTION_MAP = {
"art5_1a_subliminal": {
"gdpr_articles": ["Art.5(1)(a) (lawfulness)", "Art.6 (legal basis)"],
"dual_enforcement_risk": "HIGH",
},
"art5_1b_vulnerability": {
"gdpr_articles": ["Art.22 (profiling)", "Art.6 (legal basis)"],
"dual_enforcement_risk": "HIGH",
},
"art5_1c_social_scoring": {
"gdpr_articles": ["Art.22 (automated decisions)", "Art.5(1)(b) (purpose limitation)"],
"dual_enforcement_risk": "HIGH",
},
"art5_1d_biometric_rbi": {
"gdpr_articles": ["Art.9 (biometric data)", "Art.6 (legal basis)", "Art.35 (DPIA)"],
"dual_enforcement_risk": "CRITICAL",
},
"art5_1e_emotion_recognition": {
"gdpr_articles": ["Art.9 (biometric data if facial)", "Art.22 (profiling)", "Art.6"],
"dual_enforcement_risk": "CRITICAL",
},
"art5_1f_criminal_prediction": {
"gdpr_articles": ["Art.10 (criminal data)", "Art.22 (automated decisions)"],
"dual_enforcement_risk": "HIGH",
},
    }

    def lookup(self, prohibition_key: str) -> dict:
        """Return the GDPR exposure profile for an Art.5 prohibition key."""
        return self.INTERSECTION_MAP.get(
            prohibition_key,
            {"gdpr_articles": [], "dual_enforcement_risk": "UNKNOWN"},
        )
The sota.io Angle: EU-Native PaaS as Safe Harbor
The CLOUD Act Problem for Prohibited Practices Data
When prohibited practice violations occur, enforcement authorities will seek access to the data processed by the AI system — logs of emotion recognition inferences, biometric identification records, social scoring computations. If your infrastructure is US-based:
- CLOUD Act (18 U.S.C. § 2713) requires US cloud providers to disclose data to US law enforcement regardless of where the data is stored
- EU MSAs and data protection authorities conducting AI Act enforcement investigations may encounter US government parallel access to the same data
- The jurisdictional complexity creates discovery delays and attorney-client privilege complications
EU-native infrastructure (German/EU data centers, EU corporate entity, no US parent company) creates a single-jurisdiction enforcement environment — EU law governs exclusively, without the parallel US access problem.
Practical Safe Harbor Construction
For SaaS developers who want to be compliant by design:
- Inventory all AI capabilities in your product (first-party and third-party APIs)
- Map each capability against the six Art.5(1) prohibited categories
- Deploy technical gates that prevent prohibited configurations — especially for no-code/low-code platforms
- Update terms of service with explicit Art.5 use restrictions and customer indemnification
- Maintain an AI system inventory to evidence compliance with your Art. 26 deployer obligations
- Choose infrastructure that doesn't create cross-jurisdictional evidence exposure
For high-risk sectors (HR tech, government tech, law enforcement tech), an EU-native deployment stack removes the CLOUD Act layer from the regulatory exposure surface.
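Steps 1–2 of the list above (inventory and mapping) work best as structured data that gates and audits can consume. The entries below are hypothetical:

```python
# Hypothetical AI capability inventory mapped to Art.5(1) categories
AI_INVENTORY = [
    {"feature": "interview_video_scoring", "source": "third_party_api",
     "art5_category": "5(1)(e)"},   # emotion recognition risk
    {"feature": "churn_prediction", "source": "in_house",
     "art5_category": None},        # no prohibited category identified
]

def features_needing_gates(inventory: list) -> list:
    """Features mapped to an Art.5 category need a deployment gate (step 3)."""
    return [item["feature"] for item in inventory
            if item["art5_category"] is not None]
```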
Compliance Checklist
class Art5ComplianceChecklist:
"""
Art.5 compliance audit checklist for SaaS developers.
Run before shipping AI features to EU users.
"""
checklist = [
# Art.5(1)(a) — Subliminal manipulation
{
"id": "5a-1",
"check": "Does any AI feature operate below users' conscious awareness?",
"evidence_required": "Feature design documentation",
"risk_if_yes": "PROHIBITED unless no significant harm possible",
},
{
"id": "5a-2",
"check": "Is any AI optimization targeting subconscious cognitive biases?",
"evidence_required": "ML model feature importance, training objective",
"risk_if_yes": "Legal review required",
},
# Art.5(1)(b) — Vulnerability exploitation
{
"id": "5b-1",
"check": "Does any AI feature use vulnerability indicators (age, disability, financial distress) to increase engagement?",
"evidence_required": "Feature engineering documentation, targeting criteria",
"risk_if_yes": "PROHIBITED if significant harm likely",
},
# Art.5(1)(c) — Social scoring
{
"id": "5c-1",
"check": "Do you build AI for public authorities that aggregates behavioral data across domains?",
"evidence_required": "Data source inventory, client classification",
"risk_if_yes": "PROHIBITED if results in unrelated context detrimental treatment",
},
# Art.5(1)(d) — Real-time biometric identification
{
"id": "5d-1",
"check": "Does your product enable real-time identification of individuals in public spaces?",
"evidence_required": "Technical specification, client type (law enforcement?)",
"risk_if_yes": "PROHIBITED unless statutory exception with judicial authorization",
},
# Art.5(1)(e) — Emotion recognition
{
"id": "5e-1",
"check": "Does any AI feature infer emotional states of employees or students?",
"evidence_required": "Feature specification, deployment context",
"risk_if_yes": "PROHIBITED in workplace/educational contexts",
},
{
"id": "5e-2",
"check": "Can your platform be configured by customers to enable emotion recognition in workplace contexts?",
"evidence_required": "Platform capability inventory",
"risk_if_yes": "Deployer liability applies — implement configuration gates",
},
# Art.5(1)(f) — Criminal prediction
{
"id": "5f-1",
"check": "Do you build predictive policing AI that generates individual criminal risk scores?",
"evidence_required": "System design documentation, law enforcement client contracts",
"risk_if_yes": "PROHIBITED if based solely on profiling without specific suspicion",
},
]
def run_audit(self, responses: dict) -> dict:
findings = []
for item in self.checklist:
if responses.get(item["id"]) == "YES":
findings.append({
"check": item["check"],
"risk": item["risk_if_yes"],
"evidence_needed": item["evidence_required"],
})
return {
"total_checks": len(self.checklist),
"high_risk_findings": len(findings),
"findings": findings,
"recommendation": (
"Immediate legal review required — potential Art.5 violations"
if findings else "No immediate Art.5 risks identified — continue monitoring"
)
}
Timeline and What to Do Now
Immediate Actions (Already Required)
Article 5 became fully applicable on 2 February 2025. There is no transition period. If you have been shipping prohibited AI features to EU users since that date, you have been in violation.
This week:
- Run the compliance checklist above against your entire AI feature inventory
- Identify any third-party AI APIs you use that may enable prohibited practices
- Check your customer contracts for use cases that may involve prohibited deployments
- Brief your legal team on Art.5 obligations
This month:
- Implement technical deployment gates for any configurable AI features that could enable prohibited practices
- Update terms of service with Art.5 use restrictions
- Document your Art.5 compliance posture for the AI Act's transparency requirements
- If you serve law enforcement or government clients: conduct an Art.5(1)(c/d/f) specific audit
What National Authorities Will Look For
EU MSAs have published initial guidance indicating they will focus on:
- Employee monitoring AI (Art.5(1)(e) emotion recognition is the #1 expected enforcement target)
- Predictive law enforcement AI (Art.5(1)(f) and Art.5(1)(d) for biometric ID)
- Children-targeting AI (Art.5(1)(b) vulnerability exploitation of minors)
The first enforcement actions are expected by Q3–Q4 2025, with decisions and published outcomes by 2026.
See Also
- EU AI Act: GPAI Model Regulation Developer Guide (Art.51-56)
- EU AI Act: High-Risk AI Conformity Assessment Developer Guide
- EU AI Act: Regulatory Sandbox (Art.57-63) Developer Guide
- EU AI Liability Directive (AILD) + PLD 2024: Developer Guide
- EU NIS2 + AI Act: Double Compliance for Critical Infrastructure
sota.io is an EU-native PaaS — all compute, storage, and processing stays within the EU. No US parent company, no CLOUD Act exposure. For AI developers building EU-compliant products, single-jurisdiction infrastructure removes one layer of regulatory complexity.