2026-04-12·9 min read·sota.io team

EU AI Act Art.85: The Review Clause — What Developers Need to Know About the 2027 Regulatory Reset

Most EU AI Act compliance guides stop at Article 84. That is where Chapter IX enforcement ends — market surveillance, corrective actions, penalties, formal non-compliance. Article 85 is different: it is the built-in self-correction mechanism for the entire Regulation.

Understanding Article 85 matters for developers not because it changes your August 2026 obligations, but because it defines the regulatory window you are designing for. What gets reviewed in 2027? What might change? How does the Article 84 reporting data feed the Article 85 amendment process? And what compliance architecture decisions are most resilient across the amendment cycle?

What Article 85 Actually Says

Article 85(1) — The Three-Year Review (2027-08-02): The Commission must submit a report to the European Parliament and the Council by August 2, 2027 — three years after the Regulation's entry into force on August 2, 2024. This report evaluates, among other things:

  - whether the Annex III high-risk categories need amendment;
  - whether the Article 5 list of prohibited practices needs amendment;
  - whether the GPAI systemic-risk thresholds remain appropriate;
  - how the governance and enforcement framework is working in practice.

Article 85(2) — The Four-Year Amendment Cycle: Following the initial 2027 report, the Commission must evaluate the Regulation every four years thereafter (next: 2031). Each cycle can generate legislative proposals to amend the Act.

Article 85(3) — Delegated Acts for Annex Updates: For Annex I (Union harmonisation legislation) and Annex III (high-risk categories), the Commission has authority to update the lists via delegated acts — a faster track than full legislative amendment, with no full Parliament/Council procedure required.

The Art.84 → Art.85 Data Pipeline

The Commission's 2027 review is not conducted in a vacuum. The primary evidence base is the enforcement data collected by national Market Surveillance Authorities (MSAs) under Article 84's annual reporting obligation.

Here is the data flow:

2026-08-02: Full enforcement begins
          ↓
2026-08-03 – 2027-07-31: MSAs accumulate:
  - Art.73 serious incident reports
  - Art.79-82 investigations initiated
  - Art.99 fines levied (Tier 1/2/3)
  - Corrective actions ordered
  - EU database (Art.71) registration compliance rates
          ↓
2027 Q1: MSAs submit Art.84 annual reports to Commission
          ↓
2027-08-02: Commission submits Art.85 review report
           based on Art.84 data + market assessment

This means: the enforcement actions taken in the first 12 months of full compliance will directly shape the 2027 review recommendations. Sectors with high non-compliance rates are more likely to see tighter regulations. Sectors where compliance is smooth and incidents are rare may see relaxed Annex III scope.

For developers: your compliance posture in 2026-2027 is not just about avoiding fines — it is about shaping the regulatory environment for 2027-2031.
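To make the pipeline concrete, here is a minimal sketch of how enforcement data might be aggregated per sector. The Act does not prescribe a schema for Art.84 annual reports, so the record shape and field names below are purely illustrative assumptions:

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical record shape — the Act does not prescribe a schema for
# Art.84 annual reports; these field names are illustrative only.
@dataclass
class EnforcementRecord:
    sector: str  # e.g. "employment", "biometrics"
    kind: str    # "incident" | "investigation" | "fine"

def sector_pressure(records: list[EnforcementRecord]) -> list[tuple[str, int]]:
    """Rank sectors by enforcement volume — a rough proxy for which
    Annex III areas the 2027 review is likely to scrutinise."""
    counts = Counter(r.sector for r in records)
    return counts.most_common()

records = [
    EnforcementRecord("employment", "incident"),
    EnforcementRecord("employment", "fine"),
    EnforcementRecord("biometrics", "investigation"),
]
print(sector_pressure(records))  # "employment" ranks first
```

The point of the sketch is the direction of the data flow: per-sector enforcement volume is exactly the kind of signal the Commission will read out of the Art.84 reports.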

What Is Most Likely to Change in 2027

Article 5 Prohibited Practices

The Article 85 review of Art.5 is likely to focus on:

  1. Emotion recognition exceptions: Art.5(1)(f) prohibits emotion recognition in workplace and education settings, with exceptions for medical and safety reasons. If misuse of those exceptions is documented in MSA reports, they could be narrowed or removed.

  2. Biometric categorisation scope: Art.5(1)(g) bans biometric categorisation to infer sensitive attributes. As technical capabilities evolve, the definition of "biometric" may be clarified or expanded.

  3. Predictive policing definition: The line between prohibited AI-based criminal risk assessment and permitted statistical profiling tools is contested. The 2027 review is likely to clarify this boundary.

Developer implication: If your system is near the edge of an Art.5 prohibition (e.g., uses workplace analytics that could be characterised as emotion inference), the 2027 review creates regulatory uncertainty for systems designed in 2025-2026. Designing with clear distance from Art.5 categories reduces your exposure to reclassification.

Annex III High-Risk Category Expansion

This is the highest-probability change. Annex III currently covers 8 categories with specific sub-categories. The review is likely to examine:

  - recommender systems deployed in critical sectors;
  - chatbots operating in high-stakes contexts;
  - AI-assisted hiring tools at the edges of the current employment category;
  - foundation models embedded in regulated-sector products.

If Annex III expands, systems that are currently general-purpose AI (not high-risk) may become high-risk and require full conformity assessment, risk management systems, and technical documentation.

Developer implication: Document your current Annex III scope assessment. If the 2027 review reclassifies your product as high-risk, you will need compliance documentation showing: (1) what your 2026 classification was, (2) what Annex III basis you used, and (3) what interim measures you took when the reclassification occurred.
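A lightweight way to keep that audit trail is a dated assessment record. The structure below is a hypothetical sketch — the class and field names are not mandated by the Act; the goal is simply an answerable record for the three questions above:

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical record structure — names are illustrative, not mandated
# by the Act. It captures: the 2026 classification, the Annex III basis
# used, and any interim measures taken after a scope change.
@dataclass
class AnnexIIIScopeAssessment:
    system_name: str
    assessed_on: date
    classification: str           # e.g. "not high-risk"
    annex_iii_basis: str          # which category/sub-category was analysed
    interim_measures: list[str] = field(default_factory=list)

    def record_interim_measure(self, measure: str) -> None:
        """Log a step taken when a reclassification occurs."""
        self.interim_measures.append(measure)

assessment = AnnexIIIScopeAssessment(
    system_name="cv-screening-assistant",
    assessed_on=date(2026, 6, 1),
    classification="not high-risk",
    annex_iii_basis="Annex III(4) employment — reviewed and excluded",
)
```

Versioning these records (one per review, never overwritten) gives you the before/after trail an MSA will ask for if the 2027 review moves the boundary.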

GPAI Systemic Risk Threshold

The current threshold for GPAI systemic risk classification is 10^25 FLOP (floating point operations used in training). This is codified in Article 51(2). The 2027 review will evaluate:

  - whether 10^25 FLOP remains a meaningful proxy for capability as training efficiency improves;
  - whether the threshold should be lowered, raised, or supplemented with capability-based criteria.

If the threshold is lowered, models currently below 10^25 FLOP become subject to systemic risk obligations (model evaluation, adversarial testing, incident reporting, and cybersecurity protections under Article 55).

Developer implication: If you are building or fine-tuning models in the 10^23-10^25 FLOP range (increasingly accessible with 2026 hardware), monitor this threshold closely. The Commission can adjust it by delegated act, outside the full four-year amendment cycle.
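A quick way to track your headroom is the common 6 × N × D approximation for training compute (parameters × training tokens). Note the heuristic itself is not part of the Act — only the 10^25 FLOP threshold is; this is a back-of-envelope sketch:

```python
# Art.51(2) threshold — the only number here taken from the Act.
SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25

def estimated_training_flop(params: float, tokens: float) -> float:
    """Rough estimate via the common 6 * N * D heuristic
    (not an official compliance methodology)."""
    return 6.0 * params * tokens

def threshold_headroom(params: float, tokens: float) -> float:
    """Fraction of the Art.51 threshold a training run consumes."""
    return estimated_training_flop(params, tokens) / SYSTEMIC_RISK_THRESHOLD_FLOP

# e.g. 70B parameters on 2T tokens ≈ 8.4e23 FLOP — roughly 8% of the threshold
ratio = threshold_headroom(70e9, 2e12)
print(f"{ratio:.3f} of the 10^25 FLOP threshold")
```

If a planned run lands within an order of magnitude of the threshold, treat a future downward shift as a live scenario rather than an edge case.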

The CLOUD Act × Art.85 Problem for Documentation

Here is a specific risk that emerges from the Art.84 → Art.85 pipeline:

Under Article 84, MSAs collect technical documentation from providers during investigations. This documentation — conformity assessments, risk management records, training data governance logs — must be retained for 10 years under Article 18(1).

If that documentation is stored on US-incorporated cloud infrastructure:

  - US authorities can compel the provider to disclose it under the CLOUD Act, regardless of where the servers physically sit;
  - the same records become subject to two overlapping legal orders — EU MSA access requests and US subpoenas — with no single forum to resolve a conflict between them.

EU-native storage eliminates this risk. Documentation stored on sota.io infrastructure — incorporated in Germany, outside CLOUD Act jurisdiction — is subject only to EU MSA access requests, not parallel US subpoenas.

Designing for the Amendment Cycle

What Makes Compliance Architecture Amendment-Resilient

Design choice → amendment risk → resilient alternative:

  1. Hard-coded Annex III exclusion ("not high-risk, never will be")
     Risk: High — reclassification breaks compliance claims
     Resilient alternative: dynamic scope assessment reviewed against Annex updates

  2. Conformity documentation stored on US cloud
     Risk: Medium — CLOUD Act exposure during Art.84 reporting
     Resilient alternative: EU-native storage, single legal order

  3. Art.5 edge cases exploited
     Risk: High — prohibition narrowing removes the exception
     Resilient alternative: clear design distance from Art.5 categories

  4. GPAI model at 8×10^24 FLOP (just below the threshold)
     Risk: High — a threshold shift triggers systemic risk obligations
     Resilient alternative: treat as potentially systemic risk; begin voluntary compliance

The Practical Calendar

2024-08-02  EU AI Act entered into force
2025-02-02  Art.5 prohibitions applied
2025-08-02  GPAI obligations applied
2026-08-02  Full enforcement (Art.9-15 high-risk obligations)
            ↓ 12 months enforcement data collection
2027-08-02  Art.85 Commission review report due
            ↓ 6-18 months legislative process
2028-2029   Potential Regulation amendments effective
            ↓ 2-year transition where specified
2031-08-02  Second Art.85 four-year review

For a developer building a high-risk AI product in 2026: your compliance window before potential amendment is approximately 24-36 months. This is long enough to build a sustainable compliance program, but not long enough to treat compliance as a one-time certification.

Python: Monitoring the Art.85 Review Pipeline

from datetime import date
from dataclasses import dataclass
from typing import Literal

@dataclass
class Art85ReviewCalendar:
    """Track EU AI Act Art.85 review milestones."""
    
    regulation_entry_force: date = date(2024, 8, 2)
    first_review_due: date = date(2027, 8, 2)  # Art.85(1): 3 years
    second_review_due: date = date(2031, 8, 2)  # Art.85(2): every 4 years
    
    def days_to_first_review(self) -> int:
        return (self.first_review_due - date.today()).days
    
    def enforcement_data_window(self) -> tuple[date, date]:
        """Art.84 data period feeding Art.85 first review."""
        return (date(2026, 8, 2), date(2027, 7, 31))
    
    def is_in_critical_enforcement_window(self) -> bool:
        """Returns True if today is in the data period the Commission will evaluate."""
        start, end = self.enforcement_data_window()
        return start <= date.today() <= end
    
    def review_readiness_report(self) -> dict:
        return {
            "first_review_date": self.first_review_due.isoformat(),
            "days_to_review": self.days_to_first_review(),
            "critical_window_active": self.is_in_critical_enforcement_window(),
            "enforcement_data_period": {
                "start": self.enforcement_data_window()[0].isoformat(),
                "end": self.enforcement_data_window()[1].isoformat(),
            },
            "likely_amendment_effective": "2028-2029 (estimated)",
        }


class Art85ScopeRiskMonitor:
    """Assess reclassification risk from Art.85 Annex III review."""
    
    HIGH_RISK_EXPANSION_SIGNALS = [
        "recommender_system_critical_sector",
        "chatbot_high_stakes_context",
        "hiring_tool_ai_assisted",
        "foundation_model_regulated_sector",
    ]
    
    ART5_EDGE_CASES = [
        "workplace_analytics_emotion_adjacent",
        "biometric_adjacent_categorisation",
        "risk_scoring_criminal_context",
        "subliminal_adjacent_persuasion",
    ]
    
    def __init__(self, system_features: list[str]):
        self.features = set(system_features)
    
    def scope_expansion_risk(self) -> Literal["LOW", "MEDIUM", "HIGH"]:
        matches = self.features.intersection(self.HIGH_RISK_EXPANSION_SIGNALS)
        if len(matches) >= 2:
            return "HIGH"
        elif len(matches) == 1:
            return "MEDIUM"
        return "LOW"
    
    def art5_reclassification_risk(self) -> Literal["LOW", "MEDIUM", "HIGH"]:
        matches = self.features.intersection(self.ART5_EDGE_CASES)
        if matches:
            return "HIGH"
        return "LOW"
    
    def amendment_readiness_score(self) -> dict:
        return {
            "scope_expansion_risk": self.scope_expansion_risk(),
            "art5_reclassification_risk": self.art5_reclassification_risk(),
            "recommendation": self._generate_recommendation(),
        }
    
    def _generate_recommendation(self) -> str:
        if self.scope_expansion_risk() == "HIGH":
            return "Begin voluntary high-risk compliance now — scope expansion likely in 2027 review"
        if self.art5_reclassification_risk() == "HIGH":
            return "Redesign to create clear distance from Art.5 edge cases before 2026-08-02"
        return "Monitor Art.85 review outcomes — no immediate reclassification risk"


# Usage
calendar = Art85ReviewCalendar()
report = calendar.review_readiness_report()
print(f"First review: {report['first_review_date']}")
print(f"Days to review: {report['days_to_review']}")

monitor = Art85ScopeRiskMonitor(
    system_features=["hiring_tool_ai_assisted", "chatbot_high_stakes_context"]
)
risk = monitor.amendment_readiness_score()
print(f"Scope expansion risk: {risk['scope_expansion_risk']}")
print(f"Recommendation: {risk['recommendation']}")

Developer Checklist: Preparing for the Art.85 Review Cycle

Documentation (survivable across amendments):

  - Keep a dated Annex III scope assessment recording the specific category basis for each classification decision.
  - Retain conformity and risk-management records for the full 10-year period in a single legal order.

Architecture (adaptable to Annex III expansion):

  - Avoid hard-coding a "not high-risk" assumption; design so high-risk controls (logging, human oversight, documentation generation) can be enabled without re-architecting.
  - Keep compliance documentation on EU-native storage.

Monitoring (tracking the Art.85 process):

  - Track Commission delegated acts amending Annex I, Annex III, and the GPAI thresholds.
  - Watch for the MSA annual reports in early 2027 — they signal where the review will land.

What Art.85 Does Not Change Before 2027

One critical point: Article 85 does not suspend current obligations. The August 2026 deadline for high-risk AI compliance is fixed. Article 85's first review comes 12 months after full enforcement begins. During those 12 months:

  - the Art.9-15 high-risk obligations apply in full;
  - MSAs investigate and fine under Art.99;
  - every enforcement action feeds the Art.84 reports that will shape the 2027 review.

The review does not create a grace period. It creates a regulatory update mechanism that developers should monitor — while complying fully with current obligations.


EU-native infrastructure that stays compliant across regulatory amendment cycles. Deploy on sota.io


See also: