EU AI Act Art.26 Obligations for Deployers: Developer Guide (High-Risk AI 2026)
EU AI Act Article 26 is the deployer compliance framework for high-risk AI systems. While the preceding articles in Chapter III Section 2 (Art.9–Art.22) primarily govern providers — the entities that build and place AI systems on the market — Art.26 defines what organisations that use high-risk AI systems under their own authority must do. For most engineering teams, this is the more immediately practical article: if your company integrates a third-party AI API for hiring decisions, credit scoring, medical diagnostics, or access to essential services, you are a deployer and Art.26 applies to you from August 2026.
Art.26 contains nine numbered paragraphs (Art.26(1)–(9)), each creating a distinct compliance obligation. The obligations range from procedural (follow the instructions for use), to notification-based (inform workers and affected natural persons), to architectural (build monitoring infrastructure, maintain operational logs, assess substantial modifications), to governance-level (conduct Fundamental Rights Impact Assessments before deploying public-authority AI). Understanding which paragraphs apply to your specific deployment context is the starting point for Art.26 compliance.
This guide covers Art.26(1)–(9) in full, the Art.26 × Art.12/13/14/27 intersection matrix, the critical boundary between deployer and provider (substantial modification), Python implementations of DeployerComplianceRecord, Art26MonitoringTracker, and SubstantialModificationAssessor, CLOUD Act jurisdiction risk for Art.26(7) operational logs, and the 40-item Art.26 compliance checklist.
Art.26 in the High-Risk AI Compliance Chain
Art.26 occupies the deployer obligation layer of Chapter III Section 2:
| Article | Obligation Layer | Primary Addressee |
|---|---|---|
| Art.9 | Risk management system | Provider |
| Art.10 | Training data governance | Provider |
| Art.11 | Technical documentation | Provider |
| Art.12 | Automatic event logging | Provider (system design) |
| Art.13 | Instructions for use | Provider (must produce) |
| Art.14 | Human oversight | Provider (design) + Deployer (implement) |
| Art.15 | Accuracy and robustness | Provider |
| Art.17 | Quality management system | Provider |
| Art.20 | Corrective actions | Provider (primary) + Deployer (notification) |
| Art.21 | MSA cooperation | All operators including Deployers |
| Art.22 | EU database registration | Provider + Deployer (public authorities) |
| Art.26 | Deployer obligations | Deployer |
| Art.27 | Fundamental rights impact assessment | Deployer (specific contexts) |
Art.26 is not a standalone document — it references and builds on provider obligations. Art.26(1) references Art.13 instructions for use. Art.26(4) connects to Art.14 human oversight monitoring. Art.26(7) connects to Art.12 logging. Art.26(8) triggers Art.27 FRIA. Understanding Art.26 requires understanding these upstream articles.
Who Is a Deployer Under Art.3(4)?
Before working through Art.26 obligations, the threshold question is whether your organisation is a "deployer" within the AI Act definition.
AI Act Art.3(4) definition: A deployer is "a natural or legal person, public authority, agency or other body using an AI system under its own authority, except where the AI system is used in the course of a personal non-professional activity."
Key elements:
- "Using an AI system": Operational use, not development. If you call an API to make a decision about a natural person, you are using it.
- "Under its own authority": You decide when, how, and in which context to deploy the system. A user who simply interacts with a product you built is not a deployer — you are.
- "Except personal non-professional": Consumer use of a chatbot is excluded. B2B use, B2C product deployment, and internal enterprise use are all covered.
Practical boundary: If your engineering team integrates a third-party AI system into a product that makes decisions about natural persons in an Annex III context (employment, credit, education, critical infrastructure, essential services, law enforcement, migration, administration of justice, democratic processes), you are a deployer of a high-risk AI system and Art.26 applies to you.
The provider/deployer distinction:
| Activity | Role |
|---|---|
| You build and train the AI model and place it on the market | Provider |
| You use a third-party AI API in your product's decision logic | Deployer |
| You fine-tune a provider's model on your data without substantially modifying its intended purpose | Deployer (Art.26 primary role) |
| You fine-tune and change the Annex III intended purpose | Provider (Art.26(6) substantial modification) |
Art.26(1): Instructions for Use Compliance
The obligation: Deployers shall use high-risk AI systems in accordance with the instructions for use produced by the provider under Art.13.
What this means in practice:
Art.13 requires providers to deliver instructions for use covering mandatory content elements including system identity and characteristics, intended purpose and limitations, performance metrics, human oversight requirements, technical prerequisites, and change notification procedures. Art.26(1) creates the corresponding deployer obligation: you must actually follow those instructions.
Compliance implications:
- Scope restriction: If the provider's Art.13 instructions specify "intended for use in EU employment contexts only," deploying the system for credit scoring is an Art.26(1) violation, even if the system technically works for credit scoring.
- Technical prerequisite compliance: If the instructions specify minimum hardware, network latency, or data input quality requirements, deployers must meet those prerequisites. Running a high-risk AI system on infrastructure that falls below the provider's specified technical prerequisites is non-compliance.
- Human oversight implementation: Art.13(3)(e) requires providers to specify human oversight measures. Art.26(1) obligates deployers to implement them. "The instructions mentioned it but we didn't build it" is an Art.26(1) violation.
- Prohibited uses: Art.13 instructions commonly specify contexts where the system must not be used. These prohibitions become Art.26(1) obligations. Document your review of the instructions for use and how your deployment satisfies them.
What deployers should build:
- An instructions-for-use compliance register mapping each Art.13 provision to a specific implementation decision in your system
- A prohibited-use documentation record confirming the deployment does not fall into any provider-specified excluded context
- A technical prerequisite audit confirming your infrastructure meets the provider's specifications
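A minimal sketch of such a register, assuming hypothetical field names (nothing here is prescribed by the Act itself):

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class IfUProvision:
    """One provision extracted from the provider's Art.13 instructions for use."""
    provision_ref: str                              # e.g. "intended purpose", "technical prerequisites"
    provision_text: str                             # verbatim text from the instructions
    implementation_decision: Optional[str] = None   # how this deployment satisfies it
    evidence_link: Optional[str] = None             # ticket, document, or config reference

@dataclass
class IfUComplianceRegister:
    """Art.26(1) register mapping each Art.13 provision to a deployment decision."""
    system_id: str
    ifu_version: str
    provisions: list[IfUProvision] = field(default_factory=list)

    def unmapped_provisions(self) -> list[str]:
        return [p.provision_ref for p in self.provisions if not p.implementation_decision]

    def is_complete(self) -> bool:
        return bool(self.provisions) and not self.unmapped_provisions()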
Art.26(2): Worker Information Obligation
The obligation: Deployers using high-risk AI systems in workplace contexts must inform workers' representatives and the affected workers themselves before deployment. The obligation covers systems that monitor workers and systems used for decisions affecting working conditions: recruitment, promotion, dismissal, performance assessment, and task allocation.
Scope: Art.26(2) applies to Annex III Category 4 employment and workers management AI contexts. The affected persons are workers (employed under labour contracts) and their representatives (trade unions, works councils, etc.).
What must be disclosed:
- That a high-risk AI system will be used in a context affecting them
- The nature of the AI system's role in decisions
- The type of decisions it supports or takes
- The human oversight measures in place
- The right to explanation under Art.86 for significantly affected persons
Practical notes:
- "Before deployment" means before the system is put into operational use, not before purchase or integration
- Works council consultation may be required under national labour law independent of Art.26(2) — particularly in Germany (Betriebsverfassungsgesetz §87), France (Code du travail L2312-38), and the Netherlands (Wet op de Ondernemingsraden)
- Art.26(2) is not a consent requirement — it is an information requirement. Workers do not have a veto right under Art.26(2) alone
Documentation requirement: Maintain records of the notification, timing, channel, and content. MSA investigations under Art.21 may request evidence of Art.26(2) compliance.
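A lightweight evidence record for this requirement might look like the following sketch (field names are illustrative):

from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class WorkerNotificationRecord:
    """Evidence of Art.26(2) notification, retained for potential MSA requests."""
    deployment_id: str
    notified_party: str                        # e.g. "works council", "individual workers"
    channel: str                               # e.g. "email", "works council meeting"
    content_ref: str                           # reference to the notification text actually sent
    sent_at: datetime
    acknowledged_at: Optional[datetime] = None

    def predates_deployment(self, deployment_date: datetime) -> bool:
        # Art.26(2) requires notification before operational use, not before purchase
        return self.sent_at < deployment_date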
Art.26(3): Natural Person Information Obligation
The obligation: Deployers using high-risk AI systems that make or assist decisions about natural persons (outside employment) must notify those persons that they are subject to an AI-assisted decision.
Scope: This applies to Annex III contexts including credit scoring, education and training, access to essential services, law enforcement, migration and asylum, and administration of justice. In practice: if your AI system decides whether a person gets a loan, is admitted to a programme, receives benefits, or is processed in a law enforcement context, Art.26(3) notification is required.
What must be disclosed:
- That an AI system will be used in the context
- The nature of the AI system's role in the decision
- The right to an explanation for significantly affected decisions (Art.86)
Key intersection — GDPR Art.22:
Where the AI decision is a "solely automated decision with significant legal or similar effects" under GDPR Art.22, GDPR creates rights to human review, to object, and to obtain an explanation, independent of Art.26(3). The two regimes overlap but are not identical:
| Requirement | GDPR Art.22 | AI Act Art.26(3) |
|---|---|---|
| Scope | Solely automated, significant effects | High-risk AI systems with decisions about persons |
| Safeguard type | Right to human review + objection | Information + right to explanation |
| Timing | At decision point | Before or at the start of the interaction |
| Enforcer | Data protection authorities | Market surveillance authorities |
Deployers subject to both should produce a single combined disclosure that satisfies both regimes.
Practical implementation:
- Embed disclosure in the user-facing flow before AI-assisted decisions are triggered
- Use clear, non-technical language (AI Act Art.50 plain language principle applies by analogy)
- Document the disclosure template, timing, and delivery channel
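A minimal sketch of a disclosure payload covering these elements (illustrative only; production disclosure wording needs legal review):

def build_art26_3_disclosure(system_role: str, decision_type: str) -> dict:
    """Assemble the Art.26(3) disclosure elements for a user-facing flow."""
    return {
        "ai_system_in_use": True,
        "system_role": system_role,        # e.g. "recommends a credit decision reviewed by staff"
        "decision_type": decision_type,    # e.g. "consumer credit application"
        "explanation_right": (
            "You have the right to an explanation of decisions that "
            "significantly affect you (AI Act Art.86)."
        ),
    }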
Art.26(4): Monitoring and Feedback Obligation
The obligation: Deployers must monitor the operation of the high-risk AI system based on the provider's instructions for use and report any risks to health, safety, or fundamental rights, or any serious incidents under Art.73, to the provider, importer, or distributor, as appropriate.
This is the deployer's operational watchdog obligation. While providers bear responsibility for system design, deployers are the entities running the system in the real world. Art.26(4) creates a formal obligation to monitor for divergence between intended and actual behaviour and to report upward.
What "monitoring" means operationally:
- Performance drift detection: Is the system's accuracy or output distribution changing over time in ways that exceed the provider's specified bounds?
- Out-of-distribution input detection: Are inputs being provided to the system that fall outside the data distribution described in Art.13 documentation?
- Anomaly flagging: Are outputs that trigger Art.14 human oversight being captured and reviewed?
- Serious incident identification: Are system outputs causing or contributing to health, safety, or fundamental rights harms meeting the Art.3(49) serious incident definition?
Upstream reporting cascade:
When a deployer identifies a risk or serious incident under Art.26(4), the notification must flow to the appropriate party:
Deployer identifies risk/serious incident
↓
Is there a direct contractual relationship with the provider?
YES → Notify provider directly (Art.26(4))
NO → Notify importer or distributor in the supply chain
↓
Provider assesses under Art.20 (corrective action) + Art.73 (serious incident reporting to MSA)
Deployer obligation vs. provider obligation: Deployers are not required to file direct serious incident reports with MSAs under Art.73 — that is a provider obligation under Art.73(1). However, the Art.26(4) reporting obligation to the provider activates the provider's Art.73 reporting chain. Deployers should maintain records of Art.26(4) reports sent, including timestamp, channel, and provider acknowledgment.
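A sketch of the routing step, assuming hypothetical notification hooks standing in for your real channels:

from datetime import datetime

def route_art26_4_report(has_provider_contract: bool,
                         description: str,
                         serious_incident: bool) -> dict:
    """Route an Art.26(4) report up the supply chain and produce the evidence record.

    Whether the report goes to the provider or to the importer/distributor
    depends on the contractual relationship, per the cascade above.
    """
    recipient = "provider" if has_provider_contract else "importer_or_distributor"
    return {
        "recipient": recipient,
        "description": description,
        "serious_incident": serious_incident,   # activates the provider's Art.73 chain
        "sent_at": datetime.utcnow().isoformat(),
        "provider_acknowledged": False,          # update on acknowledgment and retain
    }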
Art.26(5): Bias Monitoring Assistance
The obligation: Deployers who have relevant data access shall assist the provider in monitoring for bias. Where the deployer has access to data that is relevant to the AI system's training data requirements under Art.10, they shall provide that data upon request by the provider for bias examination purposes.
Context: This is a collaborative obligation. Providers bear the primary data governance responsibility under Art.10, but deployers operating AI in specific contexts often have better access to real-world outcome data — which is the most valuable input for bias detection. Art.26(5) formalises this as an obligation when relevant data is available.
When this triggers:
- You have outcome data (did the AI-recommended decision prove correct in your operational context?)
- You have demographic data linked to AI outcomes (subject to GDPR constraints)
- The provider requests this data for Art.10(4) bias examination purposes
Data protection intersection: Sharing person-linked outcome data with providers for bias monitoring may require GDPR legal basis (typically Art.6(1)(c) legal obligation or Art.6(1)(f) legitimate interests). Deployers should include a data-sharing provision for Art.26(5) purposes in provider contracts and Data Processing Agreements.
Art.26(6): Substantial Modification → Deployer Becomes Provider
This is the most architecturally significant Art.26 provision for engineering teams.
The obligation: Where a deployer makes a substantial modification to a high-risk AI system, the deployer shall be considered the provider for the purposes of Chapter III Section 2 and shall be subject to the provider obligations under Art.16 (not just deployer obligations under Art.26).
What is a "substantial modification"?
AI Act Art.3(23) defines a substantial modification as a change to the AI system after its placing on the market or putting into service that is not foreseen or planned in the provider's initial conformity assessment and that either affects the system's compliance with the Chapter III Section 2 requirements or changes the intended purpose for which the system has been assessed.
Practical bright-line tests:
| Modification Type | Substantial? | Result |
|---|---|---|
| Fine-tuning on domain-specific data within the same intended purpose | No | Deployer obligations continue |
| Fine-tuning that changes the system's performance metrics beyond the provider's declared bounds | Yes | Deployer → new Provider |
| Changing the model's intended purpose (e.g., from employment screening to credit scoring) | Yes | Deployer → new Provider |
| Extending the system to new Annex III categories | Yes | Deployer → new Provider |
| Integration with additional data sources that alter system behaviour | Context-dependent | Assess case by case |
| UI customisation, threshold changes within provider's allowed range | No | Deployer obligations continue |
| Prompt engineering within provider's system prompt specifications | No | Deployer obligations continue |
If the substantial modification threshold is crossed, the former deployer must:
- Conduct a full conformity assessment under Art.43
- Prepare technical documentation under Art.11 (Annex IV)
- Draw up a declaration of conformity under Art.48
- Affix CE marking under Art.49
- Register in the EU database under Art.22
- Implement all provider obligations under Art.16
Engineering implication: Teams building on top of AI APIs must assess every architectural change for substantial modification risk. Changes that seem minor (new data inputs, output calibration, expanded scope) can cross the threshold and transform compliance obligations entirely.
Art.26(7): Operational Logging Obligation
The obligation: Deployers shall keep the logs automatically generated by the high-risk AI system for a period of at least 6 months unless otherwise provided under applicable EU or national law. This obligation applies where the deployer has control over the logs.
This is the deployer's direct logging obligation, complementing the provider's Art.12 logging design obligation. The distinction:
| Obligation | Who Bears It | What It Covers |
|---|---|---|
| Art.12 | Provider (system design) | Automatic event logging built into the system |
| Art.26(7) | Deployer (operational retention) | Retention of logs the system generates |
What "logs automatically generated" means:
Logs generated by the AI system itself under Art.12 — inference events, human oversight triggers, confidence scores, input-output pairs, override events — as distinct from application-level logs generated by the deployer's own infrastructure. Both must be retained, but Art.26(7) specifically refers to system-generated logs.
Retention timeline:
- Minimum: 6 months
- Law enforcement context: Subject to applicable EU/national law — Directive 2016/680 for law enforcement AI may specify different retention periods
- Biometric systems: GDPR Art.9 special category data restrictions apply to biometric log content — retention must be justified under Art.9(2) legal basis
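A small helper for computing the earliest permissible deletion date under these rules (a sketch; a production system should use calendar-aware date arithmetic):

from datetime import datetime, timedelta
from typing import Optional

def art26_7_retention_deadline(log_created_at: datetime,
                               override_months: Optional[int] = None) -> datetime:
    """Earliest date an Art.26(7) log may be deleted.

    The AI Act floor is six months; EU or national law (e.g. for law
    enforcement deployments under Directive 2016/680) may impose a
    different period, passed here as override_months.
    """
    months = override_months if override_months is not None else 6
    return log_created_at + timedelta(days=30 * months)  # approximates months as 30-day blocks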
CLOUD Act intersection — Critical for EU-native compliance:
Art.26(7) logs are operational evidence. If a deployer stores these logs on US-headquartered cloud infrastructure, those logs are subject to CLOUD Act compellability by US government agencies independent of the EU AI Act's confidentiality framework. This creates:
- A parallel access path for investigation-relevant operational data
- A potential conflict between Art.78 EU AI Act confidentiality protections and CLOUD Act compelled disclosure
- An audit risk: MSA investigations may request Art.26(7) logs simultaneously with CLOUD Act production orders to the infrastructure provider
Deployers storing Art.26(7) logs on EU-native infrastructure operate under a single EU regulatory regime for those logs, eliminating the parallel-access problem.
Art.26(8): FRIA Trigger for Public Authorities
The obligation: Deployers that are public authorities, or private entities acting on behalf of public authorities, who deploy high-risk AI systems in the contexts listed in Art.27 must conduct a Fundamental Rights Impact Assessment (FRIA) under Art.27 before putting the system into service.
Who this applies to:
- Public authorities at any level (national, regional, municipal)
- Private entities performing public functions (contracted service providers to public authorities, public hospitals, etc.)
The Annex III contexts triggering FRIA:
Art.27(1) lists the specific Annex III categories requiring FRIA:
- Annex III Category 1: Biometric systems
- Annex III Category 4: Employment and workers management
- Annex III Category 5: Access to essential private services and public services and benefits
- Annex III Category 6: Law enforcement
- Annex III Category 7: Migration, asylum, and border control management
- Annex III Category 8: Administration of justice and democratic processes
FRIA content requirements (Art.27(1)):
- Description of the intended use of the high-risk AI system
- Geographic and temporal scope of deployment
- Assessment of risks to fundamental rights based on Art.9 risk management documentation
- Identification of affected natural persons and groups at elevated risk
- Description of the measures to address the identified risks
- Description of human oversight measures under Art.14
- Reference to the national supervisory body if applicable
Timing: The FRIA must be completed and registered before deployment — not after a trial period. "We'll assess the impact once we see how it works" is not Art.27 compliance.
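A record structure mirroring the Art.27(1) content list might look like this sketch (field names are illustrative):

from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class FRIARecord:
    """Art.27 FRIA record for an Art.26(8) public-authority deployment."""
    fria_id: str
    system_id: str
    intended_use_description: str
    geographic_temporal_scope: str
    fundamental_rights_risks: list[str] = field(default_factory=list)
    elevated_risk_groups: list[str] = field(default_factory=list)
    mitigation_measures: list[str] = field(default_factory=list)
    human_oversight_description: str = ""
    supervisory_body_reference: Optional[str] = None
    completed_at: Optional[datetime] = None

    def authorises_deployment(self, planned_start: datetime) -> bool:
        # Art.26(8): the FRIA must be complete before putting the system into service
        content_complete = all([
            self.intended_use_description,
            self.fundamental_rights_risks,
            self.mitigation_measures,
            self.human_oversight_description,
        ])
        return content_complete and self.completed_at is not None and self.completed_at <= planned_start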
Art.26(9): Law Enforcement Derogation
The obligation: Deployers who are competent authorities for law enforcement purposes may be subject to specific national law derogations from Art.26 obligations, consistent with EU law (in particular Directive 2016/680 for law enforcement data processing).
Practical note for non-law-enforcement deployers: Art.26(9) is not available to private sector deployers or public authorities outside law enforcement functions. All Art.26(1)–(8) obligations apply in full.
Art.26 × Art.27 FRIA: Full Framework
For public authority deployers, Art.26(8) and Art.27 work as a combined FRIA framework:
High-Risk AI Deployment by Public Authority (or private entity for public function)
↓
Is the use case in Art.27(1) Annex III list?
YES → Art.27 FRIA required before deployment
NO → Art.27 FRIA not required; Art.26(1)-(7) still apply
↓
Art.27 FRIA Process:
1. Document intended use, scope, affected persons
2. Risk assessment referencing provider's Art.9 documentation
3. Identify elevated-risk groups (EU Charter of Fundamental Rights)
4. Map mitigation measures
5. Document human oversight (Art.14 implementation)
6. Notify national supervisory body if required by national law
↓
Register FRIA completion with Art.22 EU database
(Deployer registration under Art.22(3) for public authority deployers)
Python Implementation
DeployerComplianceRecord
from dataclasses import dataclass
from datetime import datetime
from typing import Optional
from enum import Enum
class AnnexIIICategory(Enum):
BIOMETRIC = "biometric_identification"
CRITICAL_INFRASTRUCTURE = "critical_infrastructure"
EDUCATION = "education_training"
EMPLOYMENT = "employment_workers_management"
ESSENTIAL_SERVICES = "essential_private_public_services"
LAW_ENFORCEMENT = "law_enforcement"
MIGRATION = "migration_asylum_border"
JUSTICE = "administration_of_justice"
class DeployerType(Enum):
PRIVATE = "private"
PUBLIC_AUTHORITY = "public_authority"
PRIVATE_FOR_PUBLIC = "private_acting_for_public"
class Art26ComplianceStatus(Enum):
PENDING = "pending"
IN_PROGRESS = "in_progress"
COMPLIANT = "compliant"
NON_COMPLIANT = "non_compliant"
@dataclass
class DeployerComplianceRecord:
"""
EU AI Act Art.26 compliance record for a deployer of a high-risk AI system.
Tracks all nine Art.26 obligations plus the Art.27 FRIA requirement.
"""
deployment_id: str
system_id: str # References provider's AI system
provider_name: str
provider_eu_database_id: str # Art.22 registration ID
deployer_type: DeployerType
annex_iii_categories: list[AnnexIIICategory]
deployment_date: datetime
# Art.26(1) — Instructions for use compliance
ifu_reviewed: bool = False
ifu_review_date: Optional[datetime] = None
ifu_prohibited_uses_cleared: bool = False
ifu_technical_prerequisites_met: bool = False
ifu_human_oversight_implemented: bool = False
# Art.26(2) — Worker information (employment contexts)
worker_notification_required: bool = False
worker_notification_completed: bool = False
worker_notification_date: Optional[datetime] = None
works_council_consulted: bool = False
# Art.26(3) — Natural person information
person_notification_required: bool = False
person_notification_template_approved: bool = False
notification_channel: Optional[str] = None
# Art.26(4) — Monitoring and feedback
monitoring_infrastructure_deployed: bool = False
monitoring_last_review: Optional[datetime] = None
reports_sent_to_provider: int = 0
# Art.26(5) — Bias monitoring assistance
bias_data_available: bool = False
bias_data_sharing_agreement_signed: bool = False
# Art.26(6) — Substantial modification assessment
substantial_modification_assessed: bool = False
substantial_modification_detected: bool = False
substantial_modification_date: Optional[datetime] = None
became_provider: bool = False
# Art.26(7) — Operational log retention
log_retention_infrastructure: Optional[str] = None
log_retention_months: int = 6
logs_stored_eu_native: bool = False
# Art.26(8) — FRIA (public authorities)
fria_required: bool = False
fria_completed: bool = False
fria_completion_date: Optional[datetime] = None
fria_supervisory_body_notified: bool = False
def art26_compliance_score(self) -> dict:
checks = {
"ifu_reviewed": self.ifu_reviewed,
"ifu_prohibited_uses_cleared": self.ifu_prohibited_uses_cleared,
"ifu_prerequisites_met": self.ifu_technical_prerequisites_met,
"ifu_oversight_implemented": self.ifu_human_oversight_implemented,
"worker_notification_ok": (
not self.worker_notification_required or self.worker_notification_completed
),
"person_notification_ok": (
not self.person_notification_required or self.person_notification_template_approved
),
"monitoring_deployed": self.monitoring_infrastructure_deployed,
"substantial_modification_assessed": self.substantial_modification_assessed,
"logs_retained": self.log_retention_infrastructure is not None,
"fria_ok": (not self.fria_required or self.fria_completed),
}
passed = sum(checks.values())
return {
"score": passed,
"total": len(checks),
"percentage": round(passed / len(checks) * 100, 1),
"failed": [k for k, v in checks.items() if not v],
}
def is_deployment_authorized(self) -> bool:
"""Returns True only when all pre-deployment obligations are satisfied."""
if self.fria_required and not self.fria_completed:
return False
if not self.ifu_reviewed or not self.ifu_prohibited_uses_cleared:
return False
if self.worker_notification_required and not self.worker_notification_completed:
return False
return True
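A quick usage sketch (all identifiers and values are illustrative):

record = DeployerComplianceRecord(
    deployment_id="dep-001",
    system_id="sys-hr-screen-v2",
    provider_name="ExampleVendor",
    provider_eu_database_id="EU-DB-12345",
    deployer_type=DeployerType.PRIVATE,
    annex_iii_categories=[AnnexIIICategory.EMPLOYMENT],
    deployment_date=datetime(2026, 8, 1),
    ifu_reviewed=True,
    ifu_prohibited_uses_cleared=True,
    worker_notification_required=True,  # employment context triggers Art.26(2)
)

print(record.art26_compliance_score())    # lists the failed checks
print(record.is_deployment_authorized())  # False until workers are notified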
Art26MonitoringTracker
from dataclasses import dataclass
from datetime import datetime
from typing import Optional
from enum import Enum
class MonitoringEventType(Enum):
PERFORMANCE_DRIFT = "performance_drift"
OOD_INPUT = "out_of_distribution_input"
HUMAN_OVERSIGHT_TRIGGER = "human_oversight_trigger"
SERIOUS_INCIDENT = "serious_incident"
ANOMALY = "anomaly"
BIAS_INDICATOR = "bias_indicator"
class ReportStatus(Enum):
IDENTIFIED = "identified"
REPORTED_TO_PROVIDER = "reported_to_provider"
PROVIDER_ACKNOWLEDGED = "provider_acknowledged"
RESOLVED = "resolved"
@dataclass
class MonitoringEvent:
event_id: str
deployment_id: str
event_type: MonitoringEventType
detected_at: datetime
description: str
affected_persons_count: Optional[int] = None
health_safety_risk: bool = False
fundamental_rights_risk: bool = False
report_status: ReportStatus = ReportStatus.IDENTIFIED
reported_to_provider_at: Optional[datetime] = None
provider_ticket_id: Optional[str] = None
resolved_at: Optional[datetime] = None
def requires_art73_reporting(self) -> bool:
"""Serious incidents require provider Art.73 MSA reporting."""
return self.event_type == MonitoringEventType.SERIOUS_INCIDENT
def is_overdue_for_reporting(self, hours_threshold: int = 24) -> bool:
if self.report_status == ReportStatus.IDENTIFIED:
age = (datetime.utcnow() - self.detected_at).total_seconds() / 3600
return age > hours_threshold
return False
class Art26MonitoringTracker:
"""Tracks Art.26(4) monitoring events for a deployer's AI system portfolio."""
def __init__(self):
self._events: list[MonitoringEvent] = []
def record_event(self, event: MonitoringEvent) -> str:
self._events.append(event)
return event.event_id
def unreported_events(self) -> list[MonitoringEvent]:
return [e for e in self._events if e.report_status == ReportStatus.IDENTIFIED]
def serious_incidents(self) -> list[MonitoringEvent]:
return [e for e in self._events if e.requires_art73_reporting()]
def overdue_reports(self, hours_threshold: int = 24) -> list[MonitoringEvent]:
return [e for e in self._events if e.is_overdue_for_reporting(hours_threshold)]
def events_for_deployment(self, deployment_id: str) -> list[MonitoringEvent]:
return [e for e in self._events if e.deployment_id == deployment_id]
def monitoring_summary(self) -> dict:
return {
"total_events": len(self._events),
"unreported": len(self.unreported_events()),
"serious_incidents": len(self.serious_incidents()),
"overdue_reports": len(self.overdue_reports()),
}
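Usage sketch:

tracker = Art26MonitoringTracker()
tracker.record_event(MonitoringEvent(
    event_id="evt-001",
    deployment_id="dep-001",
    event_type=MonitoringEventType.PERFORMANCE_DRIFT,
    detected_at=datetime.utcnow(),
    description="Weekly evaluation accuracy fell below the provider's declared bound",
))
print(tracker.monitoring_summary())
# {'total_events': 1, 'unreported': 1, 'serious_incidents': 0, 'overdue_reports': 0}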
SubstantialModificationAssessor
from dataclasses import dataclass
from typing import Optional
from enum import Enum
class ModificationRisk(Enum):
NO_RISK = "no_risk"
LOW = "low"
MEDIUM = "medium"
HIGH = "high"
CONFIRMED_SUBSTANTIAL = "confirmed_substantial"
@dataclass
class ModificationAssessment:
"""
EU AI Act Art.26(6) × Art.3(23) substantial modification assessment.
A confirmed substantial modification transforms the deployer into a new provider.
"""
assessment_id: str
system_id: str
modification_description: str
# Art.3(23) test criteria
affects_intended_purpose: bool = False
extends_annex_iii_categories: bool = False
changes_performance_beyond_declared_bounds: bool = False
changes_training_data_distribution: bool = False
modifies_architecture_or_weights: bool = False
ui_only_change: bool = False
threshold_change_within_provider_range: bool = False
legal_counsel_reviewed: bool = False
assessment_date: Optional[str] = None
assessor: Optional[str] = None
    def risk_level(self) -> ModificationRisk:
        substantial_indicators = [
            self.affects_intended_purpose,
            self.extends_annex_iii_categories,
            self.changes_performance_beyond_declared_bounds,
            self.modifies_architecture_or_weights,
        ]
        count = sum(substantial_indicators)
        if count >= 2:
            return ModificationRisk.CONFIRMED_SUBSTANTIAL
        if count == 1:
            return ModificationRisk.HIGH
        # No substantial indicators: UI-only changes and threshold changes
        # within the provider's allowed range are safe harbours per the
        # bright-line table above
        if self.ui_only_change or self.threshold_change_within_provider_range:
            return ModificationRisk.NO_RISK
        if self.changes_training_data_distribution:
            return ModificationRisk.MEDIUM
        return ModificationRisk.LOW
def requires_new_conformity_assessment(self) -> bool:
return self.risk_level() in {
ModificationRisk.HIGH,
ModificationRisk.CONFIRMED_SUBSTANTIAL
}
def deployer_becomes_provider(self) -> bool:
return self.risk_level() == ModificationRisk.CONFIRMED_SUBSTANTIAL
def required_next_steps(self) -> list[str]:
steps = []
level = self.risk_level()
if level == ModificationRisk.CONFIRMED_SUBSTANTIAL:
steps += [
"Conduct Art.43 conformity assessment as new Provider",
"Prepare Art.11 Annex IV technical documentation",
"Draw up Art.48 declaration of conformity",
"Affix Art.49 CE marking",
"Register in Art.22 EU database as Provider",
"Implement all Art.16 provider obligations",
]
elif level == ModificationRisk.HIGH:
steps += [
"Seek legal counsel on Art.3(23) substantial modification determination",
"Document modification scope and impact assessment",
"Notify provider — Art.20 corrective action may apply",
]
elif level == ModificationRisk.MEDIUM:
steps += [
"Document modification and data distribution change",
"Monitor for performance deviation from Art.15 declared bounds",
"Update Art.26(4) monitoring configuration",
]
return steps
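Usage sketch, modelling the intended-purpose change from the bright-line table above:

assessment = ModificationAssessment(
    assessment_id="mod-2026-014",
    system_id="sys-hr-screen-v2",
    modification_description="Extend employment screening model to consumer credit scoring",
    affects_intended_purpose=True,
    extends_annex_iii_categories=True,
)
print(assessment.risk_level())                 # ModificationRisk.CONFIRMED_SUBSTANTIAL
print(assessment.deployer_becomes_provider())  # True
for step in assessment.required_next_steps():
    print("-", step)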
Art.26 Compliance Checklist (40 Items)
Art.26(1) — Instructions for Use Compliance
- ☐ Provider's Art.13 instructions for use obtained and version-controlled
- ☐ Instructions-for-use compliance register created — each Art.13 provision mapped to deployment decision
- ☐ All intended-purpose restrictions reviewed — deployment scope confirmed within provider's authorised use
- ☐ Prohibited uses documented — deployment confirmed outside all provider-specified prohibited contexts
- ☐ Technical prerequisites audit completed — infrastructure meets provider's specified requirements
- ☐ Human oversight measures specified in Art.13(3)(e) implemented and documented
Art.26(2) — Worker Information
- ☐ Employment AI context assessed — does this deployment affect workers?
- ☐ Worker notification obligation determined (YES/NO for this deployment)
- ☐ Worker representatives (trade union, works council) notified before deployment if applicable
- ☐ Individual worker notification completed before system operational use
- ☐ National labour law consultation requirements assessed (Germany §87 BetrVG, France L2312-38, Netherlands WOR)
- ☐ Notification records retained (timing, channel, content, acknowledgment)
Art.26(3) — Natural Person Information
- ☐ Decision context assessed — does system make or assist decisions about natural persons?
- ☐ Natural person notification template created and approved
- ☐ GDPR Art.22 overlay assessed — is a solely automated decision with significant effects involved?
- ☐ Combined GDPR Art.22 + AI Act Art.26(3) disclosure drafted if both apply
- ☐ Art.86 right to explanation disclosure included in notification
- ☐ Notification delivery channel and timing documented
Art.26(4) — Monitoring and Feedback
- ☐ Monitoring infrastructure deployed for performance drift detection
- ☐ Out-of-distribution input detection implemented
- ☐ Serious incident identification criteria documented against Art.3(49) definition
- ☐ Upstream reporting procedure to provider/importer established with SLA
- ☐ Art.26(4) report log maintained with timestamps and provider acknowledgments
- ☐ Monitoring review cadence established (minimum quarterly, ideally monthly)
Art.26(5) — Bias Monitoring
- ☐ Relevant outcome data assessed — do deployer's data assets support provider's bias monitoring?
- ☐ Data sharing agreement for Art.26(5) bias data included in provider contract if applicable
- ☐ GDPR legal basis for sharing person-linked outcome data assessed
Art.26(6) — Substantial Modification Assessment
- ☐ Substantial modification assessment process documented using Art.3(23) criteria
- ☐ All system modifications reviewed against the assessment before deployment
- ☐ Records of "no substantial modification" determinations retained with rationale
- ☐ Legal counsel review triggered for any HIGH-risk modification determination
- ☐ If substantial modification confirmed: provider obligations under Art.16 assumed
Art.26(7) — Operational Logging
- ☐ Art.12 system-generated logs retained for minimum 6 months
- ☐ Log retention infrastructure identified and configured
- ☐ Log storage jurisdiction assessed — EU-native storage preferred to eliminate CLOUD Act exposure
- ☐ GDPR Art.9 constraints assessed for biometric log content
- ☐ Log retention schedule documented with applicable exceptions (law enforcement, longer periods)
Art.26(8) — FRIA
- ☐ Deployer type assessed — public authority or private entity performing public function?
- ☐ Annex III category assessed against Art.27(1) FRIA trigger list
- ☐ If FRIA required: Art.27 assessment completed before deployment and registered in Art.22 EU database
Enforcement Exposure
Art.26 violations are sanctionable under AI Act Art.99:
- Art.99(4): Non-compliance with deployer obligations under Art.26: up to €15 million or 3% of total worldwide annual turnover, whichever is higher (for SMEs, whichever is lower)
- Art.99(5): Supplying incorrect, incomplete, or misleading information to notified bodies or national competent authorities: up to €7.5 million or 1% of total worldwide annual turnover, whichever is higher
Notable enforcement dynamics for deployers:
- Art.26(6) substantial modification is a high-consequence gap. A deployer who fine-tunes a model beyond the substantial modification threshold and does not assume provider obligations is simultaneously non-compliant with Art.26(6) and all of Chapter III Section 2. Enforcement can address both failures.
- Art.26(2) worker notification failures are directly visible to enforcement. Unlike technical gaps that require investigation, a failure to notify workers' councils is documented in national labour law records and may be reported directly by affected workers.
- Art.26(7) log retention gaps appear immediately in MSA investigations. When an Art.21 cooperation request arrives, missing Art.26(7) logs are immediately apparent, and the 6-month window is short enough that a one-cycle delay in compliance means the earliest investigation-relevant logs are already gone.
- Art.26(8) FRIA failures block deployment legitimacy. A public authority deploying a high-risk AI system without completing the Art.27 FRIA is not merely non-compliant: the deployment itself is unauthorised under the AI Act framework.
What to Do Now
If you're a private sector deployer (August 2026 deadline):
- Inventory your high-risk AI integrations now. List every third-party AI API or system your products use that could fall within Annex III categories. Assume you are a deployer until proven otherwise.
- Review provider Art.13 instructions for use. If your provider has not yet published Art.13-compliant instructions, request them. Contract for delivery before the August 2026 deadline.
- Build the Art.26(6) substantial modification assessment into your ML change management process. Every model update, fine-tuning run, and scope expansion should be assessed against Art.3(23) before deployment.
- Deploy monitoring for Art.26(4) compliance. Monitoring infrastructure that tracks performance drift, anomalies, and serious incident indicators must be operational before the August 2026 deadline — not planned for Q3 2026.
- Assess log storage jurisdiction for Art.26(7). If your operational logs are currently stored on US-headquartered infrastructure, evaluate EU-native alternatives to eliminate CLOUD Act parallel access risk to Art.26(7) evidence.
If you're a public authority deployer:
- Start Art.27 FRIA immediately for any planned high-risk AI deployment. FRIA completion is a pre-deployment prerequisite. Given procurement timelines, FRIA work should start at the procurement specification stage, not after system delivery.
- Register Art.22(3) deployer registration in the EU AI Office database before putting the system into service.
- Ensure worker notification under Art.26(2) is built into your HR change management processes. Public sector labour law requirements may be more stringent than private sector equivalents.
See Also
- EU AI Act Obligations for Providers of General-Purpose AI Models: Developer Guide (GPAI provider obligations under Art.53/55 that deployers integrating GPAI models must understand)
- EU AI Act Art.22 EU Database of High-Risk AI Systems: Developer Guide
- EU AI Act Art.21 Cooperation with Competent Authorities: Developer Guide
- EU AI Act Art.14 Human Oversight: Developer Guide
- EU AI Act Art.13 Transparency Obligations: Developer Guide
- EU AI Act Art.12 Logging & Record-Keeping: Developer Guide